 <html><head>
 <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
   <title>GSoC Application</title>
      <style type="text/css">
        body {
          width:60%;
          text-align:justify;
        }
      </style>
   </head><body>
 <h2><a href="http://torproject.org/">Tor Project</a> - DNSEL Rewrite</h2>
 <b>Google Summer of Code Student Application</b> <br>
 Harry Bock &lt;hbock AT ele DOT uri DOT edu&gt;
 <blockquote>
 <h2>Abstract:</h2>
 
 <p>
   The TorDNSEL project is concerned with identifying individual hosts
   as valid and accessible Tor exit relays.  Each Tor exit relay has an
   associated exit policy governing what traffic may leave the Tor
   circuit and go out as requests to the internet.  A public database
   that can be easily queried or scraped would be of huge benefit to
   the Tor community and to services that are interested in whether
   clients originate from the Tor network, such as Wikipedia and IRC
   networks.
 </p>
 </blockquote>
 
 <ol>
 <li>
   <strong>
     What project would you like to work on? Use our ideas lists as a
     starting point or make up your own idea. Your proposal should
     include high-level descriptions of what you're going to do, with
     more details about the parts you expect to be tricky. Your
     proposal should also try to break down the project into tasks of a
     fairly fine granularity, and convince us you have a plan for
     finishing it.
   </strong>
   <p>
      My primary interest is in the TorDNSEL rewrite.  TorDNSEL is
      currently unmaintained and written in Haskell; I would like to
      rework it from the ground up in Python, using the TorFlow interface.
   </p>
   <ul>
   <li>
     Prior to actually building the new TorDNSEL, some time must be
     spent researching and testing strategies for identifying and verifying
     recent Tor exit relays.
     <ul>
       <li>
 	Much of this information is available straight from the
 	cached-descriptors file distributed to running Tor relays, but
 	not all of it is accurate or up-to-date.
       </li>
       <li>
	<p>
	  To ensure the honesty of advertised exit nodes, the program
	  must actively build circuits over the Tor network to the
	  intended exit node and verify the IP address and exit policies
	  listed in the cached router list.  This will be accomplished via
	  TorFlow, most likely using the NetworkScanners tool SoaT.  If
	  more detailed checking is required than SoaT provides, it will
	  be modified and extended to suit the new requirements.
	</p>
       </li>
       <li>
 	<p>
 	  Since new relays entering the Tor network are almost
 	  immediately available for use, it is important that new
 	  relays are checked and added as quickly as possible.
 	  Testing ExitPolicy honesty may be time-consuming for
 	  certain relays, destinations, and services.
 	</p>
	<p>
	  To reduce the latency of exit honesty checking, hosts that have
	  complex exit policies may be checked incrementally; for
	  example, if popular services such as http/https/domain/ssh
	  are allowed by the relay's ExitPolicy, these should be
	  checked first and the relay marked as "honest
	  (preliminary)", so that partial results may be listed
	  earlier pending more thorough circuit testing.  (The sketch
	  following this list illustrates both the verification and the
	  port-ordering ideas.)
	</p>
       </li>
     </ul>
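    <p>
      As a rough illustration of the two ideas above, below is a minimal
      sketch (in Python, like the eventual rewrite) of checking a relay's
      advertised exit address over its most popular allowed ports first.
      It assumes a local Tor client listening on the standard SOCKS port
      9050, the third-party SocksiPy module, and a hypothetical
      plain-text IP echo service at check.example.org; it also glosses
      over the fact that a real test would speak the appropriate protocol
      on each port.  The real tests will be built on TorFlow/SoaT rather
      than raw sockets.
    </p>
    <pre>
import socks  # SocksiPy, a third-party SOCKS module (assumed installed)

# Hypothetical data for one relay, as parsed from cached-descriptors.
ADVERTISED_IP = "192.0.2.10"

# Ports ordered by popularity: check http/https/domain/ssh first so an
# "honest (preliminary)" result can be published early.
PRIORITY_PORTS = [80, 443, 53, 22]

def observed_exit_ip(host, port):
    """Fetch our apparent source address through the local Tor SOCKS
    proxy (127.0.0.1:9050) from a plain-text IP echo service."""
    s = socks.socksocket()
    s.setproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
    s.settimeout(60)
    s.connect((host, port))
    s.sendall("GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host)
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    # The response body is the echoed client IP address.
    return "".join(chunks).rsplit("\r\n\r\n", 1)[-1].strip()

# NOTE: this sketch does not pin the circuit to the relay under test;
# the real implementation will do so through the Tor control port
# (via TorFlow) before making each request.
for port in PRIORITY_PORTS:
    ip = observed_exit_ip("check.example.org", port)
    print("port %d: advertised %s, observed %s" % (port, ADVERTISED_IP, ip))
    </pre>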
   </li><li>
     Once the desired method for compiling and verifying exit relays
     is tested, a formal design specification for the following operations
     must be compiled:
     <ol>
      <li><p>Scraping of all known exit relays and their exit
	policies.</p>  As noted in the Tor dir-spec v3, in the future
	it will not scale to have every Tor client/directory cache know
	the IP of every other router.  We need to be able to accurately
	obtain this data, up-to-date and in bulk, from an authoritative
	source.  The DNSEL should be able to verify all exit relays
	in as short a time span as possible.
       </li>
       <li>
 	Active testing for valid exit IP addresses and policies.
       </li>
       <li>
 	Parameters and raw format of the resulting data.
       </li>
      <li>
	Formal mechanisms and formats for access by consumers of this
	data.  As a minimal starting point, SQLite and JSON should be
	supported for bulk data sets, as both are well-structured,
	standardised formats with cross-platform, open-source
	support in the most widely used programming languages.  Once this
	data is readily available, it would be extremely useful to provide
	a simple API in Python to improve the ease of use and integration of
	this data with existing services.  (A sketch of one possible
	record in both formats follows this list.)
      </li>
     </ol>
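    <p>
      To make item 4 concrete, below is a minimal sketch of what a
      single exit relay record might look like in the two proposed bulk
      formats.  The field names and schema are purely illustrative;
      pinning down the actual layout is exactly what the design
      specification is for.
    </p>
    <pre>
import json
import sqlite3

# Illustrative record for one verified exit relay (fake fingerprint).
record = {
    "fingerprint": "0123456789ABCDEF0123456789ABCDEF01234567",
    "nickname": "exampleRelay",
    "exit_address": "192.0.2.10",
    "published": "2010-04-01 12:00:00",
    "verified": True,  # passed active circuit testing
    "exit_policy": ["accept *:80", "accept *:443", "reject *:*"],
}

# JSON export: one self-describing object per relay.
print(json.dumps(record, indent=2))

# SQLite export: the same data, normalized into two tables.
db = sqlite3.connect("exitlist.db")
db.execute("""CREATE TABLE IF NOT EXISTS relay (
    fingerprint TEXT PRIMARY KEY, nickname TEXT,
    exit_address TEXT, published TEXT, verified INTEGER)""")
db.execute("""CREATE TABLE IF NOT EXISTS exit_policy (
    fingerprint TEXT, rule TEXT)""")
db.execute("INSERT OR REPLACE INTO relay VALUES (?, ?, ?, ?, ?)",
           (record["fingerprint"], record["nickname"],
            record["exit_address"], record["published"],
            int(record["verified"])))
for rule in record["exit_policy"]:
    db.execute("INSERT INTO exit_policy VALUES (?, ?)",
               (record["fingerprint"], rule))
db.commit()
    </pre>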
   </li>
   <li>
     <p>
      Currently, users can check the TorBulkExitList on
      check.torproject.org or perform DNS queries against
      exitlist.torproject.org, but this is not ideal for all consumers
      of this data: these queries are expensive to perform, and they
      must be made one at a time over the network.  While this is less
      of an issue for services that need to perform infrequent exit
      checks, I would argue that this mechanism can actually harm
      anonymity.  An adversary that can track these queries (one with
      access to the network of the querying service, or one running a
      rogue DNSEL implementation, should the DNSEL become more
      distributed) can determine, for example, which exit nodes are
      currently carrying a large number of IRC connections.  This
      matters most for services that perform frequent queries on a wide
      range of IP addresses.  By encouraging these queries to be done
      locally, we can improve network latency, throughput, and
      anonymity together.
     </p>
     <p>
      On the other hand, requiring services to download the entire
      exit set when they only need to query a few addresses daily
      would waste valuable resources on both ends.  Thus, both
      single-exit queries and bulk exit description lists must be
      provided; raw queries would be used by services that are simply
      doing manual checks of possible Tor addresses, or infrequent
      automated checks.
     </p>
     <p>
       The primary consumers of bulk data are services that need
       to do frequent automated checks and benefit strongly from
       local caching of data. Some prominent examples would be:
       </p><ul>
 	<li>The Tor project itself (see TorBulkExitList.py,
 	  check.torproject.org, etc.).</li>
 	<li>IRC networks such as Freenode and OFTC that
 	  automatically scrub any potentially identifiable
 	  information from WHOIS queries.</li>
 	<li>Social networks or collaborative communities such as
 	  Wikipedia that would benefit from knowing if a
 	  particular IP address is shared, allowing them to
 	  (possibly) be more lenient towards abuse from
 	  anonymizing services.
 	</li>
       </ul>
     <p></p>
     <p>
      A particularly important goal is to ensure that Tor users
      who also happen to run Tor relays are not automatically
      blocked by services simply because they are relays or exit
      nodes.  If a service is able to ascertain that an IP address
      corresponds to a Tor relay, but that relay's exit policy would
      not allow Tor traffic to reach the service, ideally the service
      should not block access to its resources.  The cheaper and
      easier it is for a service operator to validate this kind of
      information, the more likely the service is to use it and
      collaborate with the Tor community.  The more the service is
      used, the more likely we are to get feedback, whether it be
      about the data format, false positives or negatives,
      invalid/incorrect exit policies, etc.
     </p>
   </li>
   <li>
     The TorDNSEL project itself will likely benefit from being broken
     into several components that interact with each other and operate on
     exit list data sets.
     <ul>
      <li>Data scraper and honesty checking daemon: constantly runs in
      the background, fetching new information about Tor relays and
      their exit policies and updating the raw journal.  As new relays
      come in, they are accessed and verified for honesty and
      correctness.  As relay data becomes stale according to a
      configurable TTL, relays are re-checked.
      </li>
       <li>
 	Data compiler: Manages, updates, and merges exit lists.  Can
 	import and export data from bulk formats and provide
 	statistics about data sets.
       </li>
      <li>
	Single-query engine: allows applications and external services
	to query a single IP address and exit policy at a time.  It can
	be exposed via several convenient methods, starting out
	simple (e.g., Python API calls or HTTP requests) and
	eventually working up to RBL-style DNS queries, like the
	original DNSEL, should time permit.  (A sketch of such an API
	follows this list.)
      </li>
       <li>
	Data distribution: How do clients actually access bulk data?
	This doesn't necessarily have to be its own application, but
	it must be feasible for clients to easily scrape and update
	their local caches.  The data may be served by a static web
	server, like lighttpd, or via other mechanisms (SFTP, or a DNS
	zone transfer as a more convoluted example).
       </li>
     </ul>
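    <p>
      To illustrate the single-query engine, here is a sketch of the
      kind of Python API it might expose, backed by the locally cached
      SQLite data set from the earlier sketch.  The function names and
      the simplified policy grammar are hypothetical.
    </p>
    <pre>
import sqlite3

def policy_allows(rules, port):
    """First-match evaluation of a simplified exit policy consisting of
    'accept *:PORT' / 'reject *:*' rules; address matching and port
    ranges are omitted for brevity."""
    for rule in rules:
        action, _, target = rule.partition(" ")
        port_spec = target.split(":", 1)[1]
        if port_spec == "*" or port_spec == str(port):
            return action == "accept"
    return False  # treat unmatched traffic as rejected

def is_tor_exit(ip, dest_port=None, db_path="exitlist.db"):
    """Return True if `ip` is a verified Tor exit whose policy allows
    traffic to `dest_port` (any port if dest_port is None)."""
    db = sqlite3.connect(db_path)
    row = db.execute("SELECT fingerprint FROM relay "
                     "WHERE exit_address = ? AND verified = 1",
                     (ip,)).fetchone()
    if row is None:
        return False
    if dest_port is None:
        return True
    rules = [r[0] for r in db.execute(
        "SELECT rule FROM exit_policy WHERE fingerprint = ?", (row[0],))]
    return policy_allows(rules, dest_port)
    </pre>
    <p>
      An IRC network could then answer "is this client a Tor exit that
      can reach port 6667?" with a single local call, never exposing the
      query on the wire.
    </p>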
   </li>
   <li>
     As an aside, there are several open and accepted Tor proposals
     that are relevant to the work I will complete:
     <ul>
       <li>140 <b>Provide diffs between consensuses</b> (ACCEPTED)</li>
       <li>146 <b>Add new flag to reflect long-term stability</b> (OPEN)</li>
       <li>147 <b>Eliminate the need for v2 directories in generating v3 directories</b> (ACCEPTED)</li>
       <li>159 <b>Exit Scanning</b> (OPEN)</li>
     </ul>
   </li>
 
   <!--
       <li>
     From what I have researched and discussed so far, the tricky parts
     will be verifying that listed exit nodes and their exit policies
     are correct (as advertised) and presenting this data in a useful
     way to dnsel consumers.  It was originally assumed that consumers
     of dnsel records would want to use a DNS RBL-style interface, and
     while this interface has been useful to (for example) IRC server
     operators, these direct queries end up revealing more about the
     usage of exit nodes than is necessary.
   </li>
       -->
 </ul>
 <p></p>
 <b>Estimated timeline:</b>
 <p>For each weekly milestone, appropriate documentation should be written to
 coincide with the completed work.  For example, end-user tools should have
 manual pages at the very least, and preferably include a LaTeX manual.  Milestones
 that are primarily experimental in nature should include complete descriptions and
 proposals in plain-text where appropriate.  All source code will be thoroughly
 commented and include documentation useful to developers.
 </p> 
 <p>
  <b>April 26 - May 24 (Pre-SoC)</b>: Get up to speed with the Tor
  directory and caching architecture, pick apart the existing Haskell
  implementation of TorDNSEL, and master TorFlow.
 </p>
 <p>
   <b>May 31 (end of week 1)</b>: Have a working mechanism for
   compiling as much testable information about exit relays as
   possible. This data must be easily accessible for subsequent work.
   This may be taken, adapted, or abstracted from existing data directory crawling
   in TorFlow.
 </p>
 <p>
  <b>June 14 (week 3)</b>: Working implementation of tests using TorFlow, especially
  the ExitAuthority tools.  This will probably be the most time-consuming period; it
  may take up to a week longer than anticipated.
 </p>
 <p>
   <b>June 21 (week 4)</b>: Be able to produce consistent, constantly updating exit lists
   with tested and untested exit policies listed.  Find Tor developer guinea pigs to
   test and hunt for glaring holes in exit relay honesty testing and verification. :)
 </p>
 <p>
   <b>June 28 (week 5)</b>: Begin proof-of-concept production of bulk
   data formats (raw, SQLite and JSON), all of which should be similar
   in format.  Consultations should be made with consumers of such data
   (Freenode, Wikipedia, etc.) to ensure the current data presentation
   is not overreaching or missing information that would be useful to
   them.  
 </p>
 <p>
   <b>July 12 (week 7)</b>: Integrate existing functionality and data access methods into
   a Python API that is usable for consumers and the DNSEL application itself.  Style should
   be similar to TorFlow where possible.
 </p><p>
   <b>July 16 (midterm evals):</b> Completed specifications of TorDNSEL
   operation, basic data formats, and delivery methods.  Completed
   first proof-of-concept implementation.  First major review with
   mentors and as much of the Tor developer community as time permits.
 </p>
 <p>
  <b>July 19 (week 8)</b>: Work on designing and testing exit list cache update mechanisms.
  Start with something similar to cached-descriptors.new journaling, and work up
  to something useful for other data formats.  Integrate the mechanism into the API.
  (A minimal sketch of such a journal follows.)
 </p>
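<p>
  To illustrate what I have in mind, here is a minimal sketch of such a
  journal, by analogy with Tor's cached-descriptors.new: updates are
  appended cheaply as they arrive and periodically folded into a
  compacted snapshot.  The file names and JSON-lines layout are
  illustrative only.
</p>
<pre>
import json
import os

JOURNAL = "exitlist.journal"  # analogous to cached-descriptors.new
SNAPSHOT = "exitlist.json"    # compacted full data set

def append_update(record):
    """Append one updated relay record; cheap for frequent updates."""
    with open(JOURNAL, "a") as fp:
        fp.write(json.dumps(record) + "\n")

def compact():
    """Fold the journal into the snapshot (newest record per
    fingerprint wins), then truncate the journal."""
    relays = {}
    if os.path.exists(SNAPSHOT):
        relays = json.load(open(SNAPSHOT))
    if os.path.exists(JOURNAL):
        for line in open(JOURNAL):
            rec = json.loads(line)
            relays[rec["fingerprint"]] = rec
    json.dump(relays, open(SNAPSHOT, "w"))
    open(JOURNAL, "w").close()
</pre>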
 <p>
   <b>July 26 (week 9)</b>: Solidify main scrape/check application and
   perform as much real-world testing as time permits, adjusting for
   major setbacks, if any.
 </p>
 <p>
   <b>August 2 (week 10)</b>: Make adjustments based on feedback from
   (hopefully) several real-world consumers of TorDNSEL data.
   Generally polish and improve usability of core application(s).
 </p>
 <p>
  <b>August 9 ("pencils down")</b>: Start pumping out documentation and perform a
  comprehensive code review.
 </p>
 <p>
  <b>August 16 ("okay really, pencils down")</b>: Major remaining kinks
   should be ironed out; polish specification and documentation and
   begin writing final evaluations. Plan for future maintenance of
   TorDNSEL.
 </p>
 </li>
 
 <li>
   <strong> Point us to a code sample: something good and clean to
   demonstrate that you know what you're doing, ideally from an
   existing project.
   </strong>
   <p>
     Code from almost any project I've worked on is available at
     http://git.spanning-tree.org/.  Some of my better code:
     </p><ul>
       <li><a href="http://git.spanning-tree.org/index.cgi/grizzlor/">libgrizzlor</a> [C, Common Lisp], an abstraction layer for the
 	SILC client library focused on bots.
       </li>
       <li><a href="http://git.spanning-tree.org/index.cgi/rigel/tree/">rigel</a> [C], a UNIX PIC16/PIC18 program loader for use with the FIRST robotics competition.</li>
       <li><a href="http://git.spanning-tree.org/index.cgi/nis/">Network Subsystem Inventory</a> [Python], a Django web application for keeping an inventory of network resources on a large university network. </li>
       <li><a href="http://git.spanning-tree.org/index.cgi/Periscope/">Periscope</a>
  [Common Lisp], a network monitoring application inspired by IP-Audit, 
 designed from the ground up to work with the Argus netflow application.</li>
     </ul>
   <p></p>
 </li>
 <li><strong> Why do you want to work with The Tor Project / EFF in particular?</strong>
 <p>
The Tor Project interests me primarily from architectural and
information security perspectives; my primary focus in information
security has always been authentication and authorization: verifying
the identity of a user to explicitly or implicitly control access to
machine and network resources.  The goal of all forms of public-key
and secure-hash cryptography is the authentication of a third party or
of data, essentially pinning an identity down.
 </p>
 <p>
 Tor greatly interests me because it has the opposite goal; it tries to
 ensure that pinning down the identity of any particular user is
 (ideally) impossible or at least greatly hindered for any non-global
 adversary.  Protecting the rights of network users by preserving their
 anonymity is an incredibly important and complicated goal, and Tor's
 role in increasing anonymity of internet access in the face of many
 types of adversaries is extremely valuable.  To this end, I hope that
 my contributions will be found useful by the Tor project, its users,
 and those working to protect these end users.
 </p>
 </li>
 <li>
   <strong>
     Tell us about your experiences in free software development
     environments. We especially want to hear examples of how you have
     collaborated with others rather than just working on a project by
     yourself.
   </strong>
     <p>While nearly all of the projects I've worked on have been free
     software, my experience working directly with the free software
     community at large is minimal.  I have contributed briefly to the
     KDE project, working on their display configuration application,
     and submitted patches to other open source projects (QoSient's
     Argus netflow tools and Google's ipaddr-py, for example).  I have
     collaborated with various universities in New England on
     development of the Nautilus project (http://nautilus.oshean.org/)
     and its main subproject, Periscope
     (http://nautilus.oshean.org/wiki/Periscope), while working at the
     OSHEAN non-profit consortium. </p>
     <p>
 I sincerely look forward to working with the vibrant development
 community of the Tor project and hope to gain more experience in
 collaborating with an experienced group of developers.
 </p>
 </li>
 
 <li>
   <strong>
     Will you be working full-time on the project for the summer, or
     will you have other commitments too (a second job, classes, etc)?
     If you won't be available full-time, please explain, and list
     timing if you know them for other major deadlines
     (e.g. exams). Having other activities isn't a deal-breaker, but we
     don't want to be surprised.
   </strong>
   <p>
     I will be working part-time at the University of Rhode Island
     Information Security Office, and will have one summer class for five
     weeks starting in late May.  I don't anticipate either will
     significantly affect my involvement with the Tor project.
   </p>
 </li>
 <li>
   <strong>
     Will your project need more work and/or maintenance after the
     summer ends? What are the chances you will stick around and help
     out with that and other related projects?
   </strong>
 <p>
While I am confident I can produce a working initial implementation of
the DNSEL in the time allotted, I anticipate it will need more work at
the end of summer.  One of my primary goals for the project is to
make it easier to maintain, as its operation will have to be adjusted
to fit changes in the Tor architecture.  Making the project more
accessible to other maintainers will allow for greater collaboration
and improvement where development on the current
implementation has stagnated.
 </p>
 </li>
 <li>
   <strong>
     What is your ideal approach to keeping everybody informed of your
     progress, problems, and questions over the course of the project? Said
     another way, how much of a "manager" will you need your mentor to be?
   </strong>
   <p>
     I will do my best to communicate with my mentors and the Tor developer
     community at large as frequently and directly as possible, via
     #tor-dev and the mailing lists.  I also hope to inform others of more
     major milestones in the project via a blog or web page, and keep
     detailed documentation and progress updates on the Tor wiki.
   </p>
 </li><li>
   <strong>What school are you attending? What year are you, and what's 
 your major/degree/focus? If you're part of a research group, which one?</strong>
   <p>
     I am currently attending the University of Rhode Island.  This is my
  fourth year in college and second at URI; I am a Computer Engineering 
 major, intending to graduate next year and obtain my masters degree the 
 following year.  My primary interests are low-level software development
  and systems programming, networking, information security, and signal 
 processing.
   </p>
 </li>
 <li>
   <strong>How can we contact you to ask you further questions? Google 
 doesn't share your contact details with us automatically, so you should 
 include that in your application. In addition, what's your IRC nickname?
  Interacting with us on IRC will help us get to know you, and help you 
 get to know our community.</strong>
   <p>
     You can contact me at hbock@ele.uri.edu; my nickname on IRC is <b>hbock</b>.
   </p>
 </li>
 </ol>
 </body></html>