## translation metadata # Revision: $Revision$ # Translation-Priority: 2-medium #include "head.wmi" TITLE="Tor Project: FAQ" CHARSET="UTF-8" <div id="content" class="clearfix"> <div id="breadcrumbs"> <a href="<page index>">Home » </a> <a href="<page docs/documentation>">Documentation » </a> <a href="<page docs/faq>">FAQ</a> </div> <div id="maincol"> <!-- PUT CONTENT AFTER THIS TAG --> <h1>Tor FAQ</h1> <hr> <p>General questions:</p> <ul> <li><a href="#WhatIsTor">What is Tor?</a></li> <li><a href="#Torisdifferent">How is Tor different from other proxies?</a></li> <li><a href="#CompatibleApplications">What programs can I use with Tor?</a></li> <li><a href="#WhyCalledTor">Why is it called Tor?</a></li> <li><a href="#Backdoor">Is there a backdoor in Tor?</a></li> <li><a href="#DistributingTor">Can I distribute Tor?</a></li> <li><a href="#SupportMail">How can I get support?</a></li> <li><a href="#Forum">Is there a Tor forum?</a></li> <li><a href="#WhySlow">Why is Tor so slow?</a></li> <li><a href="#Funding">What would The Tor Project do with more funding?</a></li> <li><a href="#Metrics">How many people use Tor? How many relays or exit nodes are there?</a></li> <li><a href="#SSLcertfingerprint">What are your SSL certificate fingerprints?</a></li> </ul> <p>Compilation and Installation:</p> <ul> <li><a href="#HowUninstallTor">How do I uninstall Tor?</a></li> <li><a href="#PGPSigs">What are these "sig" files on the download page?</a></li> <li><a href="#GetTor">Your website is blocked in my country. How do I download Tor?</a></li> <li><a href="#CompileTorWindows">How do I compile Tor under Windows?</a></li> <li><a href="#VirusFalsePositives">Why does my Tor executable appear to have a virus or spyware?</a></li> <li><a href="#LiveCD">Is there a LiveCD or other bundle that includes Tor?</a></li> </ul> <p>Tor Browser Bundle:</p> <ul> <li><a href="#TBBFlash">Why can't I view videos on YouTube and other Flash-based sites?</a></li> <li><a href="#TBBSocksPort">I'm on OSX or Linux and I want to run another application through the Tor launched by Tor Browser Bundle. How do I predict my Socks port?</a></li> <li><a href="#TBBPolipo">I need an HTTP proxy. Where did Polipo go?</a></li> <li><a href="#TBBOtherExtensions">Can I install other Firefox extensions?</a></li> <li><a href="#TBBJavaScriptEnabled">Why is NoScript configured to allow JavaScript by default in the Tor Browser Bundle? Isn't that unsafe?</a></li> <li><a href="#TBBCanIBlockJS">I'm an expert! (No, really!) Can I configure NoScript to block JavaScript by default?</a></li> <li><a href="#TBBOtherBrowser">I want to use Chrome/IE/Opera/etc with Tor.</a></li> <li><a href="#TBBCloseBrowser">I want to leave Tor Browser Bundle running but close the browser.</a></li> <li><a href="#GoogleCaptcha">Google makes me solve a Captcha or tells me I have spyware installed.</a></li> <li><a href="#GmailWarning">Gmail warns me that my account may have been compromised.</a></li> </ul> <p>Advanced Tor usage:</p> <ul> <li><a href="#torrc">I'm supposed to "edit my torrc". 
What does that mean?</a></li> <li><a href="#Logs">How do I set up logging, or see Tor's logs?</a></li> <li><a href="#DoesntWork">Tor is running, but it's not working correctly.</a></li> <li><a href="#VidaliaPassword">Tor/Vidalia prompts for a password at start.</a></li> <li><a href="#ChooseEntryExit">Can I control which nodes (or country) are used for entry/exit?</a></li> <li><a href="#FirewallPorts">My firewall only allows a few outgoing ports.</a></li> </ul> <p>Running a Tor relay:</p> <ul> <li><a href="#RelayFlexible">How stable does my relay need to be?</a></li> <li><a href="#ExitPolicies">I'd run a relay, but I don't want to deal with abuse issues.</a></li> <li><a href="#RelayOrBridge">Should I be a normal relay or bridge relay?</a></li> <li><a href="#MultipleRelays">I want to run more than one relay.</a></li> <li><a href="#RelayMemory">Why is my Tor relay using so much memory?</a></li> <li><a href="#WhyNotNamed">Why is my Tor relay not named?</a></li> <li><a href="#RelayDonations">Can I donate for a relay rather than run my own?</a></li> </ul> <p>Running a Tor hidden service:</p> <p>Anonymity and Security:</p> <ul> <li><a href="#KeyManagement">Tell me about all the keys Tor uses.</a></li> <li><a href="#EntryGuards">What are Entry Guards?</a></li> </ul> <p>Alternate designs that we don't do (yet):</p> <ul> <li><a href="#EverybodyARelay">You should make every Tor user be a relay.</a></li> <li><a href="#TransportIPnotTCP">You should transport all IP packets, not just TCP packets.</a></li> <li><a href="#HideExits">You should hide the list of Tor relays, so people can't block the exits.</a></li> </ul> <p>Abuse:</p> <ul> <li><a href="#Criminals">Doesn't Tor enable criminals to do bad things?</a></li> <li><a href="#RespondISP">How do I respond to my ISP about my exit relay?</a></li> </ul> <p>For other questions not yet on this version of the FAQ, see the <a href="<wikifaq>">wiki FAQ</a> for now.</p> <hr> <a id="General"></a> <a id="WhatIsTor"></a> <h3><a class="anchor" href="#WhatIsTor">What is Tor?</a></h3> <p> The name "Tor" can refer to several different components. </p> <p> The Tor software is a program you can run on your computer that helps keep you safe on the Internet. Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location. This set of volunteer relays is called the Tor network. You can read more about how Tor works on the <a href="<page about/overview>">overview page</a>. </p> <p> The Tor Project is a non-profit (charity) organization that maintains and develops the Tor software. </p> <hr> <a id="Torisdifferent"></a> <h3><a class="anchor" href="#Torisdifferent">How is Tor different from other proxies?</a></h3> <p> A typical proxy provider sets up a server somewhere on the Internet and allows you to use it to relay your traffic. This creates a simple, easy to maintain architecture. The users all enter and leave through the same server. The provider may charge for use of the proxy, or fund their costs through advertisements on the server. In the simplest configuration, you don't have to install anything. You just have to point your browser at their proxy server. Simple proxy providers are fine solutions if you do not want protections for your privacy and anonymity online and you trust the provider not to do bad things. 
Some simple proxy providers use SSL to secure your connection to them. This may protect you against local eavesdroppers, such as those at a cafe with free wifi Internet. </p> <p> Simple proxy providers also create a single point of failure. The provider knows who you are and where you browse on the Internet. They can see your traffic as it passes through their server. In some cases, they can even see inside your encrypted traffic as they relay it to your banking site or to ecommerce stores. You have to trust the provider isn't doing any number of things, such as watching your traffic, injecting their own advertisements into your traffic stream, and recording your personal details. </p> <p> Tor passes your traffic through at least 3 different servers before sending it on to the destination. Because there's a separate layer of encryption for each of the three relays, Tor does not modify, or even know, what you are sending into it. It merely relays your traffic, completely encrypted through the Tor network and has it pop out somewhere else in the world, completely intact. The Tor client is required because we assume you trust your local computer. The Tor client manages the encryption and the path chosen through the network. The relays located all over the world merely pass encrypted packets between themselves.</p> <p> <dl> <dt>Doesn't the first server see who I am?</dt><dd>Possibly. A bad first of three servers can see encrypted Tor traffic coming from your computer. It still doesn't know who you are and what you are doing over Tor. It merely sees "This IP address is using Tor". Tor is not illegal anywhere in the world, so using Tor by itself is fine. You are still protected from this node figuring out who you are and where you are going on the Internet.</dd> <dt>Can't the third server see my traffic?</dt><dd>Possibly. A bad third of three servers can see the traffic you sent into Tor. It won't know who sent this traffic. If you're using encryption, such as visiting a bank or e-commerce website, or encrypted mail connections, etc, it will only know the destination. It won't be able to see the data inside the traffic stream. You are still protected from this node figuring out who you are and if using encryption, what data you're sending to the destination.</dd> </dl> </p> <hr> <a id="CompatibleApplications"></a> <h3><a class="anchor" href="#CompatibleApplications">What programs can I use with Tor?</a></h3> <p> There are two pieces to "Torifying" a program: connection-level anonymity and application-level anonymity. Connection-level anonymity focuses on making sure the application's Internet connections get sent through Tor. This step is normally done by configuring the program to use your Tor client as a "socks" proxy, but there are other ways to do it too. For application-level anonymity, you need to make sure that the information the application sends out doesn't hurt your privacy. (Even if the connections are being routed through Tor, you still don't want to include sensitive information like your name.) This second step needs to be done on a program-by-program basis, which is why we don't yet recommend very many programs for safe use with Tor. </p> <p> Most of our work so far has focused on the Firefox web browser. The bundles on the <a href="<page download/download>">download page</a> automatically install the <a href="<page torbutton/index>">Torbutton Firefox extension</a> if you have Firefox installed. 
As of version 1.2.0, Torbutton now takes care of a lot of the connection-level and application-level worries. </p> <p> There are plenty of other programs you can use with Tor, but we haven't researched the application-level anonymity issues on them well enough to be able to recommend a safe configuration. Our wiki has a list of instructions for <a href="<wiki>doc/TorifyHOWTO">Torifying specific applications</a>. There's also a <a href="<wiki>doc/SupportPrograms">list of applications that help you direct your traffic through Tor</a>. Please add to these lists and help us keep them accurate! </p> <hr> <a id="WhyCalledTor"></a> <h3><a class="anchor" href="#WhyCalledTor">Why is it called Tor?</a></h3> <p> Because Tor is the onion routing network. When we were starting the new next-generation design and implementation of onion routing in 2001-2002, we would tell people we were working on onion routing, and they would say "Neat. Which one?" Even if onion routing has become a standard household term, Tor was born out of the actual <a href="http://www.onion-router.net/">onion routing project</a> run by the Naval Research Lab. </p> <p> (It's also got a fine translation from German and Turkish.) </p> <p> Note: even though it originally came from an acronym, Tor is not spelled "TOR". Only the first letter is capitalized. In fact, we can usually spot people who haven't read any of our website (and have instead learned everything they know about Tor from news articles) by the fact that they spell it wrong. </p> <hr> <a id="Backdoor"></a> <h3><a class="anchor" href="#Backdoor">Is there a backdoor in Tor?</a></h3> <p> There is absolutely no backdoor in Tor. Nobody has asked us to put one in, and we know some smart lawyers who say that it's unlikely that anybody will try to make us add one in our jurisdiction (U.S.). If they do ask us, we will fight them, and (the lawyers say) probably win. </p> <p> We think that putting a backdoor in Tor would be tremendously irresponsible to our users, and a bad precedent for security software in general. If we ever put a deliberate backdoor in our security software, it would ruin our professional reputations. Nobody would trust our software ever again — for excellent reason! </p> <p> But that said, there are still plenty of subtle attacks people might try. Somebody might impersonate us, or break into our computers, or something like that. Tor is open source, and you should always check the source (or at least the diffs since the last release) for suspicious things. If we (or the distributors) don't give you source, that's a sure sign something funny might be going on. You should also check the <a href="<page docs/verifying-signatures>">PGP signatures</a> on the releases, to make sure nobody messed with the distribution sites. </p> <p> Also, there might be accidental bugs in Tor that could affect your anonymity. We periodically find and fix anonymity-related bugs, so make sure you keep your Tor versions up-to-date. </p> <hr> <a id="DistributingTor"></a> <h3><a class="anchor" href="#DistributingTor">Can I distribute Tor?</a></h3> <p> Yes. </p> <p> The Tor software is <a href="https://www.fsf.org/">free software</a>. This means we give you the rights to redistribute the Tor software, either modified or unmodified, either for a fee or gratis. You don't have to ask us for specific permission. </p> <p> However, if you want to redistribute the Tor software you must follow our <a href="<gitblob>LICENSE">LICENSE</a>. 
Essentially this means that you need to include our LICENSE file along with whatever part of the Tor software you're distributing. </p> <p> Most people who ask us this question don't want to distribute just the Tor software, though. They want to distribute the <a href="https://www.torproject.org/projects/torbrowser.html.en">Tor Browser</a>. This includes <a href="https://www.mozilla.org/en-US/firefox/all-aurora.html">Mozilla Aurora</a> and <a href="<page projects/vidalia>">Vidalia</a>. You will need to follow the licenses for those programs as well. Both of them are distributed under the <a href="https://www.fsf.org/licensing/licenses/gpl.html">GNU General Public License</a>. The simplest way to obey their licenses is to include the source code for these programs everywhere you include the bundles themselves. Look for "source" packages on the <a href="<page projects/vidalia>">Vidalia page</a> and <a href="https://www.mozilla.org/en-US/firefox/all-aurora.html">Mozilla Aurora</a> pages. </p> <p> Also, you should make sure not to confuse your readers about what Tor is, who makes it, and what properties it provides (and doesn't provide). See our <a href="<page docs/trademark-faq>">trademark FAQ</a> for details. </p> <p> Lastly, you should realize that we release new versions of the Tor software frequently, and sometimes we make backward incompatible changes. So if you distribute a particular version of the Tor software, it may not be supported — or even work — six months later. This is a fact of life for all security software under heavy development. </p> <hr> <a id="SupportMail"></a> <h3><a class="anchor" href="#SupportMail">How can I get support?</a></h3> <p>Your best bet is to first try the following:</p> <ol> <li>Read through this <a href="<page docs/faq>">FAQ</a>.</li> <li>Read through the <a href="<page docs/documentation>">documentation</a>.</li> <li>Read through the <a href="https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk"> tor-talk archives</a> and see if your question is already answered.</li> <li>Join our <a href="ircs://irc.torproject.org#tor">irc channel</a> and state the issue and wait for help.</li> <li>Send an email to <a href="mailto:help@rt.torproject.org">help@rt.torproject.org</a>.</li> <li>If all else fails, try <a href="<page about/contact>">contacting us</a> directly.</li> </ol> <p>If you find your answer, please stick around on the IRC channel or the mailing list to help others who were once in your position.</p> <hr> <a id="Forum"></a> <h3><a class="anchor" href="#Forum">Is there a Tor forum?</a></h3> <p>Not yet, but we're working on it. Most forum software is a disaster to maintain and keep secure, and at the same time too many of the Tor developers are spread too thin to be able to contribute enough to a forum. As of June 2012, we have a funder who wants to help us do it right. Stay tuned! (Tickets <a href="https://trac.torproject.org/projects/tor/ticket/3592">3592</a> and <a href="https://trac.torproject.org/projects/tor/ticket/5995">5995</a> relate to forums too.) </p> <hr> <a id="WhySlow"></a> <h3><a class="anchor" href="#WhySlow">Why is Tor so slow?</a></h3> <p> There are many reasons why the Tor network is currently slow. </p> <p> Before we answer, though, you should realize that Tor is never going to be blazing fast. Your traffic is bouncing through volunteers' computers in various parts of the world, and some bottlenecks and network latency will always be present. You shouldn't expect to see university-style bandwidth through Tor. 
</p> <p> But that doesn't mean that it can't be improved. The current Tor network is quite small compared to the number of people trying to use it, and many of these users don't understand or care that Tor can't currently handle file-sharing traffic load. </p> <p> For the much more in-depth answer, see <a href="<blog>why-tor-is-slow">Roger's blog post on the topic</a>, which includes both a detailed PDF and a video to go with it. </p> <p> What can you do to help? </p> <ul> <li> <a href="<page docs/tor-doc-relay>">Configure your Tor to relay traffic for others</a>. Help make the Tor network large enough that we can handle all the users who want privacy and security on the Internet. </li> <li> <a href="<page projects/vidalia>">Help us make Tor more usable</a>. We especially need people to help make it easier to configure your Tor as a relay. Also, we need help with clear simple documentation to walk people through setting it up. </li> <li> There are some bottlenecks in the current Tor network. Help us design experiments to track down and demonstrate where the problems are, and then we can focus better on fixing them. </li> <li> Tor needs some architectural changes too. One important change is to start providing <a href="#EverybodyARelay">better service to people who relay traffic</a>. We're working on this, and we'll finish faster if we get to spend more time on it. </li> <li> Help do other things so we can do the hard stuff. Please take a moment to figure out what your skills and interests are, and then <a href="<page getinvolved/volunteer>">look at our volunteer page</a>. </li> <li> Help find sponsors for Tor. Do you work at a company or government agency that uses Tor or has a use for Internet privacy, e.g. to browse the competition's websites discreetly, or to connect back to the home servers when on the road without revealing affiliations? If your organization has an interest in keeping the Tor network working, please contact them about supporting Tor. Without sponsors, Tor is going to become even slower. </li> <li> If you can't help out with any of the above, you can still help out individually by <a href="<page donate/donate>">donating a bit of money to the cause</a>. It adds up! </li> </ul> <hr> <a id="Funding"></a> <h3><a class="anchor" href="#Funding">What would The Tor Project do with more funding?</a></h3> <p> The Tor network's <a href="https://metrics.torproject.org/network.html#networksize">several thousand</a> relays push <a href="https://metrics.torproject.org/network.html#bandwidth">over 1GB per second on average</a>. We have <a href="https://metrics.torproject.org/users.html#direct-users">several hundred thousand daily users</a>. But the Tor network is not yet self-sustaining. </p> <p> There are six main development/maintenance pushes that need attention: </p> <ul> <li> Scalability: We need to keep scaling and decentralizing the Tor architecture so it can handle thousands of relays and millions of users. The upcoming stable release is a major improvement, but there's lots more to be done next in terms of keeping Tor fast and stable. </li> <li> User support: With this many users, a lot of people are asking questions all the time, offering to help out with things, and so on. We need good clean docs, and we need to spend some effort coordinating volunteers. </li> <li> Relay support: the Tor network is run by volunteers, but they still need attention with prompt bug fixes, explanations when things go wrong, reminders to upgrade, and so on. 
The network itself is a commons, and somebody needs to spend some energy making sure the relay operators stay happy. We also need to work on stability on some platforms — e.g., Tor relays have problems on Win XP currently. </li> <li> Usability: Beyond documentation, we also need to work on usability of the software itself. This includes installers, clean GUIs, easy configuration to interface with other applications, and generally automating all of the difficult and confusing steps inside Tor. We've got a start on this with the <a href="<page projects/vidalia>">Vidalia GUI</a>, but much more work remains — usability for privacy software has never been easy. </li> <li> Incentives: We need to work on ways to encourage people to configure their Tors as relays and exit nodes rather than just clients. <a href="#EverybodyARelay">We need to make it easy to become a relay, and we need to give people incentives to do it.</a> </li> <li> Research: The anonymous communications field is full of surprises and gotchas. In our copious free time, we also help run top anonymity and privacy conferences like <a href="http://petsymposium.org/">PETS</a>. We've identified a set of critical <a href="<page getinvolved/volunteer>#Research">Tor research questions</a> that will help us figure out how to make Tor secure against the variety of attacks out there. Of course, there are more research questions waiting behind these. </li> </ul> <p> We're continuing to move forward on all of these, but at this rate <a href="#WhySlow">the Tor network is growing faster than the developers can keep up</a>. Now would be an excellent time to add a few more developers to the effort so we can continue to grow the network. </p> <p> We are also excited about tackling related problems, such as censorship-resistance. </p> <p> We are proud to have <a href="<page about/sponsors>">sponsorship and support</a> from the Omidyar Network, the International Broadcasting Bureau, Bell Security Solutions, the Electronic Frontier Foundation, several government agencies and research groups, and hundreds of private contributors. </p> <p> However, this support is not enough to keep Tor abreast of changes in the Internet privacy landscape. Please <a href="<page donate/donate>">donate</a> to the project, or <a href="<page about/contact>">contact</a> our executive director for information on making grants or major donations. </p> <hr> <a id="Metrics"></a> <h3><a class="anchor" href="#Metrics">How many people use Tor? How many relays or exit nodes are there?</a></h3> <p>All this and more about measuring Tor can be found at the <a href="https://metrics.torproject.org/">Tor Metrics Portal</a>.</p> <hr> <a id="SSLcertfingerprint"></a> <h3><a class="anchor" href="#SSLcertfingerprint">What are the SSL certificate fingerprints for Tor's various websites?</a></h3> <p> <pre> *.torproject.org SSL certificate from Digicert: The serial number is: 02:DA:41:04:89:A5:FD:A2:B5:DB:DB:F8:ED:15:0D:BE The SHA-1 fingerprint is: a7e70f8a648fe04a9677f13eedf6f91b5f7f2e25 The SHA-256 fingerprint is: 23b854af6b96co224fd173382c520b46fa94f2d4e7238893f63ad2d783e27b4b blog.torproject.org SSL certificate from RapidSSL: The serial number is: 00:EF:A3 The SHA-1 fingerprint is: 50af43db8438e67f305a3257d8ef198e8c42f13f </pre> </p> <hr> <a id="HowUninstallTor"></a> <h3><a class="anchor" href="#HowUninstallTor">How do I uninstall Tor?</a></h3> <p> Tor Browser does not install itself in the classic sense of applications. 
Simply delete the folder or directory named "Tor Browser" and it is removed from your system. </p> <p> If this is not related to Tor Browser, uninstallation depends entirely on how you installed it and which operating system you have. If you installed a package, then hopefully your package has a way to uninstall itself. The Windows packages include uninstallers. The proper way to completely remove Tor, Vidalia, and Torbutton for Firefox on any version of Windows is as follows: </p> <ol> <li>In your taskbar, right click on Vidalia (the green onion or the black head) and choose exit.</li> <li>Right click on the taskbar to bring up Task Manager. Look for tor.exe in the Process List. If it's running, right click and choose End Process.</li> <li>Click the Start button, go to Programs, go to Vidalia, choose Uninstall. This will remove the Vidalia bundle, which includes Tor.</li> <li>Start Firefox. Go to the Tools menu, choose Add-ons. Select Torbutton. Click the Uninstall button.</li> </ol> <p> If you do not follow these steps (for example by trying to uninstall Vidalia and Tor while they are still running), you will need to reboot and manually remove the directory "Program Files\Vidalia Bundle". </p> <p> For Mac OS X, follow the <a href="<page docs/tor-doc-osx>#uninstall">uninstall directions</a>. </p> <p> If you installed from source, I'm afraid there is no easy uninstall method. But on the bright side, by default it only installs into /usr/local/ and it should be pretty easy to notice things there. </p> <hr> <a id="PGPSigs"></a> <h3><a class="anchor" href="#PGPSigs">What are these "sig" files on the download page?</a></h3> <p> These are PGP signatures, so you can verify that the file you've downloaded is exactly the one that we intended you to get. </p> <p> Please read the <a href="<page docs/verifying-signatures>">verifying signatures</a> page for details. </p> <hr> <a id="GetTor"></a> <h3><a class="anchor" href="#GetTor">Your website is blocked in my country. How do I download Tor?</a></h3> <p> Some government or corporate firewalls censor connections to Tor's website. In those cases, you have three options. First, get it from a friend — the <a href="<page projects/torbrowser>">Tor Browser Bundle</a> fits nicely on a USB key. Second, find the <a href="https://encrypted.google.com/search?q=tor+mirrors">Google cache</a> for the <a href="<page getinvolved/mirrors>">Tor mirrors</a> page and see if any of those copies of our website work for you. Third, you can download Tor via email: log in to your Gmail account and mail '<tt>gettor@gettor.torproject.org</tt>'. If you include the word 'help' in the body of the email, it will reply with instructions. Note that only a few webmail providers are supported, since they need to be able to receive very large attachments. </p> <p> Be sure to <a href="<page docs/verifying-signatures>">verify the signature</a> of any package you download, especially when you get it from somewhere other than our official HTTPS website. </p> <hr> <a id="CompileTorWindows"></a> <h3><a class="anchor" href="#CompileTorWindows">How do I compile Tor under Windows?</a></h3> <p> Try following the steps at <a href="<gitblob>doc/tor-win32-mingw-creation.txt"> tor-win32-mingw-creation.txt</a>. </p> <p> (Note that you don't need to compile Tor yourself in order to use it. Most people just use the packages available on the <a href="<page download/download>">download page</a>.) 
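If you do want to build Tor from source on a Unix-like system instead (not the Windows process this entry describes), the standard autotools steps are usually all you need. Here is a rough sketch, assuming a release tarball and that the libevent and OpenSSL development headers are already installed; the version number is only an example: </p> <pre>
tar xzf tor-0.2.3.25.tar.gz
cd tor-0.2.3.25
./configure
make
make install    # installs under /usr/local/ by default
</pre> <p> If configure cannot find libevent or OpenSSL, running ./configure --help lists the options for pointing it at them.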
</p> <hr> <a id="VirusFalsePositives"></a> <h3><a class="anchor" href="#VirusFalsePositives">Why does my Tor executable appear to have a virus or spyware?</a></h3> <p> Sometimes, overzealous Windows virus and spyware detectors trigger on some parts of the Tor Windows binary. Our best guess is that these are false positives — after all, the anti-virus and anti-spyware business is just a guessing game anyway. You should contact your vendor and explain that you have a program that seems to be triggering false positives. Or pick a better vendor. </p> <p> In the meantime, we encourage you to not just take our word for it. Our job is to provide the source; if you're concerned, please do <a href="#CompileTorWindows">recompile it yourself</a>. </p> <hr> <a id="LiveCD"></a> <h3><a class="anchor" href="#LiveCD">Is there a LiveCD or other bundle that includes Tor?</a></h3> <p> Yes. Use <a href="https://tails.boum.org/">The Amnesic Incognito Live System</a> or <a href="<page projects/torbrowser>">the Tor Browser Bundle</a>. </p> <hr> <a id="TBBFlash"></a> <h3><a class="anchor" href="#TBBFlash">Why can't I view videos on YouTube and other Flash-based sites?</a></h3> <p> <a href="https://www.torproject.org/torbutton/torbutton-faq.html.en#noflash">Answer</a> </p> <hr> <a id="TBBSocksPort"></a> <h3><a class="anchor" href="#TBBSocksPort">I'm on OSX or Linux and I want to run another application through the Tor launched by Tor Browser Bundle. How do I predict my Socks port?</a></h3> <p> Typically Tor listens for Socks connections on port 9050. TBB on OSX and Linux has an experimental feature where Tor listens on random unused ports rather than a fixed port each time. The goal is to avoid conflicting with a "system" Tor install, so you can run a system Tor and TBB at the same time. We're <a href="https://trac.torproject.org/projects/tor/ticket/3948">working on a feature</a> where Tor will try the usual ports first and then back off to a random choice if they're already in use. Until then, if you want to configure some other application to use Tor as a Socks proxy, here's a workaround: </p> <p> In Vidalia, go to Settings->Advanced and uncheck the box that says 'Configure ControlPort automatically'. Click OK and restart TBB. Your Socks port will then be on 9050. </p> <hr> <a id="TBBPolipo"></a> <h3><a class="anchor" href="#TBBPolipo">I need an HTTP proxy. Where did Polipo go?</a></h3> <p> In the past, Tor bundles included an HTTP proxy like Privoxy or Polipo, solely to work around a bug in Firefox that was finally fixed in Firefox 6. Now you don't need a separate HTTP proxy to use Tor, and in fact leaving it out makes you safer because Torbutton has better control over Firefox's interaction with websites. </p> <p> If you are trying to use some external application with Tor, step zero should be to <a href="<page download/download>#warning">reread the set of warnings</a> for ways you can screw up. Step one should be to try to use a Socks proxy rather than an http proxy — Tor runs a Socks proxy on port 9050 on Windows, or <a href="#TBBSocksPort">see above</a> for OSX and Linux. </p> <p> If that fails, feel free to install <a href="http://www.privoxy.org/">privoxy</a>. However, please realize that this approach is not recommended for novice users. Privoxy has an <a href="http://www.privoxy.org/faq/misc.html#TOR">example configuration</a> of Tor and Privoxy. </p> <hr> <a id="TBBOtherExtensions"></a> <h3><a class="anchor" href="#TBBOtherExtensions">Can I install other Firefox extensions?</a></h3> <p> Yes. 
Just install them like normal. But be sure to avoid extensions like Foxyproxy that screw up your proxy settings. Also, avoid privacy-invasive extensions (for example, pretty much anything with the word Toolbar in its name). </p> <hr> <a id="TBBJavaScriptEnabled"></a> <h3><a class="anchor" href="#TBBJavaScriptEnabled">Why is NoScript configured to allow JavaScript by default in the Tor Browser Bundle? Isn't that unsafe?</a></h3> <p> We configure NoScript to allow JavaScript by default in the Tor Browser Bundle because many websites will not work with JavaScript disabled. Most users would give up on Tor entirely if a website they want to use requires JavaScript, because they would not know how to allow a website to use JavaScript (or that enabling JavaScript might make a website work). </p> <hr> <a id="TBBCanIBlockJS"></a> <h3><a class="anchor" href="#TBBCanIBlockJS">I'm an expert! (No, really!) Can I configure NoScript to block JavaScript by default?</a></h3> <p> You can configure your copies of Tor Browser Bundle however you want to. However, we recommend that even users who know how to use NoScript leave JavaScript enabled if possible, because a website or exit node can easily distinguish users who disable JavaScript from users who use Tor Browser Bundle with its default settings (thus users who disable JavaScript are less anonymous). </p> <p> Disabling JavaScript by default, then allowing a few websites to run scripts, is especially bad for your anonymity: the set of websites which you allow to run scripts is very likely to <em>uniquely</em> identify your browser. </p> <hr> <a id="TBBOtherBrowser"></a> <h3><a class="anchor" href="#TBBOtherBrowser">I want to use Chrome/IE/Opera/etc with Tor.</a></h3> <p> Unfortunately, Torbutton only works with Firefox right now, and without <a href="https://www.torproject.org/torbutton/en/design/">Torbutton's extensive privacy fixes</a> there are many ways for websites or other attackers to recognize you, track you back to your IP address, and so on. In short, using any browser besides Tor Browser Bundle with Tor is a really bad idea. </p> <p> We're working with the Chrome team to <a href="https://blog.torproject.org/blog/google-chrome-incognito-mode-tor-and-fingerprinting">fix some bugs and missing APIs in Chrome</a> so it will be possible to write a Torbutton for Chrome. No support for any other browser is on the horizon. </p> <hr> <a id="TBBCloseBrowser"></a> <h3><a class="anchor" href="#TBBCloseBrowser">I want to leave Tor Browser Bundle running but close the browser.</a></h3> <p> We're working on a way to make this possible on all platforms. Please be patient. </p> <hr> <a id="GoogleCaptcha"></a> <h3><a class="anchor" href="#GoogleCaptcha">Google makes me solve a Captcha or tells me I have spyware installed.</a></h3> <p> This is a known and intermittent problem; it does not mean that Google considers Tor to be spyware. </p> <p> When you use Tor, you are sending queries through exit relays that are also shared by thousands of other users. Tor users typically see this message when many Tor users are querying Google in a short period of time. Google interprets the high volume of traffic from a single IP address (the exit relay you happened to pick) as somebody trying to "crawl" their website, so it slows down traffic from that IP address for a short time. </p> <p> An alternate explanation is that Google tries to detect certain kinds of spyware or viruses that send distinctive queries to Google Search. 
It notes the IP addresses from which those queries are received (not realizing that they are Tor exit relays), and tries to warn any connections coming from those IP addresses that recent queries indicate an infection. </p> <p> To our knowledge, Google is not doing anything intentionally specifically to deter or block Tor use. The error message about an infected machine should clear up again after a short time. </p> <p> Torbutton 1.2.5 (released in mid 2010) detects Google captchas and can automatically redirect you to a more Tor-friendly search engine such as DuckDuckGo, ixquick, or Bing. </p> <hr /> <a id="GmailWarning"></a> <h3><a class="anchor" href="#GmailWarning">Gmail warns me that my account may have been compromised.</a></h3> <p> Sometimes, after you've used Gmail over Tor, Google presents a pop-up notification that your account may have been compromised. The notification window lists a series of IP addresses and locations throughout the world recently used to access your account. </p> <p> In general this is a false alarm: Google saw a bunch of logins from different places, as a result of running the service via Tor, and decided it was a good idea to confirm the account was being accessed by its rightful owner. </p> <p> Even though this may be a byproduct of using the service via Tor, that doesn't mean you can entirely ignore the warning. It is <i>probably</i> a false positive, but it might not be since it is possible for someone to hijack your Google cookie. </p> <p> Cookie hijacking is possible by either physical access to your computer or by watching your network traffic. In theory only physical access should compromise your system because Gmail and similar services should only send the cookie over an SSL link. In practice, alas, it's <a href="http://fscked.org/blog/fully-automated-active-https-cookie-hijacking"> way more complex than that</a>. </p> <p> And if somebody <i>did</i> steal your Google cookie, they might end up logging in from unusual places (though of course they also might not). So the summary is that since you're using Tor, this security measure that Google uses isn't so useful for you, because it's full of false positives. You'll have to use other approaches, like seeing if anything looks weird on the account, or looking at the timestamps for recent logins and wondering if you actually logged in at those times. </p> <hr> <a id="torrc"></a> <h3><a class="anchor" href="#torrc">I'm supposed to "edit my torrc". What does that mean?</a></h3> <p> Tor installs a text file called torrc that contains configuration instructions for how your Tor program should behave. The default configuration should work fine for most Tor users. Users of Vidalia can make common changes through the Vidalia interface — only advanced users should need to modify their torrc file directly. </p> <p> Tor Browser Bundle users should edit their torrc through Vidalia. Open the Vidalia Control Panel. Choose Settings. Choose Advanced. Click the button labelled "Edit current torrc". Remember to make sure the checkbox for "Save Settings." is checked. Hit the OK button and you are done. </p> <p> Otherwise, you will need to edit the file manually. The location of your torrc file depends on the way you installed Tor: </p> <ul> <li>If you installed Tor Browser Bundle, look for <code>Data/Tor/torrc</code> inside your Tor Browser Bundle directory. 
</li> <li>On Windows, if you installed a Tor bundle with Vidalia, you can find your torrc file in the Start menu under Programs -> Vidalia Bundle -> Tor, or you can find it by hand in <code>\Documents and Settings\<i>username</i>\Application Data\Vidalia\torrc</code>. If you installed Tor without Vidalia, you can find your torrc in the Start menu under Programs -> Tor, or manually in either <code>\Documents and Settings\Application Data\tor\torrc</code> or <code>\Documents and Settings\<i>username</i>\Application Data\tor\torrc</code>. </li> <li>On OS X, if you use Vidalia, edit <code>~/.vidalia/torrc</code>. Otherwise, open your favorite text editor and load <code>/Library/Tor/torrc</code>. </li> <li>On Unix, if you installed a pre-built package, look for <code>/etc/tor/torrc</code> or <code>/etc/torrc</code> or consult your package's documentation. </li> <li>Finally, if you installed from source, you may not have a torrc installed yet: look in <code>/usr/local/etc/</code> and note that you may need to manually copy <code>torrc.sample</code> to <code>torrc</code>. </li> </ul> <p> If you use Vidalia, be sure to exit both Tor and Vidalia before you edit your torrc file manually. Otherwise Vidalia might overwrite your changes. </p> <p> Once you've changed your torrc, you will need to restart Tor for the changes to take effect. (For advanced users on OS X and Unix, note that you actually only need to send Tor a HUP signal, not actually restart it.) </p> <p> For other configuration options you can use, look at the <a href="<page docs/tor-manual>">Tor manual page</a>. Remember, all lines beginning with # in torrc are treated as comments and have no effect on Tor's configuration. </p> <hr> <a id="Logs"></a> <h3><a class="anchor" href="#Logs">How do I set up logging, or see Tor's logs?</a></h3> <p> If you installed a Tor bundle that includes Vidalia, then Vidalia has a window called "Message Log" that will show you Tor's log messages. Click on "Advanced" to see more details. You can click on "Settings" to change your log verbosity or save the messages to a file. You're all set. </p> <p> If you're not using Vidalia, you'll have to go find the log files by hand. Here are some likely places for your logs to be: </p> <ul> <li>On OS X, Debian, Red Hat, etc, the logs are in /var/log/tor/ </li> <li>On Windows, there are no default log files currently. If you enable logs in your torrc file, they default to <code>\username\Application Data\tor\log\</code> or <code>\Application Data\tor\log\</code> </li> <li>If you compiled Tor from source, by default your Tor logs to <a href="http://en.wikipedia.org/wiki/Standard_streams">"stdout"</a> at log-level notice. If you enable logs in your torrc file, they default to <code>/usr/local/var/log/tor/</code>. </li> </ul> <p> To change your logging setup by hand, <a href="#torrc">edit your torrc</a> and find the section (near the top of the file) which contains the following line: </p> <pre> \## Logs go to stdout at level "notice" unless redirected by something \## else, like one of the below lines. </pre> <p> For example, if you want Tor to send complete debug, info, notice, warn, and err level messages to a file, append the following line to the end of the section: </p> <pre> Log debug file c:/program files/tor/debug.log </pre> <p> Replace <code>c:/program files/tor/debug.log</code> with a directory and filename for your Tor log. 
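As an illustration (the file paths here are only examples), a Unix relay operator might keep the usual notices on stdout and also write more detailed messages to a file by combining several Log lines: </p> <pre>
Log notice stdout
Log info file /var/log/tor/info.log
</pre> <p> Make sure the user Tor runs as can write to whatever directory you pick, and remember that verbose logs grow quickly and can contain sensitive details.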
</p> <hr> <a id="DoesntWork"></a> <h3><a class="anchor" href="#DoesntWork">I installed Tor but it's not working.</a></h3> <p> Once you've got the Tor bundle up and running, the first question to ask is whether your Tor client is able to establish a circuit. </p> <p>If Tor can establish a circuit, the onion icon in Vidalia will turn green (and if you're running Tor Browser Bundle, it will automatically launch a browser for you). You can also check in the Vidalia Control Panel to make sure it says "Connected to the Tor network!" under Status. For those not using Vidalia, check your <a href="#Logs">Tor logs</a> for a line saying that Tor "has successfully opened a circuit. Looks like client functionality is working." </p> <p> If Tor can't establish a circuit, here are some hints: </p> <ol> <li>Are you sure Tor is running? If you're using Vidalia, you may have to click on the onion and select "Start" to launch Tor.</li> <li>Check your system clock. If it's more than a few hours off, Tor will refuse to build circuits. For Microsoft Windows users, synchronize your clock under the clock -> Internet time tab. In addition, correct the day and date under the 'Date & Time' Tab. Also make sure your time zone is correct.</li> <li>Is your Internet connection <a href="#FirewallPorts">firewalled by port</a>, or do you normally need to use a <a href="<wikifaq>#MyInternetconnectionrequiresanHTTPorSOCKSproxy.">proxy</a>? </li> <li>Are you running programs like Norton Internet Security or SELinux that block certain connections, even though you don't realize they do? They could be preventing Tor from making network connections.</li> <li>Are you in China, or behind a restrictive corporate network firewall that blocks the public Tor relays? If so, you should learn about <a href="<page docs/bridges>">Tor bridges</a>.</li> <li>Check your <a href="#Logs">Tor logs</a>. Do they give you any hints about what's going wrong?</li> </ol> <hr /> <a id="VidaliaPassword"></a> <h3><a class="anchor" href="#VidaliaPassword">Tor/Vidalia prompts for a password at start.</a></h3> <p> Vidalia interacts with the Tor software via Tor's "control port". The control port lets Vidalia receive status updates from Tor, request a new identity, configure Tor's settings, etc. Each time Vidalia starts Tor, Vidalia sets a random password for Tor's control port to prevent other applications from also connecting to the control port and potentially compromising your anonymity. </p> <p> Usually this process of generating and setting a random control password happens in the background. There are three common situations, though, where Vidalia may prompt you for a password: </p> <ol> <li>You're already running Vidalia and Tor. For example, this situation can happen if you installed the Vidalia bundle and now you're trying to run the Tor Browser Bundle. In that case, you'll need to close the old Vidalia and Tor before you can run this one. </li> <li>Vidalia crashed, but left Tor running with the last known random password. After you restart Vidalia, it generates a new random password, but Vidalia can't talk to Tor, because the random passwords are different. <br /> If the dialog that prompts you for a control password has a Reset button, you can click the button and Vidalia will restart Tor with a new random control password. <br /> If you do not see a Reset button, or if Vidalia is unable to restart Tor for you, you can still fix the problem manually. Simply go into your process or task manager, and terminate the Tor process. 
Then use Vidalia to restart Tor and all will work again. </li> <li>You had previously set Tor to run as a Windows NT service. When Tor is set to run as a service, it starts up when the system boots. If you configured Tor to start as a service through Vidalia, a random password was set and saved in Tor. When you reboot, Tor starts up and uses the random password it saved. You login and start up Vidalia. Vidalia attempts to talk to the already running Tor. Vidalia generates a random password, but it is different than the saved password in the Tor service. <br /> You need to reconfigure Tor to not be a service. See the FAQ entry on <a href="<wikifaq>#HowdoIrunmyTorrelayasanNTservice">running Tor as a Windows NT service</a> for more information on how to remove the Tor service. </li> </ol> <hr> <a id="ChooseEntryExit"></a> <h3><a class="anchor" href="#ChooseEntryExit">Can I control which nodes (or country) are used for entry/exit?</a></h3> <p> Yes. You can set preferred entry and exit nodes as well as inform Tor which nodes you do not want to use. The following options can be added to your config file <a href="#torrc">"torrc"</a> or specified on the command line: </p> <dl> <dt><tt>EntryNodes $fingerprint,$fingerprint,...</tt></dt> <dd>A list of preferred nodes to use for the first hop in the circuit, if possible. </dd> <dt><tt>ExitNodes $fingerprint,$fingerprint,...</tt></dt> <dd>A list of preferred nodes to use for the last hop in the circuit, if possible. </dd> <dt><tt>ExcludeNodes $fingerprint,$fingerprint,...</tt></dt> <dd>A list of nodes to never use when building a circuit. </dd> <dt><tt>ExcludeExitNodes $fingerprint,$fingerprint,...</tt></dt> <dd>A list of nodes to never use when picking an exit. Nodes listed in <tt>ExcludeNodes</tt> are automatically in this list. </dd> </dl> <p> <em>We recommend you do not use these</em> — they are intended for testing and may disappear in future versions. You get the best security that Tor can provide when you leave the route selection to Tor; overriding the entry / exit nodes can mess up your anonymity in ways we don't understand. </p> <p> The <tt>EntryNodes</tt> and <tt>ExitNodes</tt> config options are treated as a request, meaning if the nodes are down or seem slow, Tor will still avoid them. You can make the option mandatory by setting <tt>StrictExitNodes 1</tt> or <tt>StrictEntryNodes 1</tt> — but if you do, your Tor connections will stop working if all of the nodes you have specified become unreachable. See the <a href="<page docs/documentation>#NeatLinks">Tor status pages</a> for some nodes you might pick. </p> <p> Instead of <tt>$fingerprint</tt> you can also specify a <a href="https://secure.wikimedia.org/wikipedia/en/wiki/ISO_3166-1_alpha-2" >2 letter ISO3166 country code</a> in curly braces (for example {de}), or an ip address pattern (for example 255.254.0.0/8), or a node nickname. Make sure there are no spaces between the commas and the list items. </p> <p> If you want to access a service directly through Tor's Socks interface (eg. using ssh via connect.c), another option is to set up an internal mapping in your configuration file using <tt>MapAddress</tt>. See the manual page for details. 
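As a purely illustrative example (the country codes and the fingerprint below are placeholders, and the caveats above still apply), a torrc that prefers exits in Germany or the Netherlands and refuses to use one particular relay could contain: </p> <pre>
ExitNodes {de},{nl}
ExcludeNodes $ABCDEF0123456789ABCDEF0123456789ABCDEF01
</pre> <p> Leave these options out entirely unless you understand the anonymity trade-offs described above.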
</p> <hr> <a id="FirewallPorts"></a> <h3><a class="anchor" href="#FirewallPorts">My firewall only allows a few outgoing ports.</a></h3> <p> If your firewall works by blocking ports, then you can tell Tor to only use the ports that your firewall permits by adding "FascistFirewall 1" to your <a href="<page docs/faq>#torrc">torrc configuration file</a>, or by clicking "My firewall only lets me connect to certain ports" in Vidalia's Network Settings window. </p> <p> By default, when you set this Tor assumes that your firewall allows only port 80 and port 443 (HTTP and HTTPS respectively). You can select a different set of ports with the FirewallPorts torrc option. </p> <p> If you want to be more fine-grained with your controls, you can also use the ReachableAddresses config options, e.g.: </p> <pre> ReachableDirAddresses *:80 ReachableORAddresses *:443 </pre> <hr> <a id="RelayFlexible"></a> <h3><a class="anchor" href="#RelayFlexible">How stable does my relay need to be?</a></h3> <p> We aim to make setting up a Tor relay easy and convenient: </p> <ul> <li>Tor has built-in support for <a href="<wikifaq>#WhatbandwidthshapingoptionsareavailabletoTorrelays"> rate limiting</a>. Further, if you have a fast link but want to limit the number of bytes per day (or week or month) that you donate, check out the <a href="<wikifaq>#HowcanIlimitthetotalamountofbandwidthusedbymyTorrelay"> hibernation feature</a>. </li> <li>Each Tor relay has an <a href="#ExitPolicies">exit policy</a> that specifies what sort of outbound connections are allowed or refused from that relay. If you are uncomfortable allowing people to exit from your relay, you can set it up to only allow connections to other Tor relays. </li> <li>It's fine if the relay goes offline sometimes. The directories notice this quickly and stop advertising the relay. Just try to make sure it's not too often, since connections using the relay when it disconnects will break. </li> <li>We can handle relays with dynamic IPs just fine — simply leave the Address config option blank, and Tor will try to guess. </li> <li>If your relay is behind a NAT and it doesn't know its public IP (e.g. it has an IP of 192.168.x.y), you'll need to set up port forwarding. Forwarding TCP connections is system dependent but <a href="<wikifaq>#ImbehindaNATFirewall">this FAQ entry</a> offers some examples on how to do this. </li> <li>Your relay will passively estimate and advertise its recent bandwidth capacity, so high-bandwidth relays will attract more users than low-bandwidth ones. Therefore having low-bandwidth relays is useful too. </li> </ul> <hr> <a id="RunARelayBut"></a> <a id="ExitPolicies"></a> <h3><a class="anchor" href="#ExitPolicies">I'd run a relay, but I don't want to deal with abuse issues.</a></h3> <p> Great. That's exactly why we implemented exit policies. </p> <p> Each Tor relay has an exit policy that specifies what sort of outbound connections are allowed or refused from that relay. The exit policies are propagated to Tor clients via the directory, so clients will automatically avoid picking exit relays that would refuse to exit to their intended destination. This way each relay can decide the services, hosts, and networks he wants to allow connections to, based on abuse potential and his own situation. 
Read the FAQ entry on <a href="<page docs/faq-abuse>#TypicalAbuses">issues you might encounter</a> if you use the default exit policy, and then read Mike Perry's <a href="<blog>tips-running-exit-node-minimal-harassment">tips for running an exit node with minimal harassment</a>. </p> <p> The default exit policy allows access to many popular services (e.g. web browsing), but <a href="<wikifaq>#Istherealistofdefaultexitports">restricts</a> some due to abuse potential (e.g. mail) and some since the Tor network can't handle the load (e.g. default file-sharing ports). You can change your exit policy using Vidalia's "Sharing" tab, or by manually editing your <a href="<page docs/faq>#torrc">torrc</a> file. If you want to avoid most if not all abuse potential, set it to "reject *:*" (or un-check all the boxes in Vidalia). This setting means that your relay will be used for relaying traffic inside the Tor network, but not for connections to external websites or other services. </p> <p> If you do allow any exit connections, make sure name resolution works (that is, your computer can resolve Internet addresses correctly). If there are any resources that your computer can't reach (for example, you are behind a restrictive firewall or content filter), please explicitly reject them in your exit policy — otherwise Tor users will be impacted too. </p> <hr> <a id="RelayOrBridge"></a> <h3><a class="anchor" href="#RelayOrBridge">Should I be a normal relay or bridge relay?</a></h3> <p><a href="<page docs/bridges>">Bridge relays</a> (or "bridges" for short) are <a href="<page docs/tor-doc-relay>">Tor relays</a> that aren't listed in the main Tor directory. That means that even an ISP or government trying to filter connections to the Tor network probably won't be able to block all the bridges. </p> <p>Being a normal relay vs being a bridge relay is almost the same configuration: it's just a matter of whether your relay is listed publically or not. </p> <p>Right now, China is the main place in the world that filters connections to the Tor network. So bridges are useful a) for users in China, b) as a backup measure in case the Tor network gets blocked in more places, and c) for people who want an extra layer of security because they're worried somebody will recognize that it's a public Tor relay IP address they're contacting. </p> <p>So should you run a normal relay or bridge relay? If you have lots of bandwidth, you should definitely run a normal relay — the average bridge doesn't see much load these days. If you're willing to <a href="#ExitPolicies">be an exit</a>, you should definitely run a normal relay, since we need more exits. If you can't be an exit and only have a little bit of bandwidth, be a bridge. Thanks for volunteering! </p> <hr> <a id="MultipleRelays"></a> <h3><a class="anchor" href="#MultipleRelays">I want to run more than one relay.</a></h3> <p> Great. If you want to run several relays to donate more to the network, we're happy with that. But please don't run more than a few dozen on the same network, since part of the goal of the Tor network is dispersal and diversity. </p> <p> If you do decide to run more than one relay, please set the "MyFamily" config option in the <a href="#torrc">torrc</a> of each relay, listing all the relays (comma-separated) that are under your control: </p> <pre> MyFamily $fingerprint1,$fingerprint2,$fingerprint3 </pre> <p> where each fingerprint is the 40 character identity fingerprint (without spaces). You can also list them by nickname, but fingerprint is safer. 
Be sure to prefix the digest strings with a dollar sign ('$') so that the digest is not confused with a nickname in the config file. </p> <p> That way clients will know to avoid using more than one of your relays in a single circuit. You should set MyFamily if you have administrative control of the computers or of their network, even if they're not all in the same geographic location. </p> <hr> <a id="RelayMemory"></a> <h3><a class="anchor" href="#RelayMemory">Why is my Tor relay using so much memory?</a></h3> <p>If your Tor relay is using more memory than you'd like, here are some tips for reducing its footprint: </p> <ol> <li>If you're on Linux, you may be encountering memory fragmentation bugs in glibc's malloc implementation. That is, when Tor releases memory back to the system, the pieces of memory are fragmented so they're hard to reuse. The Tor tarball ships with OpenBSD's malloc implementation, which doesn't have as many fragmentation bugs (but the tradeoff is higher CPU load). You can tell Tor to use this malloc implementation instead: <tt>./configure --enable-openbsd-malloc</tt></li> <li>If you're running a fast relay, meaning you have many TLS connections open, you are probably losing a lot of memory to OpenSSL's internal buffers (38KB+ per socket). We've patched OpenSSL to <a href="https://lists.torproject.org/pipermail/tor-dev/2008-June/001519.html">release unused buffer memory more aggressively</a>. If you update to OpenSSL 1.0.0 or newer, Tor's build process will automatically recognize and use this feature.</li> <li>If you're running on Solaris, OpenBSD, NetBSD, or old FreeBSD, Tor is probably forking separate processes rather than using threads. Consider switching to a <a href="<wikifaq>#WhydoesntmyWindowsorotherOSTorrelayrunwell">better operating system</a>.</li> <li>If you still can't handle the memory load, consider reducing the amount of bandwidth your relay advertises. Advertising less bandwidth means you will attract fewer users, so your relay shouldn't grow as large. See the <tt>MaxAdvertisedBandwidth</tt> option in the man page.</li> </ol> <p> All of this said, fast Tor relays do use a lot of RAM. It is not unusual for a fast exit relay to use 500-1000 MB of memory. </p> <hr> <a id="WhyNotNamed"></a> <h3><a class="anchor" href="#WhyNotNamed">Why is my Tor relay not named?</a></h3> <p> We currently use these metrics to determine if your relay should be named:<br> </p> <ul> <li>The name is not currently mapped to a different key. Existing mappings are removed after 6 months of inactivity from a relay.</li> <li>The relay must have been around for at least two weeks.</li> <li>No other router may have wanted the same name in the past month.</li> </ul> <hr> <a id="RelayDonations"></a> <h3><a class="anchor" href="#RelayDonations">Can I donate for a relay rather than run my own?</a></h3> <p> Sure! We recommend two non-profit charities that are happy to turn your donations into better speed and anonymity for the Tor network: </p> <ul> <li><a href="https://www.torservers.net/">torservers.net</a> is a German charitable non-profit that runs a wide variety of exit relays. They also like donations of bandwidth from ISPs.</li> <li><a href="https://www.noisebridge.net/wiki/Noisebridge_Tor">Noisebridge</a> is a US-based 501(c)(3) non-profit that collects donations and turns them into more exit relay capacity.</li> </ul> <p> These organizations are not the same as <a href="<page donate/donate>">The Tor Project, Inc</a>, but we consider that a good thing. 
They're both run by nice people who are part of the Tor community. </p> <p> Note that there can be a tradeoff here between anonymity and performance. The Tor network's anonymity comes in part from diversity, so if you are in a position to run your own relay, you will be improving Tor's anonymity more than by donating. At the same time though, economies of scale for bandwidth mean that combining many small donations into several larger relays is more efficient at improving network performance. Improving anonymity and improving performance are both worthwhile goals, so however you can help is great! </p> <hr> <a id="KeyManagement"></a> <h3><a class="anchor" href="#KeyManagement">Tell me about all the keys Tor uses.</a></h3> <p> Tor uses a variety of different keys, with three goals in mind: 1) encryption to ensure privacy of data within the Tor network, 2) authentication so clients know they're talking to the relays they meant to talk to, and 3) signatures to make sure all clients know the same set of relays. </p> <p> <b>Encryption</b>: first, all connections in Tor use TLS link encryption, so observers can't look inside to see which circuit a given cell is intended for. Further, the Tor client establishes an ephemeral encryption key with each relay in the circuit; these extra layers of encryption mean that only the exit relay can read the cells. Both sides discard the circuit key when the circuit ends, so logging traffic and then breaking into the relay to discover the key won't work. </p> <p> <b>Authentication</b>: Every Tor relay has a public decryption key called the "onion key". Each relay rotates its onion key once a week. When the Tor client establishes circuits, at each step it <a href="<svnprojects>design-paper/tor-design.html#subsec:circuits">demands that the Tor relay prove knowledge of its onion key</a>. That way the first node in the path can't just spoof the rest of the path. Because the Tor client chooses the path, it can make sure to get Tor's "distributed trust" property: no single relay in the path can know about both the client and what the client is doing. </p> <p> <b>Coordination</b>: How do clients know what the relays are, and how do they know that they have the right keys for them? Each relay has a long-term public signing key called the "identity key". Each directory authority additionally has a "directory signing key". The directory authorities <a href="<specblob>dir-spec.txt">provide a signed list</a> of all the known relays, and in that list are a set of certificates from each relay (self-signed by their identity key) specifying their keys, locations, exit policies, and so on. So unless the adversary can control a majority of the directory authorities (as of 2012 there are 8 directory authorities), he can't trick the Tor client into using other Tor relays. </p> <p> How do clients know what the directory authorities are? The Tor software comes with a built-in list of location and public key for each directory authority. So the only way to trick users into using a fake Tor network is to give them a specially modified version of the software. </p> <p> How do users know they've got the right software? When we distribute the source code or a package, we digitally sign it with <a href="http://www.gnupg.org/">GNU Privacy Guard</a>. See the <a href="<page docs/verifying-signatures>">instructions on how to check Tor's signatures</a>. 
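As a rough sketch (the file names below are placeholders; the page linked above has the authoritative steps and the current signing keys), verification with GNU Privacy Guard looks something like this: </p> <pre>
gpg --import tor-signing-key.asc                          # import the developer's signing key (placeholder file name)
gpg --verify tor-0.2.x.y.tar.gz.asc tor-0.2.x.y.tar.gz    # check the detached signature against the package
</pre> <p> If the signature doesn't verify, or the signing key's fingerprint isn't the one you expected, don't use the package.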
</p> <p> In order to be certain that it's really signed by us, you need to have met us in person and gotten a copy of our GPG key fingerprint, or you need to know somebody who has. If you're concerned about an attack on this level, we recommend you get involved with the security community and start meeting people. </p> <hr> <a id="EntryGuards"></a> <h3><a class="anchor" href="#EntryGuards">What are Entry Guards?</a></h3> <p> Tor (like all current practical low-latency anonymity designs) fails when the attacker can see both ends of the communications channel. For example, suppose the attacker controls or watches the Tor relay you choose to enter the network, and also controls or watches the website you visit. In this case, the research community knows no practical low-latency design that can reliably stop the attacker from correlating volume and timing information on the two sides. </p> <p> So, what should we do? Suppose the attacker controls, or can observe, <i>c</i> relays. Suppose there are <i>n</i> relays total. If you select new entry and exit relays each time you use the network, the attacker will be able to correlate any given circuit you build with probability about <i>(c/n)<sup>2</sup></i> (for example, if the attacker watches <i>c</i> = 50 of <i>n</i> = 5000 relays, that is roughly one circuit in ten thousand). Since a regular user builds many circuits over time, the attacker will eventually see both ends of some of them. And profiling is, for most users, as bad as being traced all the time: they want to do something often without an attacker noticing, and the attacker noticing once is as bad as the attacker noticing more often. Thus, choosing many random entries and exits gives the user no chance of escaping profiling by this kind of attacker. </p> <p> The solution is "entry guards": each Tor client selects a few relays at random to use as entry points, and uses only those relays for her first hop. If those relays are not controlled or observed, the attacker can't win, ever, and the user is secure. If those relays <i>are</i> observed or controlled by the attacker, the attacker sees a larger <i>fraction</i> of the user's traffic — but still the user is no more profiled than before. Thus, the user has some chance (on the order of <i>(n-c)/n</i>) of avoiding profiling, whereas she had none before. </p> <p> You can read more at <a href="http://freehaven.net/anonbib/#wright02">An Analysis of the Degradation of Anonymous Protocols</a>, <a href="http://freehaven.net/anonbib/#wright03">Defending Anonymous Communication Against Passive Logging Attacks</a>, and especially <a href="http://freehaven.net/anonbib/#hs-attack06">Locating Hidden Servers</a>. </p> <p> Restricting your entry nodes may also help against attackers who want to run a few Tor nodes and easily enumerate all of the Tor user IP addresses. (Even though they can't learn what destinations the users are talking to, they still might be able to do bad things with just a list of users.) However, that feature won't really become useful until we move to a "directory guard" design as well. </p> <hr> <a id="EverybodyARelay"></a> <h3><a class="anchor" href="#EverybodyARelay">You should make every Tor user be a relay.</a></h3> <p> Requiring every Tor user to be a relay would help with scaling the network to handle all our users, and <a href="<wikifaq>#DoIgetbetteranonymityifIrunarelay">running a Tor relay may help your anonymity</a>. However, many Tor users cannot be good relays — for example, some Tor clients operate from behind restrictive firewalls, connect via modem, or otherwise aren't in a position where they can relay traffic.
Providing service to these clients is a critical part of providing effective anonymity for everyone, since many Tor users are subject to these or similar constraints and including these clients increases the size of the anonymity set. </p> <p> That said, we do want to encourage Tor users to run relays, so what we really want to do is simplify the process of setting up and maintaining a relay. We've made a lot of progress with easy configuration in the past few years: Vidalia has an easy relay configuration interface, and supports UPnP too. Tor is good at automatically detecting whether it's reachable and how much bandwidth it can offer. </p> <p> There are five issues we need to address before we can do this, though: </p> <p> First, we need to make Tor stable as a relay on all common operating systems. The main remaining platform is Windows, and we're mostly there. See Section 4.1 of <a href="https://www.torproject.org/press/2008-12-19-roadmap-press-release">our development roadmap</a>. </p> <p> Second, we still need to get better at automatically estimating the right amount of bandwidth to allow. See item #7 on the <a href="<page getinvolved/volunteer>#Research">research section of the volunteer page</a>: "Tor doesn't work very well when relays have asymmetric bandwidth (e.g. cable or DSL)". It might be that <a href="<page docs/faq>#TransportIPnotTCP">switching to UDP transport</a> is the simplest answer here — which alas is not a very simple answer at all. </p> <p> Third, we need to work on scalability, both of the network (how to stop requiring that all Tor relays be able to connect to all Tor relays) and of the directory (how to stop requiring that all Tor users know about all Tor relays). Changes like this can have a large impact on potential and actual anonymity. See Section 5 of the <a href="<svnprojects>design-paper/challenges.pdf">Challenges</a> paper for details. Again, UDP transport would help here. </p> <p> Fourth, we need to better understand the risks from letting the attacker send traffic through your relay while you're also initiating your own anonymized traffic. <a href="http://freehaven.net/anonbib/#back01">Three</a> <a href="http://freehaven.net/anonbib/#clog-the-queue">different</a> <a href="http://freehaven.net/anonbib/#torta05">research</a> papers describe ways to identify the relays in a circuit by running traffic through candidate relays and looking for dips in the traffic while the circuit is active. These clogging attacks are not that scary in the Tor context so long as relays are never clients too. But if we're trying to encourage more clients to turn on relay functionality too (whether as <a href="<page docs/bridges>">bridge relays</a> or as normal relays), then we need to understand this threat better and learn how to mitigate it. </p> <p> Fifth, we might need some sort of incentive scheme to encourage people to relay traffic for others, and/or to become exit nodes. Here are our <a href="<blog>two-incentive-designs-tor">current thoughts on Tor incentives</a>. </p> <p> Please help on all of these! </p> <hr> <a id="TransportIPnotTCP"></a> <h3><a class="anchor" href="#TransportIPnotTCP">You should transport all IP packets, not just TCP packets.</a></h3> <p> This would be handy, because it would make Tor better able to handle new protocols like VoIP, it would remove the need to socksify applications, and it would eliminate the need for exit relays to allocate a lot of file descriptors to hold open all the exit connections.
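For context, "socksifying" means that today every TCP application has to be pointed at Tor's SOCKS port before its traffic can go through Tor, either via a built-in proxy setting (127.0.0.1 port 9050 for a standard Tor client) or via a wrapper. A minimal sketch, assuming the <tt>torsocks</tt> wrapper is installed and a local Tor is listening on its default SOCKS port (the command and hostname are purely illustrative): </p> <pre>
torsocks ssh user@example.com
</pre> <p> Transporting IP packets directly would make that extra step unnecessary.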
</p> <p> We're heading in this direction: see <a href="https://trac.torproject.org/projects/tor/ticket/1855">this trac ticket</a> for directions we should investigate. Some of the hard problems are: </p> <ol> <li>IP packets reveal OS characteristics. We would still need to do IP-level packet normalization, to stop things like TCP fingerprinting attacks. Given the diversity and complexity of TCP stacks, along with <a href="<wikifaq>#DoesTorresistremotephysicaldevicefingerprinting">device fingerprinting attacks</a>, it looks like our best bet is shipping our own user-space TCP stack. </li> <li>Application-level streams still need scrubbing. We will still need user-side applications like Torbutton. So it won't become just a matter of capturing packets and anonymizing them at the IP layer. </li> <li>Certain protocols will still leak information. For example, we must rewrite DNS requests so they are delivered to an unlinkable DNS server rather than the DNS server at a user's ISP; thus, we must understand the protocols we are transporting. </li> <li><a href="http://crypto.stanford.edu/~nagendra/projects/dtls/dtls.html">DTLS</a> (datagram TLS) basically has no deployed users, and IPsec is big and complex, so neither is an appealing datagram transport to build on. Once we've picked a transport mechanism, we need to design a new end-to-end Tor protocol for avoiding tagging attacks and other potential anonymity and integrity issues now that we allow drops, resends, et cetera. </li> <li>Exit policies for arbitrary IP packets mean building a secure IDS. Our node operators tell us that exit policies are one of the main reasons they're willing to run Tor. Adding an Intrusion Detection System to handle exit policies would increase the security complexity of Tor, and would likely not work anyway, as evidenced by the entire field of IDS and counter-IDS papers. Many potential abuse issues are resolved by the fact that Tor only transports valid TCP streams (as opposed to arbitrary IP including malformed packets and IP floods), so exit policies become even <i>more</i> important as we become able to transport IP packets. We also need to compactly describe exit policies in the Tor directory, so clients can predict which nodes will allow their packets to exit — and clients need to predict all the packets they will want to send in a session before picking their exit node! </li> <li>The Tor-internal namespaces would need to be redesigned. We support hidden service ".onion" addresses by intercepting the addresses when they are passed to the Tor client. Doing so at the IP level will require a more complex interface between Tor and the local DNS resolver. </li> </ol> <hr> <a id="HideExits"></a> <h3><a class="anchor" href="#HideExits">You should hide the list of Tor relays, so people can't block the exits.</a></h3> <p> There are a few reasons we don't: </p> <ol> <li>We can't help but make the information available, since Tor clients need to use it to pick their paths. So if the "blockers" want it, they can get it anyway. Further, even if we didn't tell clients about the list of relays directly, somebody could still make a lot of connections through Tor to a test site and build a list of the addresses they see. </li> <li>If people want to block us, we believe that they should be allowed to do so. Obviously, we would prefer for everybody to allow Tor users to connect to them, but people have the right to decide who their services should allow connections from, and if they want to block anonymous users, they can.
</li> <li>Being blockable also has tactical advantages: it may be a persuasive response to website maintainers who feel threatened by Tor. Giving them the option may inspire them to <a href="<page docs/faq-abuse>#Bans">stop and think</a> about whether they really want to eliminate private access to their system, and if not, what other options they might have. The time they might otherwise have spent blocking Tor, they may instead spend rethinking their overall approach to privacy and anonymity. </li> </ol> <hr> <a id="Criminals"></a> <h3><a class="anchor" href="#Criminals">Doesn't Tor enable criminals to do bad things?</a></h3> <p> For the answer to this question and others, please see our <a href="<page docs/faq-abuse>">Tor Abuse FAQ</a>. </p> <hr> <a id="RespondISP"></a> <h3><a class="anchor" href="#RespondISP">How do I respond to my ISP about my exit relay?</a></h3> <p> A collection of templates for successfully responding to ISPs is <a href="<wiki>doc/TorAbuseTemplates">available here</a>. </p> <hr> </div> <!-- END MAINCOL --> <div id = "sidecol"> #include "side.wmi" #include "info.wmi" </div> <!-- END SIDECOL --> </div> <!-- END CONTENT --> #include <foot.wmi>