## translation metadata
# Revision: $Revision$
# Translation-Priority: 2-medium

#include "head.wmi" TITLE="Tor Project: FAQ" CHARSET="UTF-8"
<div id="content" class="clearfix">
  <div id="breadcrumbs">
    <a href="<page index>">Home &raquo; </a>
    <a href="<page docs/documentation>">Documentation &raquo; </a>
    <a href="<page docs/faq>">FAQ</a>
  </div>
  <div id="maincol">
    <h1>Tor FAQ</h1>

    <p><a href="#General">General questions:</a><br />
    <a href="#CompilationAndInstallation">Compilation and Installation:</a><br />
    <a href="#TBBGeneral">Tor Browser Bundle (general):</a><br />
    <a href="#TBB3.x">Tor Browser Bundle (3.x series):</a><br />
    <a href="#AdvancedTorUsage">Advanced Tor usage:</a><br />
    <a href="#RunningATorRelay">Running a Tor relay:</a><br />
    <a href="#TorHiddenServices">Tor hidden services:</a><br />
    <a href="#Development">Development:</a><br />
    <a href="#AnonymityAndSecurity">Anonymity and Security:</a><br />
    <a href="#AlternateDesigns">Alternate designs that we don't do (yet):</a><br />
    <a href="#Abuse">Abuse:</a></p>


    <p>General questions:</p>
    <ul>
    <li><a href="#WhatIsTor">What is Tor?</a></li>
    <li><a href="#Torisdifferent">How is Tor different from other
    proxies?</a></li>
    <li><a href="#CompatibleApplications">What programs can I use with
    Tor?</a></li>
    <li><a href="#WhyCalledTor">Why is it called Tor?</a></li>
    <li><a href="#Backdoor">Is there a backdoor in Tor?</a></li>
    <li><a href="#DistributingTor">Can I distribute Tor?</a></li>
    <li><a href="#SupportMail">How can I get support?</a></li>
    <li><a href="#Forum">Is there a Tor forum?</a></li>
    <li><a href="#WhySlow">Why is Tor so slow?</a></li>
    <li><a href="#FileSharing">How can I share files anonymously through
    Tor?</a></li>
    <li><a href="#Funding">What would The Tor Project do with more
    funding?</a></li>
    <li><a href="#IsItWorking">How can I tell if Tor is working, and that my
    connections really are anonymized?</a></li>
    <li><a href="#Mobile">Can I use Tor on my phone or mobile device?</a></li>
    <li><a href="#OutboundPorts">Do I have to open all these outbound ports
    on my firewall?</a></li>
    <li><a href="#FTP">How do I use my browser for ftp with Tor?</a></li>
    <li><a href="#NoDataScrubbing">Does Tor remove personal information
    from the data my application sends?</a></li>
    <li><a href="#Metrics">How many people use Tor? How many relays or
    exit nodes are there?</a></li>
    <li><a href="#SSLcertfingerprint">What are your SSL certificate
    fingerprints?</a></li>
    </ul>

    <p>Compilation and Installation:</p>

    <ul>
    <li><a href="#HowUninstallTor">How do I uninstall Tor?</a></li>
    <li><a href="#PGPSigs">What are these "sig" files on the download
    page?</a></li>
    <li><a href="#GetTor">Your website is blocked in my country. How
    do I download Tor?</a></li>
    <li><a href="#VirusFalsePositives">Why does my Tor executable appear to
    have a virus or spyware?</a></li>
    <li><a href="#tarballs">How do I open a .tar.gz or .tar.xz file?</a></li>
    <li><a href="#LiveCD">Is there a LiveCD or other bundle that
    includes Tor?</a></li>
    </ul>

    <p>Tor Browser Bundle (general):</p>

    <ul>
    <li><a href="#TBBFlash">Why can't I view videos on YouTube and other
    Flash-based sites?</a></li>
    <li><a href="#Ubuntu">I'm using Ubuntu, and I can't start Tor
    Browser.</a></li>
    <li><a href="#SophosOnMac">I'm using the Sophos anti-virus
    software on my Mac, and Tor starts but I can't browse anywhere.</a></li>
    <li><a href="#XPCOMError">When I open the Tor Browser Bundle I get an
    error message from the browser: "Cannot load XPCOM".</a></li>
    <li><a href="#TBBOtherExtensions">Can I install other Firefox
    extensions? Which extensions should I avoid using?</a></li>
    <li><a href="#TBBJavaScriptEnabled">Why is NoScript configured to
    allow JavaScript by default in the Tor Browser Bundle? Isn't that
    unsafe?</a></li>
    <li><a href="#TBBOtherBrowser">I want to use Chrome/IE/Opera/etc
    with Tor.</a></li>
    <li><a href="#GoogleCAPTCHA">Google makes me solve a CAPTCHA or tells
    me I have spyware installed.</a></li>
    <li><a href="#ForeignLanguages">Why does Google show up in foreign
    languages?</a></li>
    <li><a href="#GmailWarning">Gmail warns me that my account may have
    been compromised.</a></li>
    <li><a href="#NeedToUseAProxy">My internet connection requires an HTTP
    or SOCKS Proxy.</a></li>
    <li><a href="#TBBSocksPort">I want to
    run another application through Tor.</a></li>
    <li><a href="#CantSetProxy">What should I do if I can't set a proxy
    with my application?</a></li>
    </ul>

    <p>Tor Browser Bundle (3.x series):</p>

    <ul>
    <li><a href="#WhereDidVidaliaGo">Where did the world map (Vidalia)
    go?</a></li>
    <li><a href="#DisableJS">How do I disable JavaScript?</a></li>
    <li><a href="#VerifyDownload">How do I verify the
    download?</a></li>
    <li><a href="#NewIdentityClosingTabs">Why does "New Identity" close
    all my open tabs?</a></li>
    <li><a href="#ConfigureRelayOrBridge">How do I configure Tor as a relay
    or bridge?</a></li>
    <li><a href="#Timestamps">Why are the file timestamps from 2000?</a></li>
    <li><a href="#TBBSourceCode">Where is the source code for the bundle? How do
    I verify a build?</a></li>
    </ul>

    <p>Advanced Tor usage:</p>

    <ul>
    <li><a href="#torrc">I'm supposed to "edit my torrc". What does
    that mean?</a></li>
    <li><a href="#Logs">How do I set up logging, or see Tor's
    logs?</a></li>
    <li><a href="#LogLevel">What log level should I use?</a></li>
    <li><a href="#DoesntWork">Tor is running, but it's not working
    correctly.</a></li>
    <li><a href="#TorCrash">My Tor keeps crashing.</a></li>
    <li><a href="#ChooseEntryExit">Can I control which nodes (or country)
    are used for entry/exit?</a></li>
    <li><a href="#FirewallPorts">My firewall only allows a few outgoing
    ports.</a></li>
    <li><a href="#DefaultExitPorts">Is there a list of default exit ports?</a></li>
    <li><a href="#WarningsAboutSOCKSandDNSInformationLeaks">I keep seeing
    these warnings about SOCKS and DNS information leaks. Should I
    worry?</a></li>
    <li><a href="#SocksAndDNS">How do I check if my application that uses
    SOCKS is leaking DNS requests?</a></li>
    </ul>

    <p>Running a Tor relay:</p>

    <ul>
    <li><a href="#HowDoIDecide">How do I decide if I should run a
    relay?</a></li>
    <li><a href="#WhyIsntMyRelayBeingUsedMore">Why isn't my relay being
    used more?</a></li>
    <li><a href="#IDontHaveAStaticIP">I don't have a static IP.</a></li>
    <li><a href="#PortscannedMore">Why do I get portscanned more often
    when I run a Tor relay?</a></li>
    <li><a href="#HighCapacityConnection">How can I get Tor to fully
    make use of my high capacity connection?</a></li>
    <li><a href="#RelayFlexible">How stable does my relay need to
    be?</a></li>
    <li><a href="#BandwidthShaping">What bandwidth shaping options are
    available to Tor relays?</a></li>
    <li><a href="#LimitTotalBandwidth">How can I limit the total amount
    of bandwidth used by my Tor relay?</a></li>
    <li><a href="#RelayWritesMoreThanItReads">Why does my relay write
    more bytes onto the network than it reads?</a></li>
    <li><a href="#Hibernation">Why can I not browse anymore after
    limiting bandwidth on my Tor relay?</a></li>
    <li><a href="#ExitPolicies">I'd run a relay, but I don't want to deal
    with abuse issues.</a></li>
    <li><a href="#BestOSForRelay">Why doesn't my Windows (or other OS) Tor
    relay run well?</a></li>
    <li><a href="#PackagedTor">Should I install Tor from my package manager,
    or build from source?</a></li>
    <li><a href="#WhatIsTheBadExitFlag">What is the BadExit flag?</a></li>
    <li><a href="#IGotTheBadExitFlagWhyDidThatHappen">I got the BadExit flag.
    Why did that happen?</a></li>
    <li><a href="#MyRelayRecentlyGotTheGuardFlagAndTrafficDroppedByHalf">My
    relay recently got the Guard flag and traffic dropped by half.</a></li>
    <li><a href="#TorClientOnADifferentComputerThanMyApplications">I want to
    run my Tor client on a different computer than my applications.</a></li>
    <li><a href="#ServerClient">Can I install Tor on a central server, and
    have my clients connect to it?</a></li>
    <li><a href="#JoinTheNetwork">So I can just configure a nickname and
    ORPort and join the network?</a></li>
    <li><a href="#RelayOrBridge">Should I be a normal relay or bridge
    relay?</a></li>
    <li><a href="#UpgradeOrMove">I want to upgrade/move my relay. How do I
    keep the same key?</a></li>
    <li><a href="#MultipleRelays">I want to run more than one
    relay.</a></li>
    <li><a href="#NTService">How do I run my Tor relay as an NT
    service?</a></li>
    <li><a href="#VirtualServer">Can I run a Tor relay from my virtual
    server account?</a></li>
    <li><a href="#WrongIP">My relay is picking the wrong IP address.</a></li>
    <li><a href="#BehindANAT">I'm behind a NAT/Firewall.</a></li>
    <li><a href="#RelayMemory">Why is my Tor relay using so much
    memory?</a></li>
    <li><a href="#BetterAnonymity">Do I get better anonymity if I run a
    relay?</a></li>
    <li><a href="#FacingLegalTrouble">I'm facing legal trouble. How do I
    prove that my server was a Tor relay at a given time?</a></li>
    <li><a href="#RelayDonations">Can I donate for a relay rather than
    run my own?</a></li>
    </ul>

    <p>Tor hidden services:</p>

    <ul>
    <li><a href="#AccessHiddenServices">How do I access hidden services?</a></li>
    <li><a href="#ProvideAHiddenService">How do I provide a hidden service?</a></li>
    </ul>


    <p>Development:</p>

    <ul>
    <li><a href="#VersionNumbers">What do these weird version numbers
    mean?</a></li>
    <li><a href="#PrivateTorNetwork">How do I set up my own private
    Tor network?</a></li>
    <li><a href="#UseTorWithJava">How can I make my Java program use the
    Tor network?</a></li>
    <li><a href="#WhatIsLibevent">What is Libevent?</a></li>
    <li><a href="#MyNewFeature">What do I need to do to get a new feature
    into Tor?</a></li>
    </ul>

    <p>Anonymity and Security:</p>

    <ul>
    <li><a href="#WhatProtectionsDoesTorProvide">What protections does Tor
    provide?</a></li>
    <li><a href="#CanExitNodesEavesdrop">Can exit nodes eavesdrop on
    communications? Isn't that bad?</a></li>
    <li><a href="#AmITotallyAnonymous">So I'm totally anonymous if I use
    Tor?</a></li>
    <li><a href="#ExitEnclaving">What is Exit Enclaving?</a></li>
    <li><a href="#KeyManagement">Tell me about all the keys Tor
    uses.</a></li>
    <li><a href="#EntryGuards">What are Entry Guards?</a></li>
    <li><a href="#ChangePaths">How often does Tor change its paths?</a></li>
    <li><a href="#CellSize">Tor uses hundreds of bytes for every IRC line. I
    can't afford that!</a></li>
    <li><a href="#OutboundConnections">Why does netstat show these outbound
    connections?</a></li>
    <li><a href="#PowerfulBlockers">What about powerful blocking
    mechanisms?</a></li>
    <li><a href="#RemotePhysicalDeviceFingerprinting">Does Tor resist
    "remote physical device fingerprinting"?</a></li>
    <li><a href="#IsTorLikeAVPN">Is Tor like a VPN?</a></li>
    <li><a href="#Proxychains">Aren't 10 proxies (proxychains) better than
    Tor with only 3 hops?</a></li>
    <li><a href="#AttacksOnOnionRouting">What attacks remain against onion
    routing?</a></li>
    <li><a href="#LearnMoreAboutAnonymity">Where can I learn more about anonymity?</a></li>
    </ul>

    <p>Alternate designs that we don't do (yet):</p>

    <ul>
    <li><a href="#EverybodyARelay">You should make every Tor user be a
    relay.</a></li>
    <li><a href="#TransportIPnotTCP">You should transport all IP packets,
    not just TCP packets.</a></li>
    <li><a href="#HideExits">You should hide the list of Tor relays,
    so people can't block the exits.</a></li>
    <li><a href="#ChoosePathLength">You should let people choose their path
    length.</a></li>
    <li><a href="#SplitEachConnection">You should split each connection over
    many paths.</a></li>
    <li><a href="#MigrateApplicationStreamsAcrossCircuits">You should migrate
    application streams across circuits.</a></li>
    <li><a href="#LetTheNetworkPickThePath">You should let the network pick
    the path, not the client.</a></li>
    <li><a href="#UnallocatedNetBlocks">Your default exit policy should block
    unallocated net blocks too.</a></li>
    <li><a href="#BlockWebsites">Exit policies should be able to block
    websites, not just IP addresses.</a></li>
    <li><a href="#BlockContent">You should change Tor to prevent users from
    posting certain content.</a></li>
    <li><a href="#SendPadding">You should send padding so it's more
    secure.</a></li>
    <li><a href="#Steganography">You should use steganography to hide Tor
    traffic.</a></li>
    </ul>

    <p>Abuse:</p>

    <ul>
    <li><a href="#Criminals">Doesn't Tor enable criminals to do bad
    things?</a></li>
    <li><a href="#RespondISP">How do I respond to my ISP about my exit
    relay?</a></li>
    <li><a href="#HelpPoliceOrLawyers">I have questions about
    a Tor IP address for a legal case.</a></li>
    </ul>

    <p>For other questions not yet on this version of the FAQ, see the
    <a href="<wikifaq>">wiki FAQ</a> for now.</p>


    <a id="General"></a>
    <h2><a class="anchor">General:</a></h2>

    <a id="WhatIsTor"></a>
    <h3><a class="anchor" href="#WhatIsTor">What is Tor?</a></h3>

    <p>The name "Tor" can refer to several different components.</p>

    <p>The Tor software is a program you can run on your computer that
    helps keep you safe on the Internet. Tor protects you by bouncing your
    communications around a distributed network of relays run by volunteers
    all around the world: it prevents somebody watching your Internet
    connection from learning what sites you visit, and it prevents the
    sites you visit from learning your physical location. This set of
    volunteer relays is called the Tor network. You can read more about how
    Tor works on the <a href="<page about/overview>">overview page</a>.</p>

    <p>The Tor Project is a non-profit (charity) organization that
    maintains and develops the Tor software.</p>


    <a id="Torisdifferent"></a>
    <h3><a class="anchor" href="#Torisdifferent">How is Tor different
from other proxies?</a></h3>
    <p>A typical proxy provider sets up a server somewhere on the Internet
    and allows you to use it to relay your traffic.  This creates a simple,
    easy to maintain architecture.  The users all enter and leave through
    the same server.  The provider may charge for use of the proxy, or fund
    their costs through advertisements on the server.  In the simplest
    configuration, you don't have to install anything.  You just have to
    point your browser at their proxy server.  Simple proxy providers are
    fine solutions if you do not want protections for your privacy and
    anonymity online and you trust the provider not to do bad things.  Some
    simple proxy providers use SSL to secure your connection to them.  This
    may protect you against local eavesdroppers, such as those at a cafe
    with free wifi Internet.</p>
    <p>Simple proxy providers also create a single point of failure.  The
    provider knows who you are and where you browse on the Internet.  They
    can see your traffic as it passes through their server.  In some cases,
    they can even see inside your encrypted traffic as they relay it to
    your banking site or to ecommerce stores.  You have to trust the
    provider isn't doing any number of things, such as watching your
    traffic, injecting their own advertisements into your traffic stream,
    and recording your personal details.</p>
    <p>Tor passes your traffic through at least 3 different servers before
    sending it on to the destination.  Because there's a separate layer of
    encryption for each of the three relays, Tor does not modify, or even
    know, what you are sending into it.  It merely relays your traffic,
    completely encrypted, through the Tor network and has it pop out
    somewhere else in the world, completely intact.  The Tor client is
    required because we assume you trust your local computer.  The Tor
    client manages the encryption and the path chosen through the network.
    The relays located all over the world merely pass encrypted packets
    between themselves.</p>
    <dl>
    <dt>Doesn't the first server see who I am?</dt>
    <dd>Possibly. A bad first of three servers can see encrypted Tor
    traffic coming from your computer.  It still doesn't know who you are
    and what you are doing over Tor.  It merely sees "This IP address is
    using Tor".  Tor is not illegal anywhere in the world, so using Tor by
    itself is fine.  You are still protected from this node figuring out
    who you are and where you are going on the Internet.</dd>
    <dt>Can't the third server see my traffic?</dt>
    <dd>Possibly.  A bad third of three servers can see the traffic you
    sent into Tor.  It won't know who sent this traffic.  If you're using
    encryption, such as visiting a bank or e-commerce website, or encrypted
    mail connections, etc, it will only know the destination.  It won't be
    able to see the data inside the traffic stream.  You are still
    protected from this node figuring out who you are and, if using
    encryption, what data you're sending to the destination.</dd>
    </dl>
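    The layering described above can be sketched in a few lines of Python. This is a toy illustration only &mdash; real Tor negotiates a key with each relay through a telescoping circuit and uses real ciphers (TLS, AES); the XOR "cipher" and the hop keys below are invented purely for demonstration:

```python
# Toy sketch of Tor's layered ("onion") encryption -- illustration only.
# Real Tor uses TLS and AES with keys negotiated per hop; the XOR "cipher"
# and the key values here are invented for demonstration.
import itertools

def xor_layer(data, key):
    # Symmetric toy cipher: XOR against a repeating key (NOT real crypto).
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def wrap(message, keys):
    # The client adds one layer per hop, with the exit relay's layer innermost.
    for key in reversed(keys):
        message = xor_layer(message, key)
    return message

def peel(cell, key):
    # Each relay removes exactly one layer and forwards the remainder.
    return xor_layer(cell, key)

keys = [b"guard", b"middle", b"exit"]   # one shared key per hop
cell = wrap(b"GET /index.html", keys)

for key in keys:                        # the cell travels hop by hop;
    cell = peel(cell, key)              # no hop can see past its own layer

print(cell)                             # the exit relay recovers the request
```

    Note that no single hop holds all three keys: the first relay learns only who you are, and the last only what you requested, mirroring the first-server/third-server answers above.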


    <a id="CompatibleApplications"></a>
    <h3><a class="anchor" href="#CompatibleApplications">What programs
can I use with Tor?</a></h3>

    <p>If you want to use Tor with a web browser, we provide the Tor Browser
    Bundle, which includes everything you need to browse the web safely using
    Tor. If you want to use another web browser with Tor, see <a
    href="#TBBOtherBrowser">Other web browsers</a>.
    There are plenty of other programs you can use with Tor,
    but we haven't researched the application-level anonymity
    issues on all of them well enough to be able to recommend a safe
    configuration. Our wiki has a list of instructions for <a
    href="<wiki>doc/TorifyHOWTO">Torifying
    specific applications</a>.
    Please add to these lists and help us keep them accurate!</p>


    <a id="WhyCalledTor"></a>
    <h3><a class="anchor" href="#WhyCalledTor">Why is it called Tor?</a></h3>

    <p>Because Tor is the onion routing network. When we were starting the
    new next-generation design and implementation of onion routing in
    2001-2002, we would tell people we were working on onion routing,
    and they would say "Neat. Which one?" Even if onion routing has
    become a standard household term, Tor was born out of the actual <a
    href="http://www.onion-router.net/">onion routing project</a> run by
    the Naval Research Lab.</p>

    <p>(It's also got a fine translation from German and Turkish.)</p>

    <p>Note: even though it originally came from an acronym, Tor is not
    spelled "TOR". Only the first letter is capitalized. In fact, we can
    usually spot people who haven't read any of our website (and have
    instead learned everything they know about Tor from news articles) by
    the fact that they spell it wrong.</p>


    <a id="Backdoor"></a>
    <h3><a class="anchor" href="#Backdoor">Is there a backdoor in Tor?</a></h3>

    <p>There is absolutely no backdoor in Tor. Nobody has asked us to put
    one in, and we know some smart lawyers who say that it's unlikely that
    anybody will try to make us add one in our jurisdiction (U.S.). If they
    do ask us, we will fight them, and (the lawyers say) probably win.</p>

    <p>We think that putting a backdoor in Tor would be tremendously
    irresponsible to our users, and a bad precedent for security software
    in general. If we ever put a deliberate backdoor in our security
    software, it would ruin our professional reputations. Nobody would
    trust our software ever again &mdash; for excellent reason!</p>

    <p>But that said, there are still plenty of subtle attacks
    people might try. Somebody might impersonate us, or break into our
    computers, or something like that. Tor is open source, and you should
    always check the source (or at least the diffs since the last release)
    for suspicious things. If we (or the distributors) don't give you
    source, that's a sure sign something funny might be going on. You
    should also check the <a href="<page docs/verifying-signatures>">PGP
    signatures</a> on the releases, to make sure nobody messed with the
    distribution sites.</p>

    <p>Also, there might be accidental bugs in Tor that could affect your
    anonymity. We periodically find and fix anonymity-related bugs, so make
    sure you keep your Tor versions up-to-date.</p>


    <a id="DistributingTor"></a>
    <h3><a class="anchor" href="#DistributingTor">Can I distribute Tor?</a></h3>


    <p>The Tor software is <a href="https://www.fsf.org/">free software</a>. This
    means we give you the rights to redistribute the Tor software, either
    modified or unmodified, either for a fee or gratis. You don't have to
    ask us for specific permission.</p>

    <p>However, if you want to redistribute the Tor software you must follow our
    <a href="<gitblob>LICENSE">LICENSE</a>.
    Essentially this means that you need to include our LICENSE file along
    with whatever part of the Tor software you're distributing.</p>

    <p>Most people who ask us this question don't want to distribute just the
    Tor software, though. They want to distribute the <a
    href="<page projects/torbrowser>">Tor Browser</a>. This includes <a
    href="https://www.mozilla.org/en-US/firefox/organizations/">Firefox
    Extended Support Release</a>, and the NoScript and HTTPS-Everywhere
    extensions. You will need to follow the license for those programs as
    well. Both of those Firefox extensions are distributed under
    the <a href="https://www.fsf.org/licensing/licenses/gpl.html">GNU General
    Public License</a>, while Firefox ESR is released under the Mozilla Public
    License. The simplest way to obey their licenses is to include the source
    code for these programs everywhere you include the bundles themselves.</p>

    <p>Also, you should make sure not to confuse your readers about what Tor is,
    who makes it, and what properties it provides (and doesn't provide). See
    our <a href="<page docs/trademark-faq>">trademark FAQ</a> for details.</p>

    <p>Lastly, you should realize that we release new versions of the
    Tor software frequently, and sometimes we make backward incompatible
    changes. So if you distribute a particular version of the Tor software, it
    may not be supported &mdash; or even work &mdash; six months later. This
    is a fact of life for all security software under heavy development.</p>


    <a id="SupportMail"></a>
    <h3><a class="anchor" href="#SupportMail">How can I get support?</a></h3>

    <p>Your best bet is to first try the following:</p>
    <ul>
    <li>Read through this <a href="<page docs/faq>">FAQ</a>.</li>
    <li>Read through the <a href="<page
    docs/documentation>">documentation</a>.</li>
    <li>Read through the <a
    href="https://lists.torproject.org/pipermail/tor-talk/">tor-talk
    archives</a> and see if your question is already answered.</li>
    <li>Join our <a href="ircs://irc.torproject.org#tor">irc channel</a>,
    state the issue, and wait for help.</li>
    <li>Send an email to <a
    href="mailto:help@rt.torproject.org">help@rt.torproject.org</a>.</li>
    <li>If all else fails, try <a href="<page about/contact>">contacting
    us</a> directly.</li>
    </ul>

    <p>If you find your answer, please stick around on the IRC channel
or the
    mailing list to help others who were once in your position.</p>


    <a id="Forum"></a>
    <h3><a class="anchor" href="#Forum">Is there a Tor forum?</a></h3>

    <p>We have a <a href="https://tor.stackexchange.com/">StackExchange
    page</a> that is currently in public beta.</p>


    <a id="WhySlow"></a>
    <h3><a class="anchor" href="#WhySlow">Why is Tor so slow?</a></h3>

    <p>There are many reasons why the Tor network is currently slow.</p>

    <p>Before we answer, though, you should realize that Tor is never going
    to be blazing fast. Your traffic is bouncing through volunteers'
    computers in various parts of the world, and some bottlenecks and
    network latency will always be present. You shouldn't expect to see
    university-style bandwidth through Tor.</p>

    <p>But that doesn't mean that it can't be improved. The current Tor
    network is quite small compared to the number of people trying to use
    it, and many of these users don't understand or care that Tor can't
    currently handle file-sharing traffic load.</p>

    <p>For the much more in-depth answer, see <a
    href="<blog>why-tor-is-slow">Roger's blog
    post on the topic</a>, which includes both a detailed PDF and a video
    to go with it.</p>

    <p>What can you do to help?</p>


    <p><a href="<page docs/tor-doc-relay>">Configure your Tor to relay
    traffic for others</a>. Help make the Tor network large enough that we
    can handle all the users who want privacy and security on the
    Internet.</p>
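    Relaying is configured in your torrc file. A minimal sketch of a non-exit relay, using real torrc options but placeholder values (see the relay-configuration guide linked above for the full walkthrough):

```
## torrc sketch: volunteer as a non-exit relay (values are placeholders)
Nickname ExampleTorRelay
ORPort 9001
ExitPolicy reject *:*          # carry traffic inside the network, never exit
ContactInfo admin@example.com
```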

    <p><a href="<page projects/vidalia>">Help us make Tor more usable</a>.
    We especially need people to help make it easier to configure your Tor
    as a relay. Also, we need help with clear simple documentation to
    walk people through setting it up.</p>

    <p>There are some bottlenecks in the current Tor network. Help us run
    experiments to track down and demonstrate where the problems are, and
    then we can focus better on fixing them.</p>

    <p>Tor needs some architectural changes too. One important change is to
    start providing <a href="#EverybodyARelay">better service to people who
    relay traffic</a>. We're working on this, and we'll finish faster if we
    get to spend more time on it.</p>

    <p>Help do other things so we can do the hard stuff. Please take a
    moment to figure out what your skills and interests are, and then <a
    href="<page getinvolved/volunteer>">look at our volunteer page</a>.</p>

    <p>Help find sponsors for Tor. Do you work at a company or government
    agency that uses Tor or has a use for Internet privacy, e.g. to browse
    the competition's websites discreetly, or to connect back to the home
    servers when on the road without revealing affiliations? If your
    organization has an interest in keeping the Tor network working, please
    contact them about supporting Tor. Without sponsors, Tor is going to
    become even slower.</p>

    <p>If you can't help out with any of the above, you can still help out
    individually by <a href="<page donate/donate>">donating a bit of money
    to the cause</a>. It adds up!</p>



    <a id="FileSharing"></a>
    <h3><a class="anchor" href="#FileSharing">How can I share files
    anonymously through Tor?</a></h3>

    <p>File sharing (peer-to-peer/P2P) is widely unwanted in the Tor
    network, and exit nodes are configured to block file sharing traffic by
    default. Tor is not really designed for it, and file sharing through
    Tor slows down everyone's browsing. Also, Bittorrent over Tor <a
    href="<blog>bittorrent-over-tor-isnt-good-idea">is not anonymous</a>!</p>


    <a id="Funding"></a>
    <h3><a class="anchor" href="#Funding">What would The Tor Project do
with more funding?</a></h3>

    <p>The Tor network's <a
    href="https://metrics.torproject.org/networksize.html">several
    thousand</a> relays push <a
    href="https://metrics.torproject.org/bandwidth.html">over
    1GB per second on average</a>. We have <a
    href="https://metrics.torproject.org/users.html">several
    hundred thousand daily users</a>. But the Tor network is not yet
    self-sustaining.</p>

    <p>There are six main development/maintenance pushes that need
    attention:</p>

    <p>Scalability: We need to keep scaling and decentralizing the Tor
    architecture so it can handle thousands of relays and millions of
    users. The upcoming stable release is a major improvement, but there's
    lots more to be done next in terms of keeping Tor fast and stable.</p>

    <p>User support: With this many users, a lot of people are asking
    questions all the time, offering to help out with things, and so
    on. We need good clean docs, and we need to spend some effort
    coordinating volunteers.</p>

    <p>Relay support: the Tor network is run by volunteers, but they still
    need attention with prompt bug fixes, explanations when things go
    wrong, reminders to upgrade, and so on. The network itself is a
    commons, and somebody needs to spend some energy making sure the relay
    operators stay happy. We also need to work on stability on some
    platforms &mdash; e.g., Tor relays have problems on Win XP
    currently.</p>

    <p>Usability: Beyond documentation, we also need to work on usability
    of the software itself. This includes installers, clean GUIs, easy
    configuration to interface with other applications, and generally
    automating all the difficult and confusing steps inside Tor. We've got
    a start on this with the <a href="<page projects/vidalia>">Vidalia
    GUI</a>, but much more work remains &mdash; usability for privacy
    software has never been easy.</p>

    <p>Incentives: We need to work on ways to encourage people to configure
    their Tors as relays and exit nodes rather than just clients.
    <a href="#EverybodyARelay">We need to make it easy to become a relay,
    and we need to give people incentives to do it.</a></p>

    <p>Research: The anonymous communications field is full
    of surprises and gotchas. In our copious free time, we
    also help run top anonymity and privacy conferences like <a
    href="http://petsymposium.org/">PETS</a>. We've identified a set of
    critical <a href="<page getinvolved/volunteer>#Research">Tor research
    questions</a> that will help us figure out how to make Tor secure
    against the variety of attacks out there. Of course, there are more
    research questions waiting behind these.</p>


    <p>We're continuing to move forward on all of these, but at this rate
    <a href="#WhySlow">the Tor network is growing faster than the
    developers can keep up</a>.
    Now would be an excellent time to add a few more developers to the
    effort so we can continue to grow the network.</p>

    <p>We are also excited about tackling related problems, such as
    censorship-resistance.</p>

    <p>We are proud to have <a href="<page about/sponsors>">sponsorship and
    support</a> from the Omidyar Network, the International Broadcasting
    Bureau, Bell Security Solutions, the Electronic Frontier Foundation,
    several government agencies and research groups, and hundreds of
    private contributors.</p>

    <p>However, this support is not enough to keep Tor abreast of changes
    in the Internet privacy landscape. Please <a href="<page
    donate/donate>">donate</a> to the project, or <a href="<page
    about/contact>">contact</a> our executive director for information on
    making grants or major donations.</p>


    <a id="Mobile"></a>
    <h3><a class="anchor" href="#Mobile">Can I use Tor on my phone or
    mobile device?</a></h3>

    <p>Tor on Android devices is maintained by the <a
    href="https://guardianproject.info">Guardian Project</a>. Currently,
    there is no supported way of using Tor on iOS; the Guardian Project is
    working to make this a reality in the future.</p>


     <a id="OutboundPorts"></a>
    <h3><a class="anchor" href="#OutboundPorts">Do I have to open all these
    outbound ports on my firewall?</a></h3>

    <p>Tor may attempt to connect to any port that is advertised in the
    directory as an ORPort (for making Tor connections) or a DirPort (for
    fetching updates to the directory). There are a variety of these ports:
    many of them are running on 80, 443, 9001, and 9030, but many use other
    ports too.</p>

    <p>As a client: you could probably get away with opening only those four
    ports. Since Tor does all its connections in the background, it will retry
    ones that fail, and hopefully you'll never have to know that it failed, as
    long as it finds a working one often enough. However, to get the most
    diversity in your entry nodes &mdash; and thus the most security
    &mdash; as well as the most robustness in your connectivity, you'll
    want to let it connect to all of them.
    See the FAQ entry on <a href="#FirewallPorts">firewalled ports</a> if
    you want to explicitly tell your Tor client which ports are reachable
    for you.</p>

    <p>As a relay: you must allow outgoing connections to every other relay
    and to anywhere your exit policy advertises that you allow. The
    cleanest way to do that is simply to allow all outgoing connections
    at your firewall. If you don't, clients will ask you to extend to
    those relays, and those connections will fail, leading to complex
    anonymity implications for the clients which we'd like to avoid.</p>
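    For the restricted-client case, the real torrc options <code>ReachableAddresses</code> and <code>FascistFirewall</code> express the constraint; a minimal sketch, assuming a firewall that only passes ports 80 and 443:

```
## torrc sketch: client behind a firewall that only allows ports 80 and 443
ReachableAddresses *:80, *:443
## equivalently, "FascistFirewall 1" restricts Tor to ports 80 and 443
```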


    <a id="IsItWorking"></a>
    <h3><a class="anchor" href="#IsItWorking">How can I tell if Tor is
    working, and that my connections really are anonymized?</a></h3>

    <p>There are sites you can visit that will tell you if you appear to be
    coming through the Tor network. Try the <a
    href="https://check.torproject.org">Tor Check</a> site and see whether
    it thinks you are using Tor or not.</p>


    <a id="FTP"></a>
    <h3><a class="anchor" href="#FTP">How do I use my browser for ftp with
    Tor?</a></h3>

    <p>Use the <a href="https://torproject.org/projects/torbrowser.html">Tor
    Browser Bundle</a>. If you want a separate application for an
    ftp client, we've heard good things about FileZilla for Windows. You can
    configure it to point to Tor as a "socks4a" proxy on "localhost", on
    the port your Tor's SocksPort is listening on.</p>


    <a id="NoDataScrubbing"></a>
    <h3><a class="anchor" href="#NoDataScrubbing">Does Tor remove personal
    information from the data my application sends?</a></h3>

    <p>No, it doesn't. You need to use a separate program that understands
    your application and protocol and knows how to clean or "scrub" the data
    it sends. The Tor Browser Bundle tries to keep application-level data,
    like the user-agent string, uniform for all users. The Tor Browser can't
    do anything about text that you type into forms, though. <a
    href="<page download/download-easy>#warning">Be
    careful and be smart.</a></p>


    <a id="Metrics"></a>
    <h3><a class="anchor" href="#Metrics">How many people use Tor? How
    many relays or exit nodes are there?</a></h3>

    <p>All this and more about measuring Tor can be found at the <a
    href="https://metrics.torproject.org/">Tor Metrics Portal</a>.</p>

    <a id="SSLcertfingerprint"></a>
    <h3><a class="anchor" href="#SSLcertfingerprint">What are the SSL
    certificate fingerprints for Tor's various websites?</a></h3>
    <p>*.torproject.org SSL certificate from Digicert:</p>
<pre>
Issued Certificate
Version: 3
Serial Number: 09 48 B1 A9 3B 25 1D 0D B1 05 10 59 E2 C2 68 0A
Not Valid Before: 2013-10-22
Not Valid After: 2016-05-03
Certificate Fingerprints
SHA1: 84 24 56 56 8E D7 90 43 47 AA 89 AB 77 7D A4 94 3B A1 A7 D5
MD5: A4 16 66 80 AE B9 A4 EC AA 88 01 1B 6F B9 EB CB
</pre>
    <p>blog.torproject.org SSL certificate from RapidSSL:</p>
<pre>
Issued Certificate
Version: 3
Serial Number: 05 CA 2A A9 A5 D6 ED 44 C7 2D 88 1A 18 B0 E7 DC
Not Valid Before: 2014-04-09
Not Valid After: 2017-06-14
Certificate Fingerprints
SHA1: DE 20 3D 46 FD C3 68 EB BA 40 56 39 F5 FA FD F5 4E 3A 1F 83
MD5: 8A 8A A2 5E D9 7F 84 4C 8F 00 3B 43 E0 2D E6 4D
</pre>
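A fingerprint like the ones above is just a hash of the certificate's DER bytes, formatted in hex pairs. A minimal sketch of that formatting (fetching and DER-converting a real certificate is left out; the input bytes here are placeholders):

```python
import hashlib

def fingerprint(der_bytes: bytes, algo: str = "sha1") -> str:
    """Format a certificate fingerprint the way the FAQ lists them:
    upper-case hex bytes separated by spaces."""
    digest = hashlib.new(algo, der_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# With a real certificate you would pass its DER encoding; here we just
# demonstrate the formatting on placeholder bytes.
print(fingerprint(b"not a real certificate"))
```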

    <a id="CompilationAndInstallation"></a>
    <h2><a class="anchor">Compilation And Installation:</a></h2>

    <a id="HowUninstallTor"></a>
    <h3><a class="anchor" href="#HowUninstallTor">How do I uninstall Tor?</a></h3>

    Tor Browser does not install itself in the classic sense of
    applications. Simply delete the folder or directory named "Tor
    Browser" and it is removed from your system.

    If this is not related to Tor Browser, uninstallation depends
    entirely on how you installed it and which operating system you
    have. If you installed a package, then hopefully your package has
    a way to uninstall itself. The Windows packages include uninstallers.

    For Mac OS X, follow the <a
    href="<page docs/tor-doc-osx>#uninstall">uninstall directions</a>.

    If you installed from source, I'm afraid there is no easy uninstall
    method. But on the bright side, by default it only installs into
    /usr/local/ and it should be pretty easy to notice things there.


    <a id="PGPSigs"></a>
    <h3><a class="anchor" href="#PGPSigs">What are these "sig" files on
the download page?</a></h3>

    These are PGP signatures, so you can verify that the file you've
downloaded is
    exactly the one that we intended you to get.

    Please read the <a
    href="<page docs/verifying-signatures>">verifying signatures</a>
page for details.


<a id="GetTor"></a>
<h3><a class="anchor" href="#GetTor">Your website is blocked in my
country. How do I download Tor?</a></h3>

Some government or corporate firewalls censor connections to Tor's
website. In those cases, you have three options. First, get it from
a friend &mdash; the <a href="<page projects/torbrowser>">Tor Browser
Bundle</a> fits nicely on a USB key. Second, find the <a
href="<page getinvolved/mirrors>">Tor mirrors</a> page
and see if any of those copies of our website work for you. Third,
you can download Tor via email: log in to your Gmail account and mail
'<tt>gettor@gettor.torproject.org</tt>'. If you include the word 'help'
in the body of the email, it will reply with instructions. Note that
only a few webmail providers are supported, since they need to be able
to receive very large attachments.

Be sure to <a href="<page docs/verifying-signatures>">verify the
signature</a> of any package you download, especially when you get it from somewhere
other than our official HTTPS website.


    <a id="VirusFalsePositives"></a>
    <h3><a class="anchor" href="#VirusFalsePositives">Why does my
    Tor executable appear to have a virus or spyware?</a></h3>
    Sometimes, overzealous Windows virus and spyware detectors trigger on
    some parts of the Tor Windows binary. Our best guess is that these are
    false positives &mdash; after all, the anti-virus and anti-spyware business is
    just a guessing game anyway. You should contact your vendor and explain
    that you have a program that seems to be triggering false positives. Or
    pick a better vendor.
    <p>In the meantime, we encourage you to not just take our word for it.
    Our job is to provide the source; if you're concerned, please do
    recompile it yourself.</p>


    <a id="tarballs"></a>
    <h3><a class="anchor" href="#tarballs">How do I open a .tar.gz
    or .tar.xz file?</a></h3>

    Tar is a common archive utility for Unix and Linux systems. If your
    system has a mouse, you can usually open them by double-clicking.
    Otherwise open a command prompt and execute
    <pre>tar xzf &lt;FILENAME&gt;.tar.gz</pre> or <pre>tar xJf &lt;FILENAME&gt;.tar.xz</pre>
    as documented on tar's man page.


    <a id="LiveCD"></a>
    <h3><a class="anchor" href="#LiveCD">Is there a LiveCD or other
bundle that includes Tor?</a></h3>

    Yes.  Use <a href="https://tails.boum.org/">The Amnesic Incognito
    Live System</a> or <a href="<page projects/torbrowser>">the Tor
    Browser Bundle</a>.

<a id="TBBGeneral"></a>
<h2><a class="anchor">Tor Browser Bundle (general):</a></h2>

<a id="TBBFlash"></a>
<h3><a class="anchor" href="#TBBFlash">Why can't I view videos on YouTube
and other Flash-based sites?</a></h3>

YouTube and similar sites require third party browser plugins such as Flash.
Plugins operate independently from Firefox and can perform
activity on your computer that ruins your anonymity. This includes
but is not limited to: <a href="http://decloak.net">completely disregarding
proxy settings</a>, querying your
local IP address, and <a
href="http://epic.org/privacy/cookies/flash.html">storing their own
cookies</a>. It is possible to use a LiveCD solution such as
<a href="https://tails.boum.org/">The Amnesic Incognito Live System</a>
that creates a secure, transparent proxy to protect you from proxy bypass,
however issues with local IP address discovery and Flash cookies still remain.

<a href="https://www.youtube.com/html5">YouTube offers experimental HTML5 video
support</a> for many of their videos. Often you can get the HTML5 version of
videos that don't want to play by grabbing the YouTube URL from the "Embed"
code under a video's "Share" option, which uses a different URL format than
the normal watch page.
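The URL rewrite described above can be sketched as follows. The /embed/ path reflects YouTube's current layout and may change; the video ID is a placeholder:

```python
from urllib.parse import urlparse, parse_qs

def embed_url(watch_url: str) -> str:
    """Turn a youtube.com/watch?v=ID link into the /embed/ID form
    that serves the HTML5 player."""
    query = parse_qs(urlparse(watch_url).query)
    video_id = query["v"][0]          # the "v" pair carries the video ID
    return "https://www.youtube.com/embed/" + video_id

print(embed_url("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
# https://www.youtube.com/embed/dQw4w9WgXcQ
```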


<a id="Ubuntu"></a>
<h3><a class="anchor" href="#Ubuntu">
I'm using Ubuntu and I can't start Tor Browser.</a></h3>
You'll need to tell Ubuntu that you want the ability to execute shell scripts
from the graphical interface. Open "Files" (Unity's explorer), open
Preferences -> Behavior Tab -> set "Run executable text files when they are
opened" to "Ask every time", then click OK.
<p>You can also start the Tor Browser from the command line by running the
start-tor-browser script from inside the Tor Browser directory.</p>


<a id="SophosOnMac"></a>
<h3><a class="anchor" href="#SophosOnMac">I'm using the Sophos anti-virus
    software on my Mac, and Tor starts but I can't browse anywhere.</a></h3>
You'll need to modify Sophos anti-virus so that Tor can connect to the
internet. Go to Preferences -> Web Protection -> General, and turn off
the protections for "Malicious websites" and "Malicious downloads".
We encourage affected Sophos users to contact Sophos support about
this issue.


<a id="XPCOMError"></a>
<h3><a class="anchor" href="#XPCOMError">When I open the Tor Browser Bundle
I get an error message from the browser: "Cannot load XPCOM".</a></h3>

This <a
href="https://trac.torproject.org/projects/tor/ticket/10789">problem</a> is
specifically caused by the Webroot SecureAnywhere Antivirus software.
Consider switching to a different antivirus program. We encourage affected
Webroot users to contact Webroot support about this issue.


<a id="TBBOtherExtensions"></a>
<h3><a class="anchor" href="#TBBOtherExtensions">Can I install other
Firefox extensions?</a></h3>

The Tor Browser is free software, so there is nothing preventing you from
modifying it any way you like. However, we do not recommend installing any
additional Firefox add-ons with the Tor Browser Bundle. Add-ons can break
your anonymity in a number of ways, including browser fingerprinting and
bypassing proxy settings.
Some people have suggested we include ad-blocking software or
anti-tracking software with the Tor Browser Bundle. Right now, we do not
think that's such a good idea. The Tor Browser Bundle aims to provide
sufficient privacy that additional add-ons to stop ads and trackers are
not necessary. Using add-ons like these may cause some sites to break, which
<a href="https://www.torproject.org/projects/torbrowser/design/#philosophy">
we don't want to do</a>. Additionally, maintaining a list of "bad" sites that
should be black-listed provides another opportunity to uniquely fingerprint
users.

<a id="TBBJavaScriptEnabled"></a>
<a id="TBBCanIBlockJS"></a>
<h3><a class="anchor" href="#TBBJavaScriptEnabled">Why is NoScript
configured to allow JavaScript by default in the Tor Browser Bundle?
Isn't that unsafe?</a></h3>

We configure NoScript to allow JavaScript by default in the Tor
Browser Bundle because many websites will not work with JavaScript
disabled.  Most users would give up on Tor entirely if a website
they want to use requires JavaScript, because they would not know
how to allow a website to use JavaScript (or that enabling
JavaScript might make a website work).

There's a tradeoff here. On the one hand, we should leave
JavaScript enabled by default so websites work the way
users expect. On the other hand, we should disable JavaScript
by default to better protect against browser vulnerabilities (not
just a theoretical concern!). But there's a third issue: websites
can easily determine whether you have allowed JavaScript for them,
and if you disable JavaScript by default but then allow a few websites
to run scripts (the way most people use NoScript), then your choice of
whitelisted websites acts as a sort of cookie that makes you recognizable
(and distinguishable), thus harming your anonymity.

Ultimately, we want the default Tor bundles to use
a combination of firewalls (like the iptables rules
in <a href="https://tails.boum.org/">Tails</a>) and other
measures to make JavaScript not so scary. In
the shorter term, TBB 3.0 will hopefully <a
href="https://trac.torproject.org/projects/tor/ticket/9387">allow users
to choose their JavaScript settings more easily</a> &mdash; but the
partitioning concern will remain.

Until we get there, feel free to leave JavaScript on or off depending
on your security, anonymity, and usability priorities.


<a id="TBBOtherBrowser"></a>
<h3><a class="anchor" href="#TBBOtherBrowser">I want to use
Chrome/IE/Opera/etc with Tor.</a></h3>

In short, using any browser besides the Tor Browser Bundle with Tor is a
really bad idea.

We're working with the Chrome team to <a
href="https://blog.torproject.org/blog/google-chrome-incognito-mode-tor-and-fingerprinting">fix some bugs and missing APIs in Chrome</a> so it
will be possible to write a Torbutton for Chrome. No support for any
other browser is on the horizon.


<a id="GoogleCAPTCHA"></a>
<h3><a class="anchor" href="#GoogleCAPTCHA">Google makes me solve a
CAPTCHA or tells me I have spyware installed.</a></h3>

This is a known and intermittent problem; it does not mean that Google
considers Tor to be spyware.

When you use Tor, you are sending queries through exit relays that are
also shared by thousands of other users. Tor users typically see this
message when many Tor users are querying Google in a short period of time.
Google interprets the high volume of traffic from a single IP address
(the exit relay you happened to pick) as somebody trying to "crawl" their
website, so it slows down traffic from that IP address for a short time.
An alternate explanation is that Google tries to detect certain
kinds of spyware or viruses that send distinctive queries to Google
Search. It notes the IP addresses from which those queries are received
(not realizing that they are Tor exit relays), and tries to warn any
connections coming from those IP addresses that recent queries indicate
an infection.

To our knowledge, Google is not doing anything intentionally specifically
to deter or block Tor use. The error message about an infected machine
should clear up again after a short time.

<hr />

<a id="ForeignLanguages"></a>
<h3><a class="anchor" href="#ForeignLanguages">
Why does Google show up in foreign languages?</a></h3>

 Google uses "geolocation" to determine where in the world you are, so it
 can give you a personalized experience. This includes using the language
 it thinks you prefer, and it also includes giving you different results
 on your queries.
If you really want to see Google in English you can click the link that
provides that. But we consider this a feature with Tor, not a bug &mdash; the
Internet is not flat, and it in fact does look different depending on
where you are. This feature reminds people of this fact.
Note that Google search URLs take name/value pairs as arguments and one
of those names is "hl". If you set "hl" to "en" then Google will return
search results in English regardless of what Google server you have been
sent to.
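A query carrying the "hl" name/value pair can be built like this, sketched with Python's standard library (the search terms are placeholders):

```python
from urllib.parse import urlencode

def google_search_url(query: str, lang: str = "en") -> str:
    """Build a Google search URL pinned to a language via the "hl"
    name/value pair described above."""
    return "https://www.google.com/search?" + urlencode({"q": query, "hl": lang})

print(google_search_url("tor project"))
# https://www.google.com/search?q=tor+project&hl=en
```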
Another method is to simply use your country code for accessing Google.
This can be google.be, google.de, google.us and so on.
<hr />
<a id="GmailWarning"></a>
<h3><a class="anchor" href="#GmailWarning">Gmail warns me that my
account may have been compromised.</a></h3>

Sometimes, after you've used Gmail over Tor, Google presents a
pop-up notification that your account may have been compromised.
The notification window lists a series of IP addresses and locations
throughout the world recently used to access your account.

In general this is a false alarm: Google saw a bunch of logins from
different places (a result of running the service via Tor), and decided
it was a good idea to confirm the account was being accessed by its
rightful owner.

Even though this may be a byproduct of using the service via Tor,
that doesn't mean you can entirely ignore the warning. It is
<i>probably</i> a false positive, but it might not be since it is
possible for someone to hijack your Google cookie.

Cookie hijacking is possible by either physical access to your computer
or by watching your network traffic.  In theory only physical access
should compromise your system because Gmail and similar services
should only send the cookie over an SSL link. In practice, alas, it's
way more complex than that.

And if somebody <i>did</i> steal your Google cookie, they might end
up logging in from unusual places (though of course they also might
not). So the summary is that since you're using Tor, this security
measure that Google uses isn't so useful for you, because it's full of
false positives. You'll have to use other approaches, like seeing if
anything looks weird on the account, or looking at the timestamps for
recent logins and wondering if you actually logged in at those times.


<a id="NeedToUseAProxy"></a>
<h3><a class="anchor" href="#NeedToUseAProxy">My internet connection
requires an HTTP or SOCKS Proxy</a></h3>

You can set the proxy IP address, port, and authentication information in
Tor Browser's Network Settings. If you're using Tor another way, check
out the HTTPProxy and HTTPSProxy config options in the <a
href="<page docs/tor-manual>">man page</a>,
and modify your torrc file accordingly. You will need an HTTP proxy for
doing GET requests to fetch the Tor directory, and you will need an
HTTPS proxy for doing CONNECT requests to get to Tor relays. (It's fine
if they're the same proxy.) Tor also recognizes the torrc options
Socks4Proxy and Socks5Proxy.
Also read up on the HTTPProxyAuthenticator and HTTPSProxyAuthenticator
options if your proxy requires auth. We only support basic auth currently,
but if you need NTLM authentication, you may find <a
href="http://archives.seul.org/or/talk/Jun-2005/msg00223.html">this post
in the archives</a> useful.
If your proxies only allow you to connect to certain ports, look at the
entry on <a href="#FirewallPorts">Firewalled clients</a> for how
to restrict what ports your Tor will try to access.
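A hedged torrc sketch pulling the options above together (the hostnames, ports, and credentials are placeholders):

```text
## torrc sketch for a client behind an authenticating HTTP proxy
HTTPProxy proxy.example.com:8080
HTTPSProxy proxy.example.com:8080
HTTPProxyAuthenticator myuser:mypass
HTTPSProxyAuthenticator myuser:mypass
## And if the proxy only lets you reach a few ports:
ReachableAddresses *:80,*:443
```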


<a id="TBBSocksPort"></a>
<h3><a class="anchor" href="#TBBSocksPort">
I want to run another application through Tor.</a></h3>

If you are trying to use some external application with Tor, step zero
should be to <a href="<page download/download>#warning">reread the set
of warnings</a> for ways you can screw up. Step one should be to try
to use a SOCKS proxy rather than an HTTP proxy.
Typically Tor listens for SOCKS connections on port 9050. Tor Browser listens
on port 9150.

If your application doesn't support SOCKS proxies, feel free to install
an HTTP-to-SOCKS shim such as Privoxy.
However, please realize that this approach is not recommended for novice
users. Privoxy documents an example configuration of Tor and Privoxy.

If you're unable to use the application's native proxy settings, all hope is
not lost. See <a href="#CantSetProxy">below</a>.
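When pointing an external application at Tor, a quick first check is whether anything is actually listening on the SOCKS port mentioned above (9050 for a system Tor, 9150 for Tor Browser). A small sketch of such a probe:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is listening on host:port,
    e.g. Tor's SOCKS port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 9050 is the default SocksPort for a system tor; Tor Browser uses 9150.
print(port_open("127.0.0.1", 9050))
```

Note this only shows that a listener exists, not that it speaks SOCKS; but it quickly separates "Tor isn't running" from "my app is misconfigured".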


<a id="CantSetProxy"></a>
<h3><a class="anchor" href="#CantSetProxy">What should I do if I can't
set a proxy with my application?</a></h3>

On Unix, we recommend you give <a
href="https://github.com/dgoulet/torsocks/">torsocks</a> a try.
Alternative proxifying tools like <a
href="http://www.dest-unreach.org/socat/">socat</a> and <a
href="http://proxychains.sourceforge.net/">proxychains</a> are also
worth a try.
The Windows way to force applications through Tor is less clear. <a
href="http://freecap.ru/eng/">Some</a> <a
href="http://www.freehaven.net/~aphex/torcap/">tools</a> have been
written, but we'd also like to see further testing done here.


<a id="TBB3.x"></a>
<h2><a class="anchor">Tor Browser Bundle (3.x series):</a></h2>
    <a id="WhereDidVidaliaGo"></a>
    <h3><a class="anchor" href="#WhereDidVidaliaGo">Where did the world map
    (Vidalia) go?</a></h3>

    <p>Vidalia has been replaced with Tor Launcher, which is a Firefox
    extension that provides similar functionality. Unfortunately, circuit
    status reporting is still missing, but we are working
    on providing it.</p>

    <p>In the meantime, we are providing standalone Vidalia packages for
    people who still want the map. Windows and Linux versions are
    available.</p>

    <p>To use these packages, extract them, then run the startup script.
    On Windows, this is "Start Vidalia.exe". On Linux, it is start-vidalia.
    They can be placed in a different directory from TBB (and likely should
    be). </p>

    <p>This Vidalia package will only run properly if Tor Browser has already
    been launched. You cannot start it before launching Tor Browser. </p>

    <p>MacOS is still under development, but in the mean time you can modify
    your TBB 2.x to be a standalone Vidalia (and then use it after starting
    TBB 3.x) by opening your TBB 2.x vidalia.conf file in an editor and
    replacing its contents with just these lines:</p>




    <a id="DisableJS"></a>
    <h3><a class="anchor" href="#DisableJS">How do I disable JavaScript?</a></h3>

    <p>Alas, Mozilla decided to get rid of the config checkbox for JavaScript
    from earlier Firefox versions. And since TBB 3.5 is based on Firefox 24
    (FF17 is unmaintained), that means TBB 3.5 doesn't have the config
    checkbox anymore either, which is unfortunate.</p>

    <p>The simplest way to disable JavaScript in TBB 3.5 is to click on the
    Noscript "S" (between the green onion and the address bar), and select
    "Forbid scripts globally". Note that vanilla NoScript actually whitelists
    several domains even when you try to disable scripts globally, whereas
    Tor Browser's NoScript configuration disables all of them. </p>

    <p>The more klunky way to disable JavaScript is to go to about:config,
    find javascript.enabled, and set it to false.</p>

    <p>There is also a very simple addon available at addons.mozilla.org
    called QuickJS, which provides a toolbar toggle for the javascript.enabled
    about:config control. There are no configuration options for the addon,
    it just switches the javascript.enabled entry between true and false and
    provides a button for it. </p>

    <p>If you want to be extra safe, use both the about:config setting and
    NoScript. </p>

    <p>As for whether you should disable it or leave it enabled, that's <a
    href="#TBBJavaScriptEnabled">a tradeoff we leave to you</a>.</p>


    <a id="VerifyDownload"></a>
    <h3><a class="anchor" href="#VerifyDownload">How do I verify the download?</a></h3>

    <p>Instructions are on the <a
    href="<page docs/verifying-signatures>#BuildVerification">verifying
    signatures</a> page.</p>


    <a id="NewIdentityClosingTabs"></a>
    <h3><a class="anchor" href="#NewIdentityClosingTabs">Why does "New
    Identity" close all my open tabs?</a></h3>

    That's actually a feature, since it's discarding your application-level
    browser data too. But it sure is a surprising feature, for people who
    are used to Vidalia's "new identity" behavior.

    We're working on ways to make the behavior less surprising, e.g. a popup
    warning or auto-restoring tabs. See ticket <a
    href="https://trac.torproject.org/projects/tor/ticket/9906">#9906</a>
    to follow progress there.

    In the mean time, you can get Vidalia's old "newnym" functionality by
    attaching a Vidalia to your TBB 3.x. See the instructions in the
    <a href="#WhereDidVidaliaGo">Vidalia entry above</a>.


    <a id="ConfigureRelayOrBridge"></a>
    <h3><a class="anchor" href="#ConfigureRelayOrBridge">How do I configure Tor as a relay or bridge?</a></h3>

    You've got three options.

    First (best option), if you're on Linux, you can install the system
    Tor package (e.g. apt-get install tor) and then set it up to be a relay
    (<a href="https://www.torproject.org/docs/tor-relay-debian">instructions</a>).
    You can then use TBB independent of that.

    Second (simpler option), if you're on Windows, you can fetch the separate
    "Vidalia relay bundle" or "Vidalia bridge bundle" from the download page
    and then use that (again you can use TBB independent of it).

    Third (complex option), you can either hook your Vidalia up to TBB (as
    described in the FAQ above) or edit your torrc file (in Data/Tor/torrc)
    directly to add the following lines:
<pre>
ORPort 443
Exitpolicy reject *:*
BridgeRelay 1  # only add this line if you want to be a bridge
</pre>
    If you've installed <a
    href="<page projects/obfsproxy-debian-instructions>#instructions">Obfsproxy</a>,
    you'll need to add one more line:
<pre>
ServerTransportPlugin obfs3 exec /usr/bin/obfsproxy managed
</pre>
    This third option is pretty klunky right now; see e.g. <a
    href="https://trac.torproject.org/projects/tor/ticket/10449">this bug</a>;
    but we're hoping it will become an easy option in the future.


    <a id="Timestamps"></a>
    <h3><a class="anchor" href="#Timestamps">Why are the file timestamps
    from 2000?</a></h3>

    <p>One of the huge new features in TBB 3.x is the "deterministic build"
    process, which allows many people to build the Tor Browser Bundle and
    verify that they all make exactly the same package. See Mike's first
    blog post for the motivation, and his second
    blog post for the technical details of how we do it.</p>

    <p>Part of creating identical builds is having everybody use the same
    timestamp. Mike picked the beginning of 2000 for that time. The reason
    you might see 7pm in 1999 is because of time zones. </p>
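The time-zone effect described above is easy to reproduce: midnight UTC on 2000-01-01 rendered in a UTC-5 zone (for example US Eastern in winter) reads as 7pm on New Year's Eve 1999. A small sketch:

```python
from datetime import datetime, timezone, timedelta

# The deterministic-build timestamp: the start of 2000, UTC.
stamp = datetime(2000, 1, 1, tzinfo=timezone.utc)

# The same instant viewed from a UTC-5 zone.
local = stamp.astimezone(timezone(timedelta(hours=-5)))
print(local.strftime("%Y-%m-%d %H:%M"))   # 1999-12-31 19:00
```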


    <a id="TBBSourceCode"></a>
    <h3><a class="anchor" href="#TBBSourceCode">Where is the source code for the bundle? How do I verify a build?</a></h3>

    Start with <a href="https://gitweb.torproject.org/builders/tor-browser-bundle.git">https://gitweb.torproject.org/builders/tor-browser-bundle.git</a> and <a href="https://gitweb.torproject.org/builders/tor-browser-bundle.git/blob/HEAD:/gitian/README.build">https://gitweb.torproject.org/builders/tor-browser-bundle.git/blob/HEAD:/gitian/README.build</a>.


<a id="AdvancedTorUsage"></a>
<h2><a class="anchor">Advanced Tor usage:</a></h2>

<a id="torrc"></a>
<h3><a class="anchor" href="#torrc">I'm supposed to "edit my torrc".
What does that mean?</a></h3>

Tor installs a text file called torrc that contains configuration
instructions for how your Tor program should behave. The default
configuration should work fine for most Tor users.
If you installed Tor Browser Bundle, look for
<code>Data/Tor/torrc</code> inside your Tor Browser Bundle directory.
On OS X, you must right-click or command-click on the browser bundle icon,
and select "Show Package Contents" before the Tor Browser directories become
visible. Tor puts the torrc file in <code>/usr/local/etc/tor/torrc</code> if you
compiled Tor from source, and <code>/etc/tor/torrc</code> or
<code>/etc/torrc</code> if you installed a pre-built package.

Once you've changed your torrc, you will need to restart Tor for the
changes to take effect. (For advanced users, note that
you actually only need to send Tor a HUP signal, not restart it.)
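The HUP trick above can be sketched in shell. With a system Tor you would run something like <code>kill -HUP "$(pidof tor)"</code> (pidof and the daemon setup are assumptions about your system); the toy daemon below just demonstrates the HUP-means-reload pattern without touching Tor:

```shell
tmp=$(mktemp)
( trap "echo reloaded > $tmp" HUP     # the "reload" action
  echo running > "$tmp"
  while :; do sleep 0.2; done ) &
pid=$!
sleep 1                               # let the subshell install its trap
kill -HUP "$pid"                      # ask it to "reload", not exit
sleep 1                               # give the handler time to fire
result=$(cat "$tmp")
echo "$result"                        # prints "reloaded"
kill "$pid"
rm -f "$tmp"
```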

For other configuration options you can use, see the <a href="<page
docs/tor-manual>">Tor manual page</a>. Have a look at
the sample torrc file for hints on common configurations. Remember, all
lines beginning with # in torrc are treated as comments and have no effect
on Tor's configuration.


<a id="Logs"></a>
<h3><a class="anchor" href="#Logs">How do I set up logging, or see Tor's
logs?</a></h3>

If you installed a Tor bundle that includes Vidalia, then Vidalia has a
window called "Message Log" that will show you Tor's log messages. Click
on "Advanced" to see more details. You can click on "Settings" to change
your log verbosity or save the messages to a file. You're all set.

If you're not using Vidalia, you'll have to go find the log files by
hand. Here are some likely places for your logs to be:

<li>On OS X, Debian, Red Hat, etc, the logs are in /var/log/tor/</li>
<li>On Windows, there are no default log files currently. If you enable
logs in your torrc file, they default to <code>\username\Application
Data\tor\log\</code> or <code>\Application Data\tor\log\</code></li>
<li>If you compiled Tor from source, by default your Tor logs to stdout
at log-level notice. If you enable logs in your torrc file, they
default to <code>/usr/local/var/log/tor/</code>.</li>

To change your logging setup by hand, <a href="#torrc">edit your
torrc</a> and find the section (near the top of the file) which contains the
following lines:
<pre>
## Logs go to stdout at level "notice" unless redirected by something
## else, like one of the below lines.
</pre>

For example, if you want Tor to send complete debug, info, notice, warn,
and err level messages to a file, append the following line to the end
of the section:

<pre>Log debug file c:/program files/tor/debug.log</pre>

Replace <code>c:/program files/tor/debug.log</code> with a directory
and filename for your Tor log.


<a id="LogLevel"></a>
<h3><a class="anchor" href="#LogLevel">What log level should I use?</a></h3>

There are five log levels (also called "log severities") you might see in
Tor's logs:

    <li>"err": something bad just happened, and we can't recover. Tor will
    exit.</li>
    <li>"warn": something bad happened, but we're still running. The bad
    thing might be a bug in the code, some other Tor process doing something
    unexpected, etc. The operator should examine the message and try to
    correct the problem.</li>
    <li>"notice": something the operator will want to know about.</li>
    <li>"info": something happened (maybe bad, maybe ok), but there's
    nothing you need to (or can) do about it.</li>
    <li>"debug": for everything louder than info. It is quite loud indeed.</li>

Alas, some of the warn messages are hard for ordinary users to correct &mdash; the
developers are slowly making progress at making Tor automatically react
correctly for each situation.

We recommend running at the default, which is "notice". You will hear about
important things, and you won't hear about unimportant things.

Tor relays in particular should avoid logging at info or debug in normal
operation, since they might end up recording sensitive information in
their logs.


<a id="DoesntWork"></a>
<h3><a class="anchor" href="#DoesntWork">I installed Tor but it's not
working.</a></h3>

Once you've got the Tor bundle up and running, the first question to
ask is whether your Tor client is able to establish a circuit.

<p>If Tor can establish a circuit, the onion icon in
Vidalia will turn green (and if you're running Tor Browser Bundle, it will
automatically launch a browser for you). You can also check in the
Control Panel to make sure it says "Connected to the Tor
network!" under Status. For those not using Vidalia, check your <a
href="#Logs">Tor logs</a> for
a line saying that Tor "has successfully opened a circuit. Looks like
client functionality is working."

If Tor can't establish a circuit, here are some hints:

<li>Are you sure Tor is running? If you're using Vidalia, you may have
to click on the onion and select "Start" to launch Tor.</li>
<li>Check your system clock. If it's more than a few hours off, Tor will
refuse to build circuits. For Microsoft Windows users, synchronize your
clock under the clock -&gt; Internet time tab. In addition, correct the
day and date under the 'Date &amp; Time' Tab. Also make sure your time
zone is correct.</li>
<li>Is your Internet connection <a href="#FirewallPorts">firewalled
by port</a>, or do you normally need to use a proxy?</li>
<li>Are you running programs like Norton Internet Security or SELinux that
block certain connections, even though you don't realize they do? They
could be preventing Tor from making network connections.</li>
<li>Are you in China, or behind a restrictive corporate network firewall
that blocks the public Tor relays? If so, you should learn about <a
href="<page docs/bridges>">Tor bridges</a>.</li>
<li>Check your <a href="#Logs">Tor logs</a>. Do they give you any hints
about what's going wrong?</li>

<hr />

<a id="TorCrash"></a>
<h3><a class="anchor" href="#TorCrash">My Tor keeps crashing.</a></h3>
 <p>We want to hear from you! There are supposed to be zero crash bugs in Tor.
 This FAQ entry describes the best way for you to be helpful to us. But even
 if you can't work out all the details, we still want to hear about it, so
 we can help you track it down.</p>

 <p>First, make sure you're using the latest version of Tor (either the latest
 stable or the latest development version).</p>

 <p>Second, make sure your version of libevent is new enough. We recommend at
 least libevent 1.3a.</p>

 <p>Third, see if there's already an entry for your bug in the <a
 href="https://bugs.torproject.org/">Tor bugtracker</a>. If so,
 check if there are any new details that you can add.</p>

 <p>Fourth, is the crash repeatable? Can you cause the crash? Can
 you isolate some of the circumstances or config options that
 make it happen? How quickly or often does the bug show up?
 Can you check if it happens with other versions of Tor, for
 example the latest stable release?</p>

 <p>Fifth, what sort of crash do you get?</p>

 <p>Does your Tor log include an "assert failure"? If so, please
 tell us that line, since it helps us figure out what's going on.
 Tell us the previous couple of log messages as well, especially
 if they seem important.</p>

 <p>If it says "Segmentation fault - core dumped" then you need to
 do a bit more to track it down. Look for a file like "core" or
 "tor.core" or "core.12345" in your current directory, or in your
 Data Directory. If it's there, run "gdb tor core" and then "bt",
 and include the output. If you can't find a core file, run "ulimit -c
 unlimited", restart Tor, and try to make it crash again. (This core
 trick will only work on Unix &mdash; alas, tracking down bugs on Windows
 is harder. If you're on Windows, can you get somebody to duplicate
 your bug on Unix?)</p>

 <p>If Tor simply vanishes mysteriously, it is probably a segmentation
 fault, but you're running Tor in the background (as a daemon) so you
 won't notice. Go look at the end of your log file, and look for a
 core file as above. If you don't find any good hints, you should
 consider running Tor in the foreground (from a shell) so you can
 see how it dies. Warning: if you switch to running Tor in the foreground,
 you might start using a different torrc file, with a different default
 Data Directory; see the <a href="#UpgradeOrMove">relay-upgrade FAQ entry</a>
 for details.</p>

 <p>If it's still vanishing mysteriously, perhaps something else is killing it?
 Do you have resource limits (ulimits) configured that kill off processes
 sometimes? (This is especially common on OpenBSD.) On Linux, try running
 "dmesg" to see if the out-of-memory killer removed your process. (Tor will
 exit cleanly if it notices that it's run out of memory, but in some cases
 it might not have time to notice.) In very rare circumstances, hardware
 problems could also be the culprit.</p>

 <p>Sixth, if the above ideas don't point out the bug, consider increasing your
 log level to "loglevel debug". You can look at the log-configuration FAQ
 entry for instructions on what to put in your torrc file. If it usually
 takes a long time for the crash to show up, you will want to reserve a whole
 lot of disk space for the debug log. Alternatively, you could just send
 debug-level logs to the screen (it's called "stdout" in the torrc), and then
 when it crashes you'll see the last couple of log lines it had printed.
 (Note that running with verbose logging like this will slow Tor down
 considerably, and note also that it's generally not a good idea security-wise
 to keep logs like this sitting around.)</p>
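As a concrete sketch, the torrc lines for debug logging might look like the following (the log file path is only an example; pick one that suits your system):

```
## write debug-level (and higher) messages to a file
Log debug file /var/log/tor/debug.log
## or, to print them to the terminal instead:
# Log debug stdout
```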

<hr />

    <a id="ChooseEntryExit"></a>
    <h3><a class="anchor" href="#ChooseEntryExit">Can I control which
nodes (or country) are used for entry/exit?</a></h3>

    Yes. You can set preferred entry and exit nodes as well as
    inform Tor which nodes you do not want to use.
    The following options can be added to your config file <a
    href="#torrc">"torrc"</a> or specified on the command line:
    <dl>
      <dt><tt>EntryNodes $fingerprint,$fingerprint,...</tt></dt>
        <dd>A list of preferred nodes to use for the first hop in the
circuit, if possible.</dd>
      <dt><tt>ExitNodes $fingerprint,$fingerprint,...</tt></dt>
        <dd>A list of preferred nodes to use for the last hop in the
circuit, if possible.</dd>
      <dt><tt>ExcludeNodes $fingerprint,$fingerprint,...</tt></dt>
        <dd>A list of nodes to never use when building a circuit.</dd>
      <dt><tt>ExcludeExitNodes $fingerprint,$fingerprint,...</tt></dt>
        <dd>A list of nodes to never use when picking an exit.
            Nodes listed in <tt>ExcludeNodes</tt> are automatically in
this list.</dd>
    </dl>
    <em>We recommend you do not use these</em>
    &mdash; they are intended for testing and may disappear in future
    versions.
    You get the best security that Tor can provide when you leave the
    route selection to Tor; overriding the entry / exit nodes can mess
    up your anonymity in ways we don't understand.
    Note also that not every circuit is used to deliver traffic outside of
    the Tor network. It is normal to see non-exit circuits (such as those
    used to connect to hidden services, those that do directory fetches,
    those used for relay reachability self-tests, and so on) that end at
    a non-exit node. To keep a node from being used entirely, see
    <tt>ExcludeNodes</tt> and <tt>StrictNodes</tt> in the
    <a href="<page docs/tor-manual>">manual</a>.
    Instead of <tt>$fingerprint</tt> you can also specify a 2-letter
    ISO 3166 country code in curly braces (for example <tt>{de}</tt>),
    an IP address pattern, or a node
    nickname. Make sure there are no spaces between the commas and the
    list items.
    If you want to access a service directly through Tor's SOCKS port
    (e.g. using ssh via connect.c), another option is to set up an
    internal mapping in your configuration file;
    see the manual page for details.
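As an illustrative sketch of these options in a torrc (the fingerprint and country codes below are placeholders, not real relays; remember that overriding route selection can hurt your anonymity):

```
## prefer entry nodes in Germany, exit through one specific relay
## (placeholder fingerprint), and avoid two placeholder countries
EntryNodes {de}
ExitNodes $0123456789ABCDEF0123456789ABCDEF01234567
ExcludeNodes {aa},{zz}
```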


<a id="FirewallPorts"></a>
<h3><a class="anchor" href="#FirewallPorts">My firewall only allows a
few outgoing ports.</a></h3>

If your firewall works by blocking ports, then you can tell Tor to only
use the ports that your firewall permits by adding "FascistFirewall 1" to
your <a href="<page docs/faq>#torrc">torrc
configuration file</a>, or by clicking "My firewall only lets me connect
to certain ports" in Vidalia's Network Settings window.

By default, when you set this, Tor assumes that your firewall allows only
port 80 and port 443 (HTTP and HTTPS respectively). You can select a
different set of ports with the FirewallPorts torrc option.

If you want to be more fine-grained with your controls, you can also
use the ReachableAddresses config options, e.g.:

<pre>
  ReachableDirAddresses *:80
  ReachableORAddresses *:443
</pre>
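Putting the pieces together, a client stuck behind such a firewall might use a torrc fragment like this (a sketch; list whatever ports your firewall actually permits):

```
FascistFirewall 1
## only needed if the allowed ports differ from the assumed 80,443
FirewallPorts 80,443,8080
```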


    <a id="DefaultExitPorts"></a>
    <h3><a class="anchor" href="#DefaultExitPorts">Is there a list of default exit
ports?</a></h3>
The default open ports are listed below, but keep in mind that any port or
ports can be opened by the relay operator by configuring it in torrc or
modifying the source code. The defaults, according to src/or/policies.c
in the source code release, are:
<pre>
  reject *:25
  reject *:119
  reject *:135-139
  reject *:445
  reject *:563
  reject *:1214
  reject *:4661-4666
  reject *:6346-6429
  reject *:6699
  reject *:6881-6999
  accept *:*
</pre>
    A relay will also always block access to its own IP address, as well as
    to local network IP addresses, by default. This prevents
    Tor users from accidentally accessing any of the exit operator's local
    services.
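The policy above is evaluated top-down, with the first matching rule winning. Here is a small Python sketch (illustrative only, not Tor's actual implementation) of checking a destination port against these defaults:

```python
# First-matching-rule-wins evaluation of the default exit policy quoted
# above.  Each tuple is (action, low_port, high_port).
DEFAULT_POLICY = [
    ("reject", 25, 25),
    ("reject", 119, 119),
    ("reject", 135, 139),
    ("reject", 445, 445),
    ("reject", 563, 563),
    ("reject", 1214, 1214),
    ("reject", 4661, 4666),
    ("reject", 6346, 6429),
    ("reject", 6699, 6699),
    ("reject", 6881, 6999),
    ("accept", 1, 65535),          # the final accept *:*
]

def exit_allowed(port: int) -> bool:
    """Return True if the default policy would allow exiting to `port`."""
    for action, lo, hi in DEFAULT_POLICY:
        if lo <= port <= hi:
            return action == "accept"
    return False

assert not exit_allowed(25)        # SMTP is rejected
assert exit_allowed(443)           # HTTPS falls through to accept *:*
```

Mail and common file-sharing ports hit an early reject rule; everything else falls through to the final accept.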


    <a id="WarningsAboutSOCKSandDNSInformationLeaks"></a>
    <h3><a class="anchor" href="#WarningsAboutSOCKSandDNSInformationLeaks">I
    keep seeing these warnings about SOCKS and DNS information leaks.
    Should I worry?</a></h3>
    The warning is:
    <pre>Your application (using socks5 on port %d) is giving Tor only an IP
    address. Applications that do DNS resolves themselves may leak
    information. Consider using Socks4A (e.g. via Polipo or socat) instead.</pre>
    If you are running Tor to get anonymity, and you are worried about an
    attacker who is even slightly clever, then yes, you should worry. Here's why.
    <b>The Problem.</b> When your applications connect to servers on the
    Internet, they need to resolve hostnames that you can read (like
    www.torproject.org) into IP addresses that the Internet can use. To do
    this, your application sends a request to a DNS
    server, telling it the hostname it wants to resolve. The DNS server
    replies by telling your application the IP address.
    Clearly, this is a bad idea if you plan to connect to the remote host
    anonymously: when your application sends the request to the DNS server,
    the DNS server (and anybody else who might be watching) can see what
    hostname you are asking for. Even if your application then uses Tor to
    connect to the IP anonymously, it will be pretty obvious that the user
    making the anonymous connection is probably the same person who made
    the DNS request.
    <b>Where SOCKS comes in.</b> Your application uses the SOCKS protocol
    to connect to your local Tor client. There are 3 versions of SOCKS you
    are likely to run into: SOCKS 4 (which only uses IP addresses), SOCKS 5
    (which usually uses IP addresses in practice), and SOCKS 4a (which uses
    hostnames, so the resolution can happen at the far end of the Tor circuit).
    When your application uses SOCKS 4 or SOCKS 5 to give Tor an IP address,
    Tor guesses that it 'probably' got the IP address non-anonymously from a
    DNS server. That's why it gives you a warning message: you probably aren't
    as anonymous as you think.
    <b>So what can I do?</b> We describe a few solutions below.
    <ul>
    <li>If your application speaks SOCKS 4a, use it. </li>
    <li>If you only need one or two hosts, or you are good at programming,
    you may be able to get a socks-based port-forwarder like socat to work
    for you; see the
    Torify HOWTO for examples. </li>
    <li>Tor ships with a program called tor-resolve that can use the Tor
    network to look up hostnames remotely; if you resolve hostnames to IPs
    with tor-resolve, then pass the IPs to your applications, you'll be fine.
    (Tor will still give the warning, but now you know what it means.) </li>
    </ul>
    <p>If you think that you applied one of the solutions properly but still
    experience DNS leaks, please verify there is no third-party application
    using DNS independently of Tor. Please see <a
    href="#AmITotallyAnonymous">the FAQ entry on whether you're really
    absolutely anonymous using Tor</a> for some examples.</p>


    <a id="SocksAndDNS"></a>
    <h3><a class="anchor" href="#SocksAndDNS">How do I check if my application that uses
    SOCKS is leaking DNS requests?</a></h3>

    There are two steps you need to take here. The first is to make sure
    that it's using the correct variant of the SOCKS protocol, and the
    second is to make sure that there aren't other leaks.

    Step one: add "TestSocks 1" to your torrc file, and then watch your
    logs as you use your application. Tor will then log, for each SOCKS
    connection, whether it was using a 'good' variant or a 'bad' one.
    (If you want to automatically disable all 'bad' variants, set
    "SafeSocks 1" in your <a href="#torrc">torrc</a> file.)
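For reference, the two torrc lines mentioned above look like this:

```
## log whether each SOCKS connection used a 'good' or 'bad' variant
TestSocks 1
## optionally refuse the 'bad' (IP-only) variants outright
SafeSocks 1
```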

    Step two: even if your application is using the correct variant of
    the SOCKS protocol, there is still a risk that it could be leaking
    DNS queries. This problem happens in Firefox extensions that resolve
    the destination hostname themselves, for example to show you its IP
    address, what country it's in, etc. These applications may use a safe
    SOCKS variant when actually making connections, but they still do DNS
    resolves locally. If you suspect your application might behave like
    this, you should use a network sniffer like <a
    href="https://www.wireshark.org/">Wireshark</a> and look for
    suspicious outbound DNS requests. I'm afraid the details of how to look
    for these problems are beyond the scope of a FAQ entry though -- find
    a friend to help if you have problems.


    <a id="RunningATorRelay"></a>
    <h2><a class="anchor">Running a Tor relay:</a></h2>

    <a id="HowDoIDecide"></a>
    <h3><a class="anchor" href="#HowDoIDecide">How do I decide if I should
    run a relay?</a></h3>
    We're looking for people with reasonably reliable Internet connections
    that have at least 100 kilobytes/second each way. If that's you, please
    consider <a href="https://www.torproject.org/docs/tor-relay-debian">helping
    out</a>!


    <a id="WhyIsntMyRelayBeingUsedMore"></a>
    <h3><a class="anchor" href="#WhyIsntMyRelayBeingUsedMore">Why isn't my
    relay being used more?</a></h3>
    If your relay is relatively new then give it time. Tor decides which
    relays it uses heuristically, based on reports from Bandwidth
    Authorities. These authorities take measurements of your relay's
    capacity and, over time, direct more traffic there until it reaches
    an optimal load. The lifecycle of a new relay is explained in more
    depth in <a href="https://blog.torproject.org/blog/lifecycle-of-a-new-relay">
    this blog post</a>.
    If you've been running a relay for a while and are still having issues,
    then try asking on the tor-relays list.


    <a id="IDontHaveAStaticIP"></a>
    <h3><a class="anchor" href="#IDontHaveAStaticIP">I don't have a static
    IP address.</a></h3>
    Tor can handle relays with dynamic IP addresses just fine. Just leave
    the "Address" line in your torrc blank, and Tor will guess.


    <a id="PortscannedMore"></a>
    <h3><a class="anchor" href="#PortscannedMore">Why do I get portscanned
    more often when I run a Tor relay?</a></h3>

    If you allow exit connections, some services that people connect to
    from your relay will connect back to collect more information about you.
    For example, some IRC servers connect back to your identd port to record
    which user made the connection. (This doesn't really work for them,
    because Tor doesn't know this information, but they try anyway.) Also,
    users exiting from you might attract the attention of other users on the
    IRC server, website, etc. who want to know more about the host they're
    relaying through.
    Another reason is that groups who scan for open proxies on the Internet
    have learned that sometimes Tor relays expose their SOCKS port to the
    world. We recommend that you bind your SocksPort to local networks only.
    In any case, you need to keep up to date with your security. See this
    document on operational security for Tor relays for more suggestions.


    <a id="HighCapacityConnection"></a>
    <h3><a class="anchor" href="#HighCapacityConnection">How can I get Tor to fully
    make use of my high capacity connection?</a></h3>

    See <a href="http://archives.seul.org/or/relays/Aug-2010/msg00034.html">this
    tor-relays thread</a>.


    <a id="RelayFlexible"></a>
    <h3><a class="anchor" href="#RelayFlexible">How stable does my relay
need to be?</a></h3>

    We aim to make setting up a Tor relay easy and convenient:

    <ul>
    <li>Tor has built-in support for <a href="#BandwidthShaping">
    rate limiting</a>. Further, if you have a fast
    link but want to limit the number of bytes per
    day (or week or month) that you donate, check out the <a
    href="#LimitTotalBandwidth">hibernation feature</a>.</li>
    <li>Each Tor relay has an <a href="#ExitPolicies">exit policy</a> that
    specifies what sort of outbound connections are allowed or refused from
    that relay. If you are uncomfortable allowing people to exit from your
    relay, you can set it up to only allow connections to other Tor
    relays.</li>
    <li>It's fine if the relay goes offline sometimes. The directories
    notice this quickly and stop advertising the relay. Just try to make
    sure it's not too often, since connections using the relay when it
    disconnects will break.</li>
    <li>We can handle relays with dynamic IPs just fine &mdash; simply
    leave the Address config option blank, and Tor will try to guess.</li>
    <li>If your relay is behind a NAT and it doesn't know its public
    IP (e.g. it has an IP of 192.168.x.y), you'll need to set up port
    forwarding. Forwarding TCP connections is system dependent, but
    <a href="#BehindANAT">this FAQ entry</a>
    offers some examples on how to do this.</li>
    <li>Your relay will passively estimate and advertise its recent
    bandwidth capacity, so high-bandwidth relays will attract more users than
    low-bandwidth ones. Therefore, having low-bandwidth relays is useful
    too.</li>
    </ul>

    <a id="BandwidthShaping"></a>
    <h3><a class="anchor" href="#BandwidthShaping">What bandwidth shaping
    options are available to Tor relays?</a></h3>

    There are two options you can add to your torrc file:
    BandwidthRate is the maximum long-term bandwidth allowed (bytes per
    second). For example, you might want to choose "BandwidthRate 10 MBytes"
    for 10 megabytes per second (a fast connection), or "BandwidthRate 500
    KBytes" for 500 kilobytes per second (a pretty good cable connection).
    The minimum BandwidthRate setting is 20 kilobytes per second.
    BandwidthBurst is a pool of bytes used to fulfill requests during
    short periods of traffic above BandwidthRate, while still keeping the
    long-term average at BandwidthRate. A low Rate but a high
    Burst enforces a long-term average while still allowing more traffic
    during peak times if the average hasn't been reached lately. For example,
    if you choose "BandwidthBurst 500 KBytes" and also use that for your
    BandwidthRate, then you will never use more than 500 kilobytes per second;
    but if you choose a higher BandwidthBurst (like 5 MBytes), it will allow
    more bytes through until the pool is empty.
    If you have an asymmetric connection (upload less than download) such
    as a cable modem, you should set BandwidthRate to less than your smaller
    bandwidth (Usually that's the upload bandwidth). (Otherwise, you could
    drop many packets during periods of maximum bandwidth usage -- you may
    need to experiment with which values make your connection comfortable.)
    Then set BandwidthBurst to the same as BandwidthRate.
    Linux-based Tor nodes have another option at their disposal: they can
    prioritize Tor traffic below other traffic on their machine, so that
    their own personal traffic is not impacted by Tor load. A script
    to do this can be found in the Tor source distribution's contrib directory.
    Additionally, there are hibernation options where you can tell Tor to
    only serve a certain amount of bandwidth per time period (such as 100
    GB per month). These are covered in the <a
    href="#LimitTotalBandwidth">hibernation entry</a> below.
    Note that BandwidthRate and BandwidthBurst are in <b>Bytes</b>, not Bits.
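The Rate/Burst interaction can be pictured as a token bucket. The Python sketch below is a toy model for intuition only (Tor's real scheduler is more involved); units are bytes, in one-second steps:

```python
def serve(demand, rate, burst):
    """Toy token bucket: refills at `rate` bytes/s, capped at `burst` bytes.

    demand: bytes the relay is asked to push in each one-second step.
    Returns the bytes actually sent per step.  A full bucket absorbs a
    short spike above `rate`, but the long-term average stays at `rate`.
    """
    tokens = burst                            # bucket starts full
    sent = []
    for want in demand:
        tokens = min(burst, tokens + rate)    # refill, capped at burst
        out = min(want, tokens)               # send what the bucket allows
        tokens -= out
        sent.append(out)
    return sent

# Rate 500 KBytes with a 5 MByte burst: a sustained 3 MB/s demand is
# served in full only while the pool lasts, then throttles to the rate.
print(serve([3_000_000] * 3, rate=500_000, burst=5_000_000))
# -> [3000000, 2500000, 500000]
```

With BandwidthBurst set equal to BandwidthRate, the same model never exceeds the rate in any step, matching the 500 KBytes example above.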


    <a id="LimitTotalBandwidth"></a>
    <h3><a class="anchor" href="#LimitTotalBandwidth">How can I limit the
    total amount of bandwidth used by my Tor relay?</a></h3>
    The accounting options in the torrc file allow you to specify the maximum
    number of bytes your relay uses in a given time period.
    <pre>AccountingStart day|week|month [day] HH:MM</pre>
    This specifies when the accounting should reset. For instance, to set up
    a total amount of bytes served for a week (that resets every Wednesday
    at 10:00am), you would use:
    <pre>AccountingStart week 3 10:00
    AccountingMax 500 GBytes</pre>
    This specifies the maximum amount of data your relay will send during an
    accounting period, and the maximum amount of data your relay will receive
    during an accounting period. When the accounting period resets (from
    AccountingStart), the counters for AccountingMax are reset to 0.
    Example: Let's say you want to allow 50 GB of traffic every day in each
    direction and the accounting should reset at noon each day:
    <pre>AccountingStart day 12:00
    AccountingMax 50 GBytes</pre>
    Note that your relay won't wake up exactly at the beginning of each
    accounting period. It will keep track of how quickly it used its
    quota in the last period, and choose a random point in the new interval
    to wake up. This way we avoid having hundreds of relays working at the
    beginning of each month but none still up by the end.
    If you have only a small amount of bandwidth to donate compared to your
    connection speed, we recommend you use daily accounting, so you don't
    end up using your entire monthly quota in the first day. Just divide
    your monthly amount by 30. You might also consider rate limiting to
    spread your usefulness over more of the day: if you want to offer X GB
    in each direction, you could set your RelayBandwidthRate to 20*X KBytes.
    For example,
    if you have 50 GB to offer each way, you might set your RelayBandwidthRate to
    1000 KBytes: this way your relay will always be useful for at least half of
    each day.
    <pre>AccountingStart day 0:00
    AccountingMax 50 GBytes
    RelayBandwidthRate 1000 KBytes
    RelayBandwidthBurst 5000 KBytes # allow higher bursts but maintain average</pre>
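A quick arithmetic check of that 20*X rule of thumb (a sketch using decimal units, 1 GB = 10^6 KB, for simplicity):

```python
def hours_until_quota(quota_gb: float, rate_kbytes_per_sec: float) -> float:
    """How long a daily AccountingMax quota lasts when traffic flows
    continuously at a given RelayBandwidthRate (1 GB = 10**6 KB)."""
    seconds = quota_gb * 10**6 / rate_kbytes_per_sec
    return seconds / 3600

# Offering X GB/day at 20*X KBytes/s always lasts ~13.9 hours, i.e.
# "useful for at least half of each day", regardless of X:
print(round(hours_until_quota(50, 20 * 50), 1))   # 50 GB at 1000 KB/s
# -> 13.9
```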


    <a id="RelayWritesMoreThanItReads"></a>
    <h3><a class="anchor" href="#RelayWritesMoreThanItReads">Why does my relay
    write more bytes onto the network than it reads?</a></h3>

    <p>You're right, for the most part a byte into your Tor relay means a
    byte out, and vice versa. But there are a few exceptions:</p>

    <p>If you open your DirPort, then Tor clients will ask you for a copy of
    the directory. The request they make (an HTTP GET) is quite small, and the
    response is sometimes quite large. This probably accounts for most of the
    difference between your "write" byte count and your "read" byte count.</p>

    <p>Another minor exception shows up when you operate as an exit node, and
    you read a few bytes from an exit connection (for example, an instant
    messaging or ssh connection) and wrap it up into an entire 512 byte cell
    for transport through the Tor network.</p>


    <a id="Hibernation"></a>
    <h3><a class="anchor" href="#Hibernation">Why can I not browse anymore
    after limiting bandwidth on my Tor relay?</a></h3>

    <p>The parameters assigned in the <a
    href="#LimitTotalBandwidth">AccountingMax</a> and <a
    href="#BandwidthShaping">BandwidthRate</a> apply to both client and
    relay functions of the Tor process. Thus you may find that you are unable
    to browse as soon as your Tor goes into hibernation, signaled by this
    entry in the log:</p>

    <pre>Bandwidth soft limit reached; commencing hibernation. No new
    connections will be accepted</pre>

    <p>The solution is to run two Tor processes - one relay and one client,
    each with its own config. One way to do this (if you are starting from a
    working relay setup) is as follows:</p>

        <ul>
        <li>In the relay Tor torrc file, simply set the SocksPort to 0.</li>
        <li>Create a new client torrc file from the torrc.sample and ensure
        it uses a different log file from the relay. One naming convention
        is torrc.client and torrc.relay.</li>
        <li>Modify the Tor client and relay startup scripts to include
        '-f /path/to/correct/torrc'.</li>
        <li>On Linux/BSD/OSX, changing the startup scripts to Tor.client
        and Tor.relay may make separation of configs easier.</li>
        </ul>


    <a id="ExitPolicies"></a>
    <h3><a class="anchor" href="#ExitPolicies">I'd run a relay, but I
don't want to deal with abuse issues.</a></h3>

    Great. That's exactly why we implemented exit policies.

    Each Tor relay has an exit policy that specifies what sort of
    outbound connections are allowed or refused from that relay. The
    policies are propagated to Tor clients via the directory, so clients
    will automatically avoid picking exit relays that would refuse to
    exit to their intended destination. This way each relay operator can
    decide the services, hosts, and networks they want to allow connections
    to, based on abuse potential and their own situation. Read the FAQ entry on
    <a href="<page docs/faq-abuse>#TypicalAbuses">issues you might encounter</a>
    if you use the default exit policy, and then read Mike Perry's
    <a href="<blog>tips-running-exit-node-minimal-harassment">tips
    for running an exit node with minimal harassment</a>.

    The default exit policy allows access to many popular services
    (e.g. web browsing), but restricts
    some due to abuse potential (e.g. mail) and some since
    the Tor network can't handle the load (e.g. default
    file-sharing ports). You can change your exit policy
    using Vidalia's "Sharing" tab, or by manually editing your
    <a href="<page docs/faq>#torrc">torrc</a>
    file. If you want to avoid most if not all abuse potential, set it to
    "reject *:*" (or un-check all the boxes in Vidalia). This setting means
    that your relay will be used for relaying traffic inside the Tor network,
    but not for connections to external websites or other services.

    If you do allow any exit connections, make sure name resolution works
    (that is, your computer can resolve Internet addresses correctly).
    If there are any resources that your computer can't reach (for example,
    if you are behind a restrictive firewall or content filter), please
    explicitly reject them in your exit policy &mdash; otherwise Tor
    users will be impacted too.


    <a id="BestOSForRelay"></a>
    <h3><a class="anchor" href="#BestOSForRelay">Why doesn't my Windows (or other OS) Tor relay run well?</a></h3>

    Tor relays work best on Linux, FreeBSD 5.x+, OS X Tiger or
    later, and Windows Server 2003 or later.

    <p>You can probably get it working just fine on other operating
    systems too, but note the following caveats:

    Versions of Windows without the word "server" in their name
    sometimes have problems. This is especially the case for Win98,
    but it also happens in some cases for XP, especially if you don't
    have much memory. The problem is that we don't use the networking
    system calls in a very Windows-like way, so we run out of space in
    a fixed-size memory space known as the non-page pool, and then
    everything goes bad. The symptom is an assert error with the
    message "No buffer space available [WSAENOBUFS] [10055]".

    Most developers who contribute to Tor work with Unix-like operating
    systems. It would be great if more people with Windows experience helped
    out, so we can improve Tor's usability and stability on Windows.
    More esoteric or archaic operating systems, like SunOS 5.9 or
    Irix64, may have problems with some libevent methods (devpoll,
    etc), probably due to bugs in libevent. If you experience crashes,
    try setting the EVENT_NODEVPOLL or equivalent environment variable.

    <a id="PackagedTor"></a>
    <h3><a class="anchor" href="#PackagedTor">Should I install Tor from my
    package manager, or build from source?</a></h3>
    If you're using Debian or Ubuntu especially, there are a number of benefits
    to installing Tor from the <a
    href="<page docs/debian>">Tor Project's repository</a>:
    <ul>
      <li>Your ulimit -n gets set to 32768 &mdash; high enough for Tor to
      keep open all the connections it needs.</li>
      <li>A user profile is created just for Tor, so Tor doesn't need to
      run as root.</li>
      <li>An init script is included so that Tor runs at boot.</li>
      <li>Tor runs with --verify-config, so that most problems with your
      config file get caught.</li>
      <li>Tor can bind to low-numbered ports, then drop privileges.</li>
    </ul>


    <a id="WhatIsTheBadExitFlag"></a>
    <h3><a class="anchor" href="#WhatIsTheBadExitFlag">What is the
    BadExit flag?</a></h3>

    <p>When an exit is misconfigured or malicious it's assigned the BadExit
    flag. This tells Tor to avoid exiting through that relay. In effect,
    relays with this flag become non-exits.</p>


    <a id="IGotTheBadExitFlagWhyDidThatHappen"></a>
    <h3><a class="anchor" href="#IGotTheBadExitFlagWhyDidThatHappen">I got
    the BadExit flag. Why did that happen?</a></h3>

    <p>If you got this flag then we either discovered a problem or suspicious
    activity coming from your exit and weren't able to contact you. The reasons
    for most flaggings are documented on the bad
    relays wiki. Please <a
    href="<page about/contact>">contact us</a> so
    we can sort out the issue.</p>


    <a id="MyRelayRecentlyGotTheGuardFlagAndTrafficDroppedByHalf"></a>
    <h3><a class="anchor" href="#MyRelayRecentlyGotTheGuardFlagAndTrafficDroppedByHalf">My
    relay recently got the Guard flag and traffic dropped by half.</a></h3>
    Since it's now a guard, clients are using it less in other positions, but
    not many clients have rotated their existing guards out to use it as a
    guard yet. Read more details in this blog
    post or in <a href="http://freehaven.net/anonbib/#wpes12-cogs">Changing
    of the Guards: A Framework for Understanding and Improving Entry Guard
    Selection in Tor</a>.


    <a id="TorClientOnADifferentComputerThanMyApplications"></a>
    <h3><a class="anchor" href="#TorClientOnADifferentComputerThanMyApplications">I
    want to run my Tor client on a different computer than my applications.</a></h3>
    By default, your Tor client only listens for applications that
    connect from localhost. Connections from other computers are
    refused. If you want to torify applications on different computers
    than the Tor client, you should edit your torrc to define
    SocksListenAddress and then restart (or hup) Tor. If you
    want to get more advanced, you can configure your Tor client on a
    firewall to bind to your internal IP but not your external IP.


    <a id="ServerClient"></a>
    <h3><a class="anchor" href="#ServerClient">Can I install Tor on a
    central server, and have my clients connect to it?</a></h3>
     Yes. Tor can be configured as a client or a relay on another
     machine, and allow other machines to be able to connect to it
     for anonymity. This is most useful in an environment where many
     computers want a gateway of anonymity to the rest of the world.
     However, be forewarned that with this configuration, anyone within
     your private network (existing between you and the Tor
     client/relay) can see what traffic you are sending in clear text.
     The anonymity doesn't start until you get to the Tor relay.
     Because of this, if you are the controller of your domain and you
     know everything's locked down, you will be OK, but this configuration
     may not be suitable for large private networks where security is
     key all around.
Configuration is simple: edit your torrc file's SocksListenAddress
according to the following examples:

<pre>
  #This provides local interface access only,
  #needs SocksPort to be greater than 0

  #This provides access to Tor on a specified interface
  SocksListenAddress 192.168.x.x:9100

  #Accept from all interfaces
</pre>
You can state multiple listen addresses, in the case that you are
part of several networks or subnets.
<pre>
  SocksListenAddress 192.168.x.x:9100 #eth0
  SocksListenAddress 10.x.x.x:9100 #eth1
</pre>
After this, your clients on their respective networks/subnets would specify
a socks proxy with the address and port you specified in SocksListenAddress.
Please note that the SocksPort configuration option gives the port ONLY for
localhost ( When setting up your SocksListenAddress(es), you need
to give the port with the address, as shown above.
If you are interested in forcing all outgoing data through the central Tor
client/relay, instead of the server only being an optional proxy, you may
find the program iptables (for *nix) useful.


    <a id="RelayOrBridge"></a>
    <h3><a class="anchor" href="#RelayOrBridge">Should I be a normal
relay or bridge relay?</a></h3>

    <p><a href="<page docs/bridges>">Bridge relays</a> (or "bridges" for short)
    are <a href="<page docs/tor-doc-relay>">Tor relays</a> that aren't
    listed in the public Tor directory.
    That means that ISPs or governments trying to block access to the
    Tor network can't simply block all bridges.

    <p>Being a normal relay vs being a bridge relay is almost the same
    configuration: it's just a matter of whether your relay is listed
    publicly or not.

    So bridges are useful a) for Tor users in oppressive regimes,
    and b) for people who want an extra layer of security
    because they're worried somebody will recognize that it's a public
    Tor relay IP address they're contacting.

    Several countries, including China and Iran, have found ways to
    detect and block connections to Tor bridges.
    <a href="<page projects/obfsproxy>">Obfsproxy</a> bridges address
    this by adding another layer of obfuscation.

    <p>So should you run a normal relay or bridge relay? If you have lots
    of bandwidth, you should definitely run a normal relay.
    If you're willing
    to <a href="#ExitPolicies">be an exit</a>, you should definitely
    run a normal relay, since we need more exits. If you can't be an
    exit and only have a little bit of bandwidth, be a bridge. Thanks
    for volunteering!


<a id="UpgradeOrMove"></a>
<h3><a class="anchor" href="#UpgradeOrMove">I want to upgrade/move my relay.
How do I keep the same key?</a></h3>

 When upgrading your Tor relay, or running it on a different computer,
 the important part is to keep the same nickname (defined in your torrc
 file) and the same identity key (stored in "keys/secret_id_key" in
 your DataDirectory).
This means that if you're upgrading your Tor relay and you keep the same
torrc and the same DataDirectory, then the upgrade should just work and
your relay will keep using the same key. If you need to pick a new
DataDirectory, be sure to copy your old keys/secret_id_key over.


<a id="NTService"></a>
<h3><a class="anchor" href="#NTService">How do I run my Tor relay as an NT

 You can run Tor as a service on all versions of Windows except Windows
 95/98/ME. This way you can run a Tor relay without needing to always have
 Vidalia running.
If you've already configured your Tor to be a relay, please note that when
you enable Tor as a service, it will use a different DataDirectory, and
thus will generate a different key. If you want to keep using the old key,
see the <a href="#UpgradeOrMove">relay upgrade FAQ entry</a> for how to
restore the old identity key.
To install Tor as a service, you can simply run:
tor --service install
A service called Tor Win32 Service will be installed and started. This
service will also automatically start every time Windows boots, unless
you change the Start-up type. An easy way to check the status of Tor,
start or stop the service, and change the start-up type is by running
services.msc and finding the Tor service in the list of currently
installed services.
Optionally, you can specify additional options for the Tor service using
the -options argument. For example, if you want Tor to use C:\tor\torrc,
instead of the default torrc, and open a control port on port 9151, you
would run:
tor --service install -options -f C:\tor\torrc ControlPort 9151
You can also start or stop the Tor service from the command line by typing:
 tor --service start
 tor --service stop
To remove the Tor service, you can run the following command:
tor --service remove
If you are running Tor as a service and you want to uninstall Tor entirely,
be sure to run the service removal command (shown above) first before
running the uninstaller from "Add/Remove Programs". The uninstaller is
currently not capable of removing the active service.


<a id="VirtualServer"></a>
<h3><a class="anchor" href="#VirtualServer">Can I run a Tor relay from my
virtual server account?</a></h3>

Some ISPs are selling "vserver" accounts that provide what they call a
virtual server -- you can't actually interact with the hardware, and
they can artificially limit certain resources such as the number of file
descriptors you can open at once. Competent vserver admins are able to
configure your server to not hit these limits. For example, in SWSoft's
Virtuozzo, investigate /proc/user_beancounters. Look for "failcnt" in
tcpsndbuf, tcprecvbuf, numothersock, and othersockbuf. Ask for these to
be increased accordingly.
Xen, VirtualBox and VMware virtual servers normally have no such limits.
If the vserver admin will not increase system limits, another option is
to reduce the memory allocated to the send and receive buffers on TCP
connections Tor uses. An experimental feature to constrain socket buffers
has recently been added. If your version of Tor supports it, set
"ConstrainedSockets 1" in your configuration. See the tor man page for
additional details about this option.
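For example, the relevant torrc lines might look like this (8192 bytes is the value the man page suggests as a default; treat the exact size as something to tune for your system):

```
# Constrain per-connection socket buffers on resource-limited vservers
ConstrainedSockets 1
ConstrainedSockSize 8192
```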
Unfortunately, since Tor currently requires you to be able to connect to
all the other Tor relays, we need you to be able to use at least 1024 file
descriptors. This means we can't make use of Tor relays that are crippled
in this way.
We hope to fix this in the future, once we know how to build a Tor network
with restricted topologies -- that is, where each node connects to only a
few other nodes. But this is still a long way off.


<a id="MultipleRelays"></a>
<h3><a class="anchor" href="#MultipleRelays">I want to run more than one
relay.</a></h3>

Great. If you want to run several relays to donate more to the network,
we're happy with that. But please don't run more than a few dozen on
the same network, since part of the goal of the Tor network is dispersal
and diversity.

If you do decide to run more than one relay, please set the "MyFamily"
config option in the <a href="#torrc">torrc</a> of each relay, listing
all the relays (comma-separated) that are under your control:

    MyFamily $fingerprint1,$fingerprint2,$fingerprint3

where each fingerprint is the 40 character identity fingerprint (without
spaces). You can also list relays by nickname, but fingerprints are safer. Be
sure to prefix the digest strings with a dollar sign ('$') so that the
digest is not confused with a nickname in the config file.

That way clients will know to avoid using more than one of your relays
in a single circuit. You should set MyFamily if you have administrative
control of the computers or of their network, even if they're not all in
the same geographic location.


    <a id="WrongIP"></a>
    <h3><a class="anchor" href="#WrongIP">My relay is picking the wrong
    IP address.</a></h3>
 Tor guesses its IP address by asking the computer for its hostname, and
 then resolving that hostname. Often people have old entries in their
 /etc/hosts file that point to old IP addresses.
If that doesn't fix it, you should use the "Address" config option to
specify the IP you want it to pick. If your computer is behind a NAT and
it only has an internal IP address, see the following FAQ entry on <a
href="#RelayFlexible">dynamic IP addresses</a>.
Also, if you have many addresses, you might also want to set
"OutboundBindAddress" so external connections come from the IP you intend
to present to the world.
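For example, a torrc sketch using a documentation-range address as a placeholder for your relay's real public IP:

```
# Pin both the advertised address and the outbound source address
Address 203.0.113.5
OutboundBindAddress 203.0.113.5
```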


    <a id="BehindANAT"></a>
    <h3><a class="anchor" href="#BehindANAT">I'm behind a NAT/Firewall.</a></h3>

See <a href="http://portforward.com/">portforward.com</a> for directions on
how to forward ports on your NAT/router device.
If your relay is running on an internal net, you need to set up port forwarding.
Forwarding TCP connections is system dependent, but the firewalled-clients FAQ
entry offers some examples on how to do this.
Also, here's an example of how you would do this on GNU/Linux if you're using
iptables:
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 9001 -j ACCEPT
You may have to change "eth0" if you have a different external interface
(the one connected to the Internet). Chances are you have only one (except
the loopback) so it shouldn't be too hard to figure out.

    <a id="RelayMemory"></a>
    <h3><a class="anchor" href="#RelayMemory">Why is my Tor relay using
so much memory?</a></h3>

    <p>If your Tor relay is using more memory than you'd like, here are
    some tips for reducing its footprint:

    <li>If you're on Linux, you may be encountering memory fragmentation
    bugs in glibc's malloc implementation. That is, when Tor releases memory
    back to the system, the pieces of memory are fragmented so they're hard
    to reuse. The Tor tarball ships with OpenBSD's malloc implementation,
    which doesn't have as many fragmentation bugs (but the tradeoff is higher
    CPU load). You can tell Tor to use this malloc implementation instead:
    <tt>./configure --enable-openbsd-malloc</tt></li>

    <li>If you're running a fast relay, meaning you have many TLS connections
    open, you are probably losing a lot of memory to OpenSSL's internal
    buffers (38KB+ per socket). We've patched OpenSSL to <a href="https://lists.torproject.org/pipermail/tor-dev/2008-June/001519.html">release
    unused buffer memory more aggressively</a>. If you update to OpenSSL
    1.0.0 or newer, Tor's build process will automatically recognize and use
    this feature.</li>

<!-- Nickm says he's not sure this is still accurate

    <li>If you're running on Solaris, OpenBSD, NetBSD, or
    old FreeBSD, Tor is probably forking separate processes
    rather than using threads. Consider switching to a modern
    operating system.</li>
-->
    <li>If you still can't handle the memory load, consider reducing the
    amount of bandwidth your relay advertises. Advertising less bandwidth
    means you will attract fewer users, so your relay shouldn't grow
    as large. See the <tt>MaxAdvertisedBandwidth</tt> option in the man
    page.</li>

    All of this said, fast Tor relays do use a lot of RAM. It is not unusual
    for a fast exit relay to use 500-1000 MB of memory.


    <a id="BetterAnonymity"></a>
    <h3><a class="anchor" href="#BetterAnonymity">Do I get better anonymity
    if I run a relay?</a></h3>

Yes, you do get better anonymity against some attacks.
The simplest example is an attacker who owns a small number of Tor relays.
He will see a connection from you, but he won't be able to know whether
the connection originated at your computer or was relayed from somebody else.
There are some cases where it doesn't seem to help: if an attacker can
watch all of your incoming and outgoing traffic, then it's easy for him
to learn which connections were relayed and which started at you. (In
this case he still doesn't know your destinations unless he is watching
them too, but you're no better off than if you were an ordinary client.)
There are also some downsides to running a Tor relay. First, while we
only have a few hundred relays, the fact that you're running one might
signal to an attacker that you place a high value on your anonymity.
Second, there are some more esoteric attacks that are not as
well-understood or well-tested that involve making use of the knowledge
that you're running a relay -- for example, an attacker may be able to
"observe" whether you're sending traffic even if he can't actually watch
your network, by relaying traffic through your Tor relay and noticing
changes in traffic timing.
It is an open research question whether the benefits outweigh the risks.
A lot of that depends on the attacks you are most worried about. For
most users, we think it's a smart move.


    <a id="FacingLegalTrouble"></a>
    <h3><a class="anchor" href="#FacingLegalTrouble">I'm facing legal
    trouble. How do I prove that my server was a Tor relay at a given
    time?</a></h3>

    <p><a href="https://exonerator.torproject.org/">
    Exonerator</a> is a web service that can check if an IP address was a
    relay at a given time. We can also <a
    href="<page about/contact>">provide a signed
    letter</a> if needed.</p>


    <a id="RelayDonations"></a>
    <h3><a class="anchor" href="#RelayDonations">Can I donate for a
    relay rather than run my own?</a></h3>

    Sure! We recommend these non-profit charities that are happy to turn
    your donations into better speed and anonymity for the Tor network:
    <li><a href="https://www.torservers.net/">torservers.net</a>
    is a German charitable non-profit that runs a wide variety of
    exit relays worldwide. They also like donations of bandwidth from
    ISPs.</li>
    <li><a href="https://www.noisebridge.net/">Noisebridge</a>
    is a US-based 501(c)(3) non-profit that collects donations and turns
    them into more US-based exit relay capacity.</li>
    <li><a href="https://nos-oignons.net/">Nos Oignons</a> is a French
    charitable non-profit that runs fast exit relays in France.</li>
    <li><a href="https://www.dfri.se/donera/?lang=en">DFRI</a> is a
    Swedish non-profit running exit relays.</li>

    These organizations are not the same as <a href="<page
    donate/donate>">The Tor Project, Inc</a>, but we consider that a
    good thing. They're all run by nice people who are part of the
    Tor community.

    Note that there can be a tradeoff here between anonymity and
    performance. The Tor network's anonymity comes in part from diversity,
    so if you are in a position to run your own relay, you will be
    improving Tor's anonymity more than by donating. At the same time
    though, economies
    of scale for bandwidth mean that combining many small donations into
    several larger relays is more efficient at improving network
    performance. Improving anonymity and improving performance are both
    worthwhile goals, so however you can help is great!


<a id="TorHiddenServices"></a>
<h2><a class="anchor">Tor hidden services:</a></h2>

    <a id="AccessHiddenServices"></a>
    <h3><a class="anchor" href="#AccessHiddenServices">How do I access
    hidden services?</a></h3>

    Tor hidden services are named with a special top-level domain (TLD)
    name in DNS: .onion. Since the .onion TLD is not recognized by the
    official root DNS servers on the Internet, your application will not
    get the response it needs to locate the service. Currently, the Tor
    directory servers provide this look-up service, and thus the look-up
    request must get to the Tor network.

 Therefore, your application <b>needs</b> to pass the .onion hostname to
 Tor directly. You can't try to resolve it to an IP address, since there
 <i>is</i> no corresponding IP address: the server is hidden, after all!

    So, how do you make your application pass the hostname directly to Tor?
    You can't use SOCKS 4, since SOCKS 4 proxies require an IP from the
    client (a web browser is an example of a SOCKS client). Even though
    SOCKS 5 can accept either an IP or a hostname, most applications
    supporting SOCKS 5 try to resolve the name before passing it to the
    SOCKS proxy. SOCKS 4a, however, always accepts a hostname: You'll need
    to use SOCKS 4a.
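At the wire level, SOCKS 4a signals "resolve remotely" by putting an invalid address of the form 0.0.0.x in the 4-byte destination-IP field and appending the hostname after the user ID, so the .onion name reaches Tor intact. A minimal sketch of building such a request (a hypothetical helper, not part of any Tor library):

```python
import struct

def socks4a_connect_request(host: str, port: int, user: str = "") -> bytes:
    """Build a SOCKS 4a CONNECT request that passes `host` unresolved."""
    return (
        struct.pack(">BBH", 4, 1, port)   # version 4, command 1 = CONNECT, port
        + b"\x00\x00\x00\x01"             # invalid IP 0.0.0.1 marks SOCKS 4a
        + user.encode("ascii") + b"\x00"  # user ID, NUL-terminated
        + host.encode("ascii") + b"\x00"  # hostname, NUL-terminated
    )

# The resulting bytes would be written to Tor's SOCKS port (9050 by default).
req = socks4a_connect_request("example.onion", 80)
```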

    Some applications, such as the browsers Mozilla Firefox and Apple's
    Safari, support sending DNS queries to Tor's SOCKS 5 proxy. Most web
    browsers don't support SOCKS 4a very well, though. The workaround is
    to point your web browser at an HTTP proxy, and tell the HTTP proxy
    to speak to Tor with SOCKS 4a. We recommend Polipo as your HTTP proxy.

    For applications that do not support HTTP proxy, and so cannot use
    Polipo, <a href="http://www.freecap.ru/eng/">FreeCap</a> is an
    alternative. When using FreeCap, set the proxy protocol to SOCKS 5 and,
    under settings, set DNS name resolving to remote. This
    will allow you to use almost any program with Tor without leaking DNS
    lookups and allow those same programs to access hidden services.

    See also the <a href="#SocksAndDNS">question on DNS</a>.


    <a id="ProvideAHiddenService"></a>
    <h3><a class="anchor" href="#ProvideAHiddenService">How do I provide a
    hidden service?</a></h3>

    See the <a href="<page docs/tor-hidden-service>">
    official hidden service configuration instructions</a>.


    <a id="Development"></a>
    <h2><a class="anchor">Development:</a></h2>

    <a id="VersionNumbers"></a>
    <h3><a class="anchor" href="#VersionNumbers">What do these weird
    version numbers mean?</a></h3>

    Versions of Tor before 0.1.0 used a strange and hard-to-explain
    version scheme. Let's forget about those.
    Starting with 0.1.0, versions all look like this:
    MAJOR.MINOR.MICRO(.PATCHLEVEL)(-TAG). The parts in parentheses are
    optional. MAJOR, MINOR, MICRO, and PATCHLEVEL are all numbers. Only one
    release is ever made with any given set of these version numbers. The
    TAG lets you know how stable we think the release is: "alpha" is pretty
    unstable; "rc" is a release candidate; and no tag at all means that we
    have a final release. If the tag ends with "-cvs", you're looking at
    a development snapshot that came after a given release.
    So for example, we might start a development branch with (say)
    0.2.5.1-alpha. The patchlevel increments as the status tag changes,
    for example: 0.2.5.2-alpha, 0.2.5.3-rc, 0.2.5.4-rc, and so on.
    Eventually, we would release 0.2.5.5. The next stable release would
    be a 0.2.6.x version.
    Why do we do it like this? Because every release has a unique
    version number, it is easy for tools like package managers to tell
    which release is newer than another. The tag makes it easy for users
    to tell how stable the release is likely to be.
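The scheme above gives every release a unique, comparable version. As an illustration (not Tor's own code), a comparison helper might look like:

```python
import re

def parse_tor_version(v: str):
    """Turn 'MAJOR.MINOR.MICRO(.PATCHLEVEL)(-TAG)' into a comparable tuple.

    Untagged final releases rank above '-rc', which ranks above '-alpha',
    matching the stability ordering described above.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:\.(\d+))?(?:-(\w+))?", v)
    if not m:
        raise ValueError("not a recognized Tor version: " + v)
    major, minor, micro, patch, tag = m.groups()
    tag_rank = {None: 2, "rc": 1}.get(tag, 0)  # alpha/other tags rank lowest
    return (int(major), int(minor), int(micro), int(patch or 0), tag_rank)
```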


    <a id="PrivateTorNetwork"></a>
    <h3><a class="anchor" href="#PrivateTorNetwork">How do I set up my
    own private Tor network?</a></h3>

    If you want to experiment locally with your own network, or you're
    cut off from the Internet and want to be able to mess with Tor still,
    then you may want to set up your own separate Tor network.
    To set up your own Tor network, you need to run your own authoritative
    directory servers, and your clients and relays must be configured so
    they know about your directory servers rather than the default public
    ones.
    Apart from the somewhat tedious method of manually configuring a couple
    of directory authorities, relays, and clients, there are two separate
    tools that can help. One is Chutney, the other is Shadow.
    <a href="https://gitweb.torproject.org/chutney.git">Chutney</a> is a
    tool for configuring, controlling and running tests on a
    testing Tor network. It requires that you have Tor and Python (2.5 or
    later) installed on your system. You can use Chutney to create a testing
    network by generating Tor configuration files (torrc) and necessary keys
    (for the directory authorities). Then you can let Chutney start your Tor
    authorities, relays and clients and wait for the network to bootstrap.
    Finally, you can have Chutney run tests on your network to see which
    things work and which do not. Chutney is typically used for running a
    testing network with about 10 instances of Tor. Every instance of Tor
    binds to one or two ports on localhost, and all Tor
    communication is done over the loopback interface. The Chutney README
    is a good starting point for getting it up and running.
    <a href="https://github.com/shadow/shadow">Shadow</a> is a network
    simulator that can run Tor through its Scallion plug-in. Although
    it's typically used for running load and performance tests on
    substantially larger Tor test networks than what's feasible with
    Chutney, it also makes for an excellent debugging tool since you can
    run completely deterministic experiments. A large Shadow network is on
    the order of thousands of instances of Tor, and you can run experiments
    out of the box using one of Shadow's several included Scallion experiment
    configurations. Shadow can be run on any Linux machine without root,
    and can also run on EC2 using a pre-configured image. Also, Shadow
    controls the time of the simulation with the effect that
    time-consuming tests can be done more efficiently than in an
    ordinary testing network. The <a
    href="https://github.com/shadow/shadow/wiki">Shadow wiki</a> and
    <a href="http://shadow.github.io/">Shadow website</a> are
    good places to get started.


    <a id="UseTorWithJava"></a>
    <h3><a class="anchor" href="#UseTorWithJava">How can I make my Java
    program use the Tor Network?</a></h3>

    The newest versions of Java now have SOCKS4/5 support built in.
    Unfortunately, the SOCKS interface is not very well documented and
    may still leak your DNS lookups. The safest way to use Tor is to
    interface the SOCKS protocol directly or go through an application-level
    proxy that speaks SOCKS4a. For an example and libraries that implement
    the SOCKS4a connection, go to Joe Foley's TorLib in the <a
    href="http://web.mit.edu/foley/www/TinFoil/">TinFoil Project</a>.

    A fully Java implementation of the Tor client is now available as <a
    href="http://www.subgraph.com/orchid.html">Orchid</a>. We still consider
    Orchid to be experimental, so use with care.


    <a id="WhatIsLibevent"></a>
    <h3><a class="anchor" href="#WhatIsLibevent">What is Libevent?</a></h3>

    When you want to deal with a bunch of net connections at once, you
    have a few options:
    One is multithreading: you have a separate micro-program inside the
    main program for each net connection that reads and writes to the
    connection as needed. This, performance-wise, sucks.
    Another is asynchronous network programming: you have a single main
    program that finds out when various net connections are ready to
    read/write, and acts accordingly.
    The problem is that the oldest ways to find out when net connections
    are ready to read/write suck. And the newest ways are finally fast,
    but are not available on all platforms.
    This is where Libevent comes in: it wraps all these ways of finding
    out whether net connections are ready to read/write, so that Tor
    (and other programs) can use the fastest one that your platform
    supports while still working on older platforms (these methods all
    differ depending on the platform). So Libevent presents a
    consistent and fast interface to select, poll, kqueue, epoll,
    /dev/poll, and Windows select.
    However, on the Win32 platform the only good way to do fast IO
    with hundreds of sockets is overlapped IO, which is grossly unlike
    every other platform's BSD sockets API.
    <p>Libevent has <a href="http://www.monkey.org/~provos/libevent/">its
    own website</a>.
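Python's standard selectors module makes the same design choice as Libevent: a single interface over whichever readiness API the platform offers. A small sketch:

```python
import selectors
import socket

# DefaultSelector picks the fastest readiness mechanism available on this
# platform (epoll, kqueue, poll, or select), much as Libevent does for Tor.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

b.send(b"ping")                                   # make `a` readable
ready = [key.fileobj for key, _ in sel.select(timeout=1)]
msg = a.recv(4) if a in ready else b""            # act only on ready sockets

sel.close()
a.close()
b.close()
```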

    <a id="MyNewFeature"></a>
    <h3><a class="anchor" href="#MyNewFeature">What do I need to do to get
    a new feature into Tor?</a></h3>

    For a new feature to go into Tor, it needs to be designed (explain what
    you think Tor should do), argued to be secure (explain why it's better
    or at least as good as what Tor does now), specified (explained at the
    byte level at approximately the level of detail in tor-spec.txt), and
    implemented (done in software).

    You probably shouldn't count on other people doing all of these steps
    for you: people who are skilled enough to do this stuff generally
    have their own favorite feature requests.


    <a id="AnonymityAndSecurity"></a>
    <h2><a class="anchor">Anonymity And Security:</a></h2>

    <a id="WhatProtectionsDoesTorProvide"></a>
    <h3><a class="anchor" href="#WhatProtectionsDoesTorProvide">What
    protections does Tor provide?</a></h3>

    Internet communication is based on a store-and-forward model that
    can be understood in analogy to postal mail: Data is transmitted in
    blocks called IP datagrams or packets. Every packet includes a source
    IP address (of the sender) and a destination IP address (of the
    receiver), just as ordinary letters contain postal addresses of sender
    and receiver. The way from sender to receiver involves multiple hops of
    routers, where each router inspects the destination IP address and
    forwards the packet closer to its destination. Thus, every router
    between sender and receiver learns that the sender is communicating
    with the receiver. In particular, your local ISP is in the position to
    build a complete profile of your Internet usage. In addition, every
    server in the Internet that can see any of the packets can profile
    your behavior.

    The aim of Tor is to improve your privacy by sending your traffic through
    a series of proxies. Your communication is encrypted in multiple layers
    and routed via multiple hops through the Tor network to the final
    receiver. More details on this process can be found in the <a
    href="https://www.torproject.org/about/overview">Tor overview</a>.
    Note that all your local ISP can observe now is that you are
    communicating with Tor nodes. Similarly, servers in the Internet just
    see that they are being contacted by Tor nodes.

    Generally speaking, Tor aims to solve three privacy problems:

    First, Tor prevents websites and other services from learning
    your location, which they can use to build databases about your
    habits and interests. With Tor, your Internet connections don't
    give you away by default -- you have the ability to choose,
    for each connection, how much information to reveal.

    Second, Tor prevents people watching your traffic locally (such as
    your ISP) from learning what information you're fetching and where
    you're fetching it from. It also stops them from deciding what you're
    allowed to learn and publish -- if you can get to any part of the Tor
    network, you can reach any site on the Internet.

    Third, Tor routes your connection through more than one Tor relay
    so no single relay can learn what you're up to. Because these relays
    are run by different individuals or organizations, distributing trust
    provides more security than the old <a href="#Torisdifferent">one hop proxy
    </a> approach.

    Note, however, that there are situations where Tor fails to solve these
    privacy problems entirely: see the entry below on <a
    href="#AttacksOnOnionRouting">remaining attacks</a>.


    <a id="CanExitNodesEavesdrop"></a>
    <h3><a class="anchor" href="#CanExitNodesEavesdrop">Can exit nodes eavesdrop
    on communications? Isn't that bad?</a></h3>

    Yes, the guy running the exit node can read the bytes that come in and
    out there. Tor anonymizes the origin of your traffic, and it makes sure
    to encrypt everything inside the Tor network, but it does not magically
    encrypt all traffic throughout the Internet.

    This is why you should always use end-to-end encryption such as SSL for
    sensitive Internet connections. (The corollary to this answer is that if
    you are worried about somebody intercepting your traffic and you're
    *not* using end-to-end encryption at the application layer, then something
    has already gone wrong and you shouldn't be thinking that Tor is the problem.)

    Tor does provide a partial solution in a very specific situation, though.
    When you make a connection to a destination that also runs a Tor relay,
    Tor will automatically extend your circuit so you exit from that relay.
    So for example if Indymedia ran a Tor relay on the same IP address as
    their website, people using Tor to get to the Indymedia website would
    automatically exit from their Tor relay, thus getting *better* encryption
    and authentication properties than just browsing there the normal way.

    We'd like to make it still work even if the service is nearby the Tor
    relay but not on the same IP address. But there are a variety of
    technical problems we need to overcome first (the main one being "how
    does the Tor client learn which relays are associated with which
    websites in a decentralized yet non-gamable way?").


    <a id="AmITotallyAnonymous"></a>
    <h3><a class="anchor" href="#AmITotallyAnonymous">So I'm totally anonymous
    if I use Tor?</a></h3>

    First, Tor protects the network communications. It separates where you
    are from where you are going on the Internet. What content and data you
    transmit over Tor is controlled by you. If you login to Google or
    Facebook via Tor, the local ISP or network provider doesn't know you
    are visiting Google or Facebook. Google and Facebook don't know where
    you are in the world. However, since you have logged into their sites,
    they know who you are. If you don't want to share information, you are
    in control.

    Second, active content, such as Java, Javascript, Adobe Flash, Adobe
    Shockwave, QuickTime, RealAudio, ActiveX controls, and VBScript, are
    binary applications. These binary applications run as your user account
    with your permissions in your operating system. This means these
    applications can access anything that your user account can access. Some
    of these technologies, such as Java and Adobe Flash for instance, run in
    what is known as a virtual machine. This virtual machine may have the
    ability to ignore your configured proxy settings, and therefore bypass
    Tor and share information directly to other sites on the Internet. The
    virtual machine may be able to store data, such as cookies, completely
    separate from your browser or operating system data stores. Therefore,
    these technologies must be disabled in your browser to use Tor safely.
    That's where the <a
    href="<page projects/torbrowser>">Tor Browser
    Bundle</a> comes in. We produce a web browser that is preconfigured to
    help you control the risks to your privacy and anonymity while browsing
    the Internet. Not only are the above technologies disabled to prevent
    identity leaks, the Tor Browser also includes browser extensions like
    NoScript and Torbutton, as well as patches to the Firefox source
    code. The full design of the Tor Browser can be read in its design
    document.
    In designing a safe, secure solution for browsing the web with Tor,
    we've discovered that configuring <a href="#TBBOtherBrowser">other
    browsers</a> to use Tor is unsafe.

    Alternatively, you may find a Live CD or USB operating system more to
    your liking. The Tails team has created an <a
    href="https://tails.boum.org/">entire bootable operating system</a>
    configured for anonymity and privacy on the Internet.

    Tor is a work in progress. There is still <a
    href="https://www.torproject.org/getinvolved/volunteer">plenty of work
    left to do</a> for a strong, secure, and complete solution.


    <a id="ExitEnclaving"></a>
    <h3><a class="anchor" href="#ExitEnclaving">What is Exit Enclaving?</a></h3>

    When a machine that runs a Tor relay also runs a public service, such as
    a webserver, you can configure Tor to offer Exit Enclaving to that
    service. Running an Exit Enclave for all of your services you wish to
    be accessible via Tor provides your users the assurance that they will
    exit through your server, rather than exiting from a randomly selected
    exit node that could be watched. Normally, a Tor circuit would end at
    an exit node and then that node would make a connection to your service.
    Anyone watching that exit node could see the connection to your service,
    and be able to snoop on the contents if it were an unencrypted
    connection. If you run an Exit Enclave for your service, then the exit
    from the Tor network happens on the machine that runs your service,
    rather than on an untrusted random node. This works when Tor clients
    wishing to connect to this public service extend their circuit
    to exit from the Tor relay running on that same host. For example, if
    a server runs a web server on port 80 and also acts as a
    Tor relay configured for Exit Enclaving, then Tor clients wishing to
    connect to the webserver will extend their circuit a fourth hop to exit
    to port 80 on the Tor relay running on that same server.
    Exit Enclaving is disabled by default to prevent attackers from
    exploiting trust relationships with locally bound services. For
    example, a machine will often run services that are not designed to
    be shared with the entire world. Sometimes these services will also
    be bound to the public IP address, but will only allow connections if
    the source address is something trusted, such as the local host.
    As a result of possible trust issues, relay operators must configure
    their exit policy to allow connections to themselves, but they should
    do so only when they are certain that this is a feature that they would
    like. Once certain, turning off the ExitPolicyRejectPrivate option will
    enable Exit Enclaving. An example configuration (the accept address is a
    placeholder for your relay's own public IP) would be as follows:

    ExitPolicy accept [your public IP]:80
    ExitPolicy reject *:*
    ExitPolicyRejectPrivate 0
    This option should be used with care, as it may expose internal network
    blocks that are not meant to be accessible from the outside world or
    the Tor network. Please tailor your ExitPolicy to reflect all netblocks
    to which you want to prohibit access.
    While useful, this behavior may go away in the future because it is
    imperfect: a great idea, but not such a great implementation.


    <a id="KeyManagement"></a>
    <h3><a class="anchor" href="#KeyManagement">Tell me about all the
keys Tor uses.</a></h3>

    Tor uses a variety of different keys, with three goals in mind: 1)
    encryption to ensure privacy of data within the Tor network, 2)
    authentication so clients know they're
    talking to the relays they meant to talk to, and 3) signatures to make
    sure all clients know the same set of relays.

    <b>Encryption</b>: first, all connections in Tor use TLS link encryption,
    so observers can't look inside to see which circuit a given cell is
    intended for. Further, the Tor client establishes an ephemeral encryption
    key with each relay in the circuit; these extra layers of encryption
    mean that only the exit relay can read
    the cells. Both sides discard the circuit key when the circuit ends,
    so logging traffic and then breaking into the relay to discover the
    key won't work.

    Every Tor relay has a public key called the "onion key".
    Each relay rotates its onion key once a week.
    When the Tor client establishes circuits, at each step it demands
    that the Tor relay prove knowledge of its onion key. That way
    the first node in the path can't just spoof the rest of the path.
    Because the Tor client chooses the path, it can make sure to get
    Tor's "distributed trust" property: no single relay in the path can
    know about both the client and what the client is doing.

    How do clients know what the relays are, and how do they know that
    they have the right keys for them? Each relay has a long-term public
    key called the "identity key". Each directory authority additionally
has a
    "directory signing key". The directory authorities <a
    href="<specblob>dir-spec.txt">provide a signed list</a>
    of all the known relays, and in that list are a set of certificates
    from each relay (self-signed by their identity key) specifying their
    locations, exit policies, and so on. So unless the adversary can control
    a majority of the directory authorities (as of 2012 there are 8
    directory authorities), he can't trick the Tor client into using
    other Tor relays.

    How do clients know what the directory authorities are? The Tor software
    comes with a built-in list of location and public key for each directory
    authority. So the only way to trick users into using a fake Tor network
    is to give them a specially modified version of the software.

    How do users know they've got the right software? When we distribute
    the source code or a package, we digitally sign it with <a
    href="http://www.gnupg.org/">GNU Privacy Guard</a>. See the <a
    href="<page docs/verifying-signatures>">instructions
    on how to check Tor's signatures</a>.

    In order to be certain that it's really signed by us, you need to have
    met us in person and gotten a copy of our GPG key fingerprint, or you
    need to know somebody who has. If you're concerned about an attack at
    this level, we recommend you get involved with the security community
    and start meeting people.


<a id="EntryGuards"></a>
<h3><a class="anchor" href="#EntryGuards">What are Entry Guards?</a></h3>

Tor (like all current practical low-latency anonymity designs) fails
when the attacker can see both ends of the communications channel. For
example, suppose the attacker controls or watches the Tor relay you use
to enter the network, and also controls or watches the website you
visit. In
this case, the research community knows no practical low-latency design
that can reliably stop the attacker from correlating volume and timing
information on the two sides.

So, what should we do? Suppose the attacker controls, or can observe,
<i>C</i> relays. Suppose there are <i>N</i> relays total. If you select
new entry and exit relays each time you use the network, the attacker
will be able to correlate all traffic you send with probability
<i>(C/N)<sup>2</sup></i>. But profiling is, for most users, as bad
as being traced all the time: they want to do something often without
an attacker noticing, and the attacker noticing once is as bad as the
attacker noticing more often. Thus, choosing many random entries and
exits gives the user no chance of escaping profiling by this kind of attacker.

The solution is "entry guards": each Tor client selects a few relays at
random to use as entry points, and uses only those relays for her first hop. If
those relays are not controlled or observed, the attacker can't win,
ever, and the user is secure. If those relays <i>are</i> observed or
controlled by the attacker, the attacker sees a larger <i>fraction</i>
of the user's traffic &mdash; but still the user is no more profiled than
before. Thus, the user has some chance (on the order of <i>(N-C)/N</i>)
of avoiding profiling, whereas she had none before.

You can read more at <a href="http://freehaven.net/anonbib/#wright02">An
Analysis of the Degradation of Anonymous Protocols</a>, <a
href="http://freehaven.net/anonbib/#wright03">Defending Anonymous
Communication Against Passive Logging Attacks</a>, and especially
<a href="http://freehaven.net/anonbib/#hs-attack06">Locating Hidden
Servers</a>.

Restricting your entry nodes may also help against attackers who want
to run a few Tor nodes and easily enumerate all of the Tor user IP
addresses. (Even though they can't learn what destinations the users
are talking to, they still might be able to do bad things with just a
list of users.) However, that feature won't really become useful until
we move to a "directory guard" design as well.


    <a id="ChangePaths"></a>
    <h3><a class="anchor" href="#ChangePaths">How often does Tor change its paths?</a></h3>
     Tor will reuse the same circuit for new TCP streams for 10 minutes,
     as long as the circuit is working fine. (If the circuit fails, Tor
     will switch to a new circuit immediately.)
But note that a single TCP stream (e.g. a long IRC connection) will stay on
the same circuit forever -- we don't rotate individual streams from one
circuit to the next. Otherwise an adversary with a partial view of the
network would be given many chances over time to link you to your
destination, rather than just one chance.
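As a toy illustration of the behavior described above (not Tor's actual implementation), the following sketch models circuits that accept new streams only while they are under 10 minutes old, which corresponds to Tor's MaxCircuitDirtiness default:

```python
# Toy model of the behaviour described above: new streams attach to the
# current circuit until it is 10 minutes old (Tor's MaxCircuitDirtiness
# default), but an existing stream never migrates. Not Tor code.
import itertools

CIRCUIT_LIFETIME = 600  # seconds

class CircuitPicker:
    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self._current = None
        self._created_at = 0.0

    def attach_stream(self, now: float) -> int:
        """Return the circuit id a stream opened at time `now` attaches to."""
        if self._current is None or now - self._created_at >= CIRCUIT_LIFETIME:
            self._current = next(self._ids)
            self._created_at = now
        return self._current

picker = CircuitPicker()
print(picker.attach_stream(0))    # 1: first circuit
print(picker.attach_stream(300))  # 1: same circuit, still under 10 minutes
print(picker.attach_stream(700))  # 2: too old, new streams get a fresh circuit
```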


    <a id="CellSize"></a>
    <h3><a class="anchor" href="#CellSize">Tor uses hundreds of bytes for
    every IRC line. I can't afford that!</a></h3>
     Tor sends data in chunks of 512 bytes (called "cells"), to make it
     harder for intermediaries to guess exactly how many bytes you're
     communicating at each step. This is unlikely to change in the near
     future -- if this increased bandwidth use is prohibitive for you, I'm
     afraid Tor is not useful for you right now.
The actual content of these fixed size cells is
<a href="https://gitweb.torproject.org/torspec.git/blob/HEAD:/tor-spec.txt">
documented in the main Tor spec</a>, section 3.
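As a rough sketch of the overhead involved: assuming 512-byte cells and a usable relay-cell payload of roughly 498 bytes (treat that payload figure as an assumption; the exact layout is in the spec linked above), a short IRC line still costs a full cell:

```python
# Rough overhead estimate for fixed-size cells. CELL_SIZE comes from the
# text above; RELAY_PAYLOAD is the approximate usable payload per relay
# cell and should be treated as an assumption here.
import math

CELL_SIZE = 512
RELAY_PAYLOAD = 498

def cells_needed(message_bytes: int) -> int:
    """How many fixed-size cells a message of this size occupies."""
    return max(1, math.ceil(message_bytes / RELAY_PAYLOAD))

def wire_overhead(message_bytes: int) -> float:
    """Bytes on the wire divided by useful bytes."""
    return cells_needed(message_bytes) * CELL_SIZE / message_bytes

print(cells_needed(60))             # a 60-byte IRC line costs one full cell
print(round(wire_overhead(60), 1))  # ~8.5x blow-up for short lines
```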
We have been considering one day adding two classes of cells -- maybe a 64
byte cell and a 1024 byte cell. This would allow less overhead for
interactive streams while still allowing good throughput for bulk streams.
But since we want to do a lot of work on quality-of-service and better
queuing approaches first, you shouldn't expect this change anytime soon
(if ever). However if you are keen, there are a couple of
<a href="<page getinvolved/volunteer>#Research">
research ideas</a> that may involve changing the cell size.


    <a id="OutboundConnections"></a>
    <h3><a class="anchor" href="#OutboundConnections">Why does netstat show
    these outbound connections?</a></h3>
    Because that's how Tor works. It holds open a handful of connections
    so there will be one available when you need one.


    <a id="PowerfulBlockers"></a>
    <h3><a class="anchor" href="#PowerfulBlockers">What about powerful blocking
    mechanisms?</a></h3>
 An adversary with a great deal of manpower and money, and severe
 real-world penalties to discourage people from trying to evade detection,
 is a difficult test for an anonymity and anti-censorship system.
The original Tor design was easy to block if the attacker controls Alice's
connection to the Tor network &mdash; by blocking the directory authorities, by
blocking all the relay IP addresses in the directory, or by filtering based
on the fingerprint of the Tor TLS handshake. After seeing these attacks and
others first-hand, more effort was put into researching new circumvention
techniques. Pluggable transports are protocols designed to allow users behind
government firewalls to access the Tor network.
We've made quite a bit of progress on this problem lately. You can read more
details on the <a href="<page docs/pluggable-transports>">
pluggable transports page</a>. You may also be interested in
<a href="https://www.youtube.com/watch?v=GwMr8Xl7JMQ">Roger and Jake's talk at
28C3</a>, or <a href="https://www.youtube.com/watch?v=JZg1nqs793M">Runa's
talk at 44con</a>.


    <a id="RemotePhysicalDeviceFingerprinting"></a>
    <h3><a class="anchor" href="#RemotePhysicalDeviceFingerprinting">Does Tor
    resist "remote physical device fingerprinting"?</a></h3>
 Yes, we resist all of these attacks as far as we know.
These attacks come from examining characteristics of the IP headers or TCP
headers and looking for information leaks based on individual hardware
signatures. One example is the
<a href="http://www.caida.org/outreach/papers/2005/fingerprinting/">
Oakland 2005 paper</a> that lets you learn if two packet streams originated
from the same hardware, but only if you can see the original TCP timestamps.
Tor transports TCP streams, not IP packets, so we end up automatically
scrubbing a lot of the potential information leaks. Because Tor relays use
their own (new) IP and TCP headers at each hop, this information isn't
relayed from hop to hop. Of course, this also means that we're limited in
the protocols we can transport (only correctly-formed TCP, not all IP like
ZKS's Freedom network could) -- but maybe that's a good thing at this stage.


    <a id="IsTorLikeAVPN"></a>
    <h3><a class="anchor" href="#IsTorLikeAVPN">Is Tor like a VPN?</a></h3>

    <b>Do not use a VPN as an <a href="http://www.nbcnews.com/news/investigations/war-anonymous-british-spies-attacked-hackers-snowden-docs-show-n21361">anonymity solution</a>.</b>
    If you're looking for a trusted entry into the Tor network, or if you want
    to obscure the fact that you're using Tor, <a
    href="https://www.torproject.org/docs/bridges#RunningABridge">setting up
    a private server as a bridge</a> works quite well.

    VPNs encrypt the traffic between the user and the VPN provider,
    and they can act as a proxy between a user and an online destination.
    However, VPNs have a single point of failure: the VPN provider.
    A technically proficient attacker or a number of employees could
    retrieve the full identity information associated with a VPN user.
    It is also possible to use coercion or other means to convince a
    VPN provider to reveal their users' identities. Identities can be
    discovered by following a money trail (using Bitcoin does not solve
    this problem because Bitcoin is not anonymous), or by persuading the
    VPN provider to hand over logs. Even
    if a VPN provider says they don't keep logs, users have to take their
    word for it &mdash; and trust that the VPN provider won't buckle to outside
    pressures that might want them to start keeping logs.

    When you use a VPN, websites can still build up a persistent profile of
    your usage over time. Even though sites you visit won't automatically
    get your originating IP address, they still know how to profile you
    based on your browsing history.

    When you use Tor, the IP address you appear to come from changes at least
    every 10 minutes, and often more frequently than that. This makes it
    extremely difficult for websites to create any sort of persistent profile of Tor
    users (assuming you did not <a
    href="<page download/download>#warning">identify
    yourself in other ways</a>). No one Tor relay can know enough
    information to compromise any Tor user because of Tor's <a
    href="<page about/overview>#thesolution">encrypted
    three-hop circuit</a> design.


    <a id="Proxychains"></a>
    <h3><a class="anchor" href="#Proxychains">Aren't 10 proxies
    (proxychains) better than Tor with only 3 hops?</a></h3>

    Proxychains is a program that sends your traffic through a series of
    open web proxies that you supply before sending it on to your final
    destination. <a href="#KeyManagement">Unlike Tor</a>, proxychains
    does not encrypt the connections between each proxy server. An open proxy
    that wanted to monitor your connection could see all the other proxy
    servers you wanted to use between itself and your final destination,
    as well as the IP address that proxy hop received traffic from.
    Because the Tor protocol requires encrypted relay-to-relay connections, not
    even a misbehaving relay can see the entire path of any Tor user.
    While Tor relays are run by volunteers and checked periodically for
    suspicious behavior, many open proxies that can be found with a search
    engine are compromised machines, misconfigured private proxies
    not intended for public use, or honeypots set up to exploit users.


<a id="AttacksOnOnionRouting"></a>
    <h3><a class="anchor" href="#AttacksOnOnionRouting">What attacks remain
    against onion routing?</a></h3>
As mentioned above, it is possible for an observer who can view both you and
either the destination website or your Tor exit node to correlate timings of
your traffic as it enters the Tor network and also as it exits. Tor does not
defend against such a threat model.
In a more limited sense, note that if a censor or law enforcement agency has
the ability to obtain specific observation of parts of the network, it is
possible for them to verify a suspicion that you talk regularly to your friend
by observing traffic at both ends and correlating the timing of only that
traffic. Again, this is only useful to verify that parties already suspected
of communicating with one another are doing so. In most countries, the
suspicion required to obtain a warrant already carries more weight than
timing correlation would provide.
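As an illustration of how simple such timing correlation can be, here is a sketch that bins packet timestamps from two vantage points and compares their volume patterns. The traces are made up and this is a simplified model, not a real attack tool:

```python
# Simplified model of traffic confirmation: bin packet timestamps seen at
# two vantage points and compare volume patterns. The traces are made up;
# this is an illustration, not a real attack tool.
from collections import Counter

def bin_volumes(timestamps, bin_size=1.0, bins=10):
    """Packets per time bin."""
    counts = Counter(int(t // bin_size) for t in timestamps)
    return [counts.get(i, 0) for i in range(bins)]

def correlation(xs, ys):
    """Pearson correlation of two equal-length volume vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The exit-side trace mirrors the entry side, shifted by network latency.
entry_times = [0.1, 0.2, 0.3, 2.1, 2.2, 5.0, 5.1, 5.2, 5.3]
exit_times = [t + 0.4 for t in entry_times]
print(round(correlation(bin_volumes(entry_times), bin_volumes(exit_times)), 2))
# 1.0: the volume patterns line up, confirming the suspected link
```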
Furthermore, since Tor reuses circuits for multiple TCP connections, it is
possible to associate non-anonymous and anonymous traffic at a given exit
node, so be careful about what applications you run concurrently over Tor.
Perhaps even run separate Tor clients for these applications.


    <a id="LearnMoreAboutAnonymity"></a>
    <h3><a class="anchor" href="#LearnMoreAboutAnonymity">Where can I
    learn more about anonymity?</a></h3>

    <a href="http://freehaven.net/anonbib/topic.html#Anonymous_20communication">Read these papers</a> (especially the ones in boxes) to get up to speed on anonymous communication systems.


    <a id="AlternateDesigns"></a>
    <h2><a class="anchor">Alternate designs:</a></h2>

    <a id="EverybodyARelay"></a>
    <h3><a class="anchor" href="#EverybodyARelay">You should make every
Tor user be a relay.</a></h3>

    Requiring every Tor user to be a relay would help with scaling the
    network to handle all our users, and <a
    href="#BetterAnonymity">running a Tor
    relay may help your anonymity</a>. However, many Tor users cannot be
    relays &mdash; for example, some Tor clients operate from behind
    firewalls, connect via modem, or otherwise aren't in a position
where they
    can relay traffic. Providing service to these clients is a critical
    part of providing effective anonymity for everyone, since many Tor users
    are subject to these or similar constraints and including these clients
    increases the size of the anonymity set.

    That said, we do want to encourage Tor users to run relays, so what we
    really want to do is simplify the process of setting up and maintaining
    a relay. We've made a lot of progress with easy configuration in the past
    few years: Vidalia has an easy relay configuration interface, and supports
    uPnP too. Tor is good at automatically detecting whether it's reachable and
    how much bandwidth it can offer.

    There are five steps we need to address before we can do this:

    First, we need to make Tor stable as a relay on all common
    operating systems. The main remaining platform is Windows,
    and we're mostly there. See Section 4.1 of our development roadmap.

    Second, we still need to get better at automatically estimating
    the right amount of bandwidth to allow. See item #7 on the
    <a href="<page getinvolved/volunteer>#Research">research section of the
    volunteer page</a>: "Tor doesn't work very well when relays
    have asymmetric bandwidth (e.g. cable or DSL)". It might be that <a
    href="<page docs/faq>#TransportIPnotTCP">switching
    to UDP transport</a> is the simplest answer here &mdash; which alas
    not a very simple answer at all.

    Third, we need to work on scalability, both of the network (how to
    stop requiring that all Tor relays be able to connect to all Tor
    relays) and of the directory (how to stop requiring that all Tor
    users know about all Tor relays). Changes like this can have large
    impact on potential and actual anonymity. See Section 5 of the <a
    href="<svnprojects>design-paper/challenges.pdf">Challenges</a> paper
    for details. Again, UDP transport would help here.

    Fourth, we need to better understand the risks from
    letting the attacker send traffic through your relay while
    you're also initiating your own anonymized traffic. <a
    href="http://freehaven.net/anonbib/#back01">Three</a>
    <a href="http://freehaven.net/anonbib/#torta05">research</a> papers
    describe ways to identify the relays in a circuit by running traffic
    through candidate relays and looking for dips in the traffic while the
    circuit is active. These clogging attacks are not that scary in the Tor
    context so long as relays are never clients too. But if we're trying to
    encourage more clients to turn on relay functionality too (whether as
    <a href="<page docs/bridges>">bridge relays</a> or as normal relays), then
    we need to understand this threat better and learn how to mitigate it.

    Fifth, we might need some sort of incentive scheme to encourage people
    to relay traffic for others, and/or to become exit nodes. Here are our
    <a href="<blog>two-incentive-designs-tor">current
    thoughts on Tor incentives</a>.

    Please help on all of these!


<a id="TransportIPnotTCP"></a>
<h3><a class="anchor" href="#TransportIPnotTCP">You should transport all
IP packets, not just TCP packets.</a></h3>

This would be handy, because it would make Tor better able to handle
new protocols like VoIP, it could solve the whole need to socksify
applications, and it would solve the fact that exit relays need to
allocate a lot of file descriptors to hold open all the exit streams.

We're heading in this direction: see <a
href="https://trac.torproject.org/projects/tor/ticket/1855">this trac
ticket</a> for directions we should investigate. Some of the hard
problems are:

<li>IP packets reveal OS characteristics. We would still need to do
IP-level packet normalization, to stop things like TCP fingerprinting
attacks. Given the diversity and complexity of TCP stacks, along with
fingerprinting attacks, it looks like our best bet is shipping our
own user-space TCP stack.
<li>Application-level streams still need scrubbing. We will still need
user-side applications like Torbutton. So it won't become just a matter
of capturing packets and anonymizing them at the IP layer.
<li>Certain protocols will still leak information. For example, we must
rewrite DNS requests so they are delivered to an unlinkable DNS server
rather than the DNS server at a user's ISP; thus, we must understand
the protocols we are transporting.
<li>Choosing a datagram transport mechanism is hard: DTLS
(datagram TLS) basically has no users, and IPsec sure is big. Once we've
picked a transport mechanism, we need to design a new end-to-end Tor
protocol for avoiding tagging attacks and other potential anonymity and
integrity issues now that we allow drops, resends, et cetera.
<li>Exit policies for arbitrary IP packets mean building a secure
IDS. Our node operators tell us that exit policies are one of the main
reasons they're willing to run Tor. Adding an Intrusion Detection System
to handle exit policies would increase the security complexity of Tor,
and would likely not work anyway, as evidenced by the entire field of IDS
and counter-IDS papers. Many potential abuse issues are resolved by the
fact that Tor only transports valid TCP streams (as opposed to arbitrary
IP including malformed packets and IP floods), so exit policies become
even <i>more</i> important as we become able to transport IP packets. We
also need to compactly describe exit policies in the Tor directory,
so clients can predict which nodes will allow their packets to exit &mdash;
and clients need to predict all the packets they will want to send in
a session before picking their exit node!
<li>The Tor-internal name spaces would need to be redesigned. We support
hidden service ".onion" addresses by intercepting the addresses when
they are passed to the Tor client. Doing so at the IP level will require
a more complex interface between Tor and the local DNS resolver.


<a id="HideExits"></a>
<h3><a class="anchor" href="#HideExits">You should hide the list of Tor
relays, so people can't block the exits.</a></h3>

There are a few reasons we don't:

<li>We can't help but make the information available, since Tor clients
need to use it to pick their paths. So if the "blockers" want it, they
can get it anyway. Further, even if we didn't tell clients about the
list of relays directly, somebody could still make a lot of connections
through Tor to a test site and build a list of the addresses they see.

<li>If people want to block us, we believe that they should be allowed to
do so.  Obviously, we would prefer for everybody to allow Tor users to
connect to them, but people have the right to decide who their services
should allow connections from, and if they want to block anonymous users,
they can.

<li>Being blockable also has tactical advantages: it may be a persuasive
response to website maintainers who feel threatened by Tor. Giving them
the option may inspire them to <a href="<page docs/faq-abuse>#Bans">stop
and think</a> about whether they really want to eliminate private access
to their system, and if not, what other options they might have. The
time they might otherwise have spent blocking Tor, they may instead
spend rethinking their overall approach to privacy and anonymity.


<a id="ChoosePathLength"></a>
<h3><a class="anchor" href="#ChoosePathLength">You should let people choose
their path length.</a></h3>
 Right now the path length is hard-coded at 3 plus the number of nodes in
 your path that are sensitive. That is, in normal cases it's 3, but for
 example if you're accessing a hidden service or a ".exit" address it could be 4.
 We don't want to encourage people to use paths longer than this -- it
 increases load on the network without (as far as we can tell) providing
 any more security. Remember that the best way to attack Tor is to attack
 the endpoints and ignore the middle of the path.
 And we don't want to encourage people to use paths of length 1 either.
 Currently there is no reason to suspect that investigating a single
 relay will yield user-destination pairs, but if many people are using
 only a single hop, we make it more likely that attackers will seize or
 break into relays in hopes of tracing users.
 Now, there is a good argument for making the number of hops in a path
 unpredictable. For example, somebody who happens to control the last
 two hops in your path still doesn't know who you are, but they know
 for sure which entry node you used. Choosing path length from, say,
 a geometric distribution will turn this into a statistical attack,
 which seems to be an improvement. On the other hand, a longer path
 length is bad for usability. We're not sure of the right trade-offs
 here. Please write a research paper that tells us what to do.
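For illustration, sampling the path length as 3 mandatory hops plus a geometric number of extra hops might look like the following sketch (the stopping probability is an arbitrary choice, not a recommendation):

```python
# Sketch of drawing an unpredictable path length: 3 mandatory hops plus a
# geometric number of extras. The stopping probability p is arbitrary.
import random

def sample_path_length(rng, p=0.5, min_len=3):
    """min_len plus Geometric(p)-distributed extra hops."""
    extra = 0
    while rng.random() > p:
        extra += 1
    return min_len + extra

rng = random.Random(42)
lengths = [sample_path_length(rng) for _ in range(10000)]
print(min(lengths))                           # never below 3
print(round(sum(lengths) / len(lengths), 1))  # mean is about 3 + (1-p)/p = 4.0
```

With such a scheme, an observer of the last two hops can no longer be certain which position in the path they occupy, at the cost of longer average paths.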


<a id="SplitEachConnection"></a>
    <h3><a class="anchor" href="#SplitEachConnection">You should split
    each connection over many paths.</a></h3>

 We don't currently think this is a good idea. You see, the attacks we're
 worried about are at the endpoints: the adversary watches Alice (or the
 first hop in the path) and Bob (or the last hop in the path) and learns
 that they are communicating.
If we make the assumption that timing attacks work well on even a few packets
end-to-end, then having *more* possible ways for the adversary to observe the
connection seems to hurt anonymity, not help it.
Now, it's possible that we could make ourselves more resistant to end-to-end
attacks with a little bit of padding and by making each circuit send and
receive a fixed number of cells. This approach is more well-understood in
the context of high-latency systems. See e.g.
<a href="http://freehaven.net/anonbib/#pet05-serjantov">
Message Splitting Against the Partial Adversary by Andrei Serjantov and
Steven J. Murdoch</a>.
But since we don't currently understand what network and padding
parameters, if any, could provide increased end-to-end security, our
current strategy is to minimize the number of places that the adversary
could possibly see.


    <a id="MigrateApplicationStreamsAcrossCircuits"></a>
    <h3><a class="anchor" href="#MigrateApplicationStreamsAcrossCircuits">You
    should migrate application streams across circuits.</a></h3>
    <p>This would be great for two reasons. First, if a circuit breaks, we
    would be able to shift its active streams onto a new circuit, so they
    don't have to break. Second, it is conceivable that we could get
    increased security against certain attacks by migrating streams
    periodically, since leaving a stream on a given circuit for many hours
    might make it more vulnerable to certain adversaries.</p>

    <p>There are two problems though. First, Tor would need a much more
    bulky protocol. Right now each end of the Tor circuit just sends the
    cells, and lets TCP provide the in-order guaranteed delivery. If we
    can move streams across circuits, though, we would need to add queues
    at each end of the circuit, add sequence numbers so we can send and
    receive acknowledgements for cells, and so forth. These changes would
    increase the complexity of the Tor protocol considerably. Which leads
    to the second problem: if the exit node goes away, there's nothing we
    can do to save the TCP connection. Circuits are typically three hops
    long, so in about a third of the cases we just lose.</p>

    <p>Thus our current answer is that since we can only improve things by
    at best 2/3, it's not worth the added code and complexity. If somebody
    writes a protocol specification for it and it turns out to be pretty
    simple, we'd love to add it.</p>

    <p>But there are still some approaches we can take to improve the
    reliability of streams. The main approach we have now is to specify
    that streams using certain application ports prefer circuits to be
    made up of stable nodes. These ports are specified in the "LongLivedPorts"
    <a href="#torrc">torrc</a> option, and they default to a list of ports
    commonly used for long-lived connections (such as SSH and IRC).</p>
    <p>The definition of "stable" is an open research question, since we
    can only guess future stability based on past performance. Right now
    we judge that a node is stable if it advertises that it has been up
    for more than a day. Down the road we plan to refine this so it takes into
    account the average stability of the other nodes in the Tor network.</p>
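The current stability rule described above can be sketched as follows; the relay records and field names are invented for illustration (real Tor uses flags from the directory consensus):

```python
# Sketch of the stability heuristic described above: a relay is treated
# as "stable" when its advertised uptime exceeds one day. The relay
# records and field names here are invented for illustration.
DAY = 24 * 60 * 60

def stable_relays(relays):
    """Nicknames of relays whose uptime exceeds one day."""
    return [r["nick"] for r in relays if r["uptime"] > DAY]

relays = [
    {"nick": "relayA", "uptime": 3 * DAY},      # up for three days
    {"nick": "relayB", "uptime": 2 * 60 * 60},  # up for two hours
    {"nick": "relayC", "uptime": 10 * DAY},     # up for ten days
]

print(stable_relays(relays))  # ['relayA', 'relayC']
```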


    <a id="LetTheNetworkPickThePath"></a>
    <h3><a class="anchor" href="#LetTheNetworkPickThePath">You should
    let the network pick the path, not the client</a></h3>

    <p>No. You cannot trust the network to pick the path, for relays could
    collude and route you through their colluding friends. This would give
    an adversary the ability to watch all of your traffic end to end.</p>


    <a id="UnallocatedNetBlocks"></a>
    <h3><a class="anchor" href="#UnallocatedNetBlocks">Your default exit
    policy should block unallocated net blocks too.</a></h3>

 No, it shouldn't. The default exit policy blocks certain private net blocks
 because they might actively be in use by Tor relays, and we
 don't want to cause any surprises by bridging to internal networks. Some
 overzealous firewall configs suggest that you also block all the parts of
 the Internet that IANA has not currently allocated. First, this turns into
 a problem for them when those addresses *are* allocated. Second, why should
 we default-reject something that might one day be useful?
Tor's default exit policy is chosen to be flexible and useful in the future:
we allow everything except the specific addresses and ports that we
anticipate will lead to problems.


    <a id="BlockWebsites"></a>
    <h3><a class="anchor" href="#BlockWebsites">Exit policies should be
    able to block websites, not just IP addresses.</a></h3>

 It would be nice to let relay operators say things like "reject
 www.slashdot.org" in their exit policies, rather than requiring
 them to learn all the IP address space that could be covered by the site
 (and then also blocking other sites at those IP addresses).
There are two problems, though. First, users could still get around these
blocks. For example, they could request the IP address rather than the
hostname when they exit from the Tor network. This means operators would
still need to learn all the IP addresses for the destinations in question.
The second problem is that it would allow remote attackers to censor
arbitrary sites. For example, if a Tor operator blocks www1.slashdot.org,
and then some attacker poisons the Tor relay's DNS or otherwise changes
that hostname to resolve to the IP address for a major news site, then
suddenly that Tor relay is blocking the news site.


    <a id="BlockContent"></a>
    <h3><a class="anchor" href="#BlockContent">You should change Tor to
    prevent users from posting certain content.</a></h3>

    <p>Tor only transports data; it does not inspect the contents of the
    connections which are sent over it. In general it's a very hard problem
    for a computer to determine what is objectionable content with good true
    positive/false positive rates and we are not interested in addressing
    this problem.
Further, and more importantly, which definition of "certain content" could we
use? Every choice would lead to a quagmire of conflicting personal morals. The
only solution is to have no opinion.</p>


    <a id="SendPadding"></a>
    <h3><a class="anchor" href="#SendPadding">You should send padding so it's
    more secure.</a></h3>

    Like all anonymous communication networks that are fast enough for web
    browsing, Tor is vulnerable to statistical "traffic confirmation"
    attacks, where the adversary watches traffic at both ends of a circuit
    and confirms his guess that they're communicating. It would be really
    nice if we could use cover traffic to confuse this attack. But there
    are three problems here:

    Cover traffic is really expensive. And *every* user needs to be doing
    it. This adds up to a lot of extra bandwidth cost for our volunteer
    operators, and they're already pushed to the limit.
    You'd need to always be sending traffic, meaning you'd need to always
    be online. Otherwise, you'd need to be sending end-to-end cover
    traffic -- not just to the first hop, but all the way to your final
    destination -- to prevent the adversary from correlating presence of
    traffic at the destination to times when you're online. What does it
    mean to send cover traffic to -- and from -- a web server? That is not
    supported in most protocols.
    Even if you *could* send full end-to-end padding between all users and
    all destinations all the time, you're *still* vulnerable to active
    attacks that block the padding for a short time at one end and look for
    patterns later in the path.

    In short, for a system like Tor that aims to be fast, we don't see any
    use for padding, and it would definitely be a serious usability problem.
    We hope that one day somebody will prove us wrong, but we are not
    holding our breath.


    <a id="Steganography"></a>
    <h3><a class="anchor" href="#Steganography">You should use steganography
    to hide Tor traffic.</a></h3>

    Many people suggest that we should use steganography to make it hard
    to notice Tor connections on the Internet. There are a few problems
    with this idea though:

    First, in the current network topology, the list of Tor relays <a
    href="#HideExits">is public</a> and can be accessed by attackers.
    An attacker who wants to detect or block anonymous users could
    always just notice <b>any connection</b> to or from a Tor relay's
    IP address.


    <a id="Abuse"></a>
    <h2><a class="anchor">Abuse:</a></h2>

    <a id="Criminals"></a>
    <h3><a class="anchor" href="#Criminals">Doesn't Tor enable criminals
to do bad things?</a></h3>

    For the answer to this question and others, please see our <a
    href="<page docs/faq-abuse>">Tor Abuse FAQ</a>.


    <a id="RespondISP"></a>
    <h3><a class="anchor" href="#RespondISP">How do I respond to my ISP
about my exit relay?</a></h3>

    A collection of templates for successfully responding to ISPs is
    available on the Tor wiki.


   <a id="HelpPoliceOrLawyers"></a>
   <h3><a class="anchor" href="#HelpPoliceOrLawyers">I have questions about
   a Tor IP address for a legal case.</a></h3>

   Please read the <a
   href="https://www.torproject.org/eff/tor-legal-faq">legal FAQ written
   by EFF lawyers</a>. There's a growing legal
   directory of people who may be able to help you.

   If you need to check if a certain IP address was acting as a Tor exit
   node at a certain date and time, you can use the <a
   href="https://exonerator.torproject.org/">ExoneraTor tool</a> to query the
   historic Tor relay lists and get an answer.


  <!-- END MAINCOL -->
  <div id="sidecol">
#include "side.wmi"
#include "info.wmi"
  <!-- END SIDECOL -->
<!-- END CONTENT -->
#include <foot.wmi>