Problems using an OpenPGP smartcard for SSH with gpg-agent

I have been using an OpenPGP smartcard for encryption, signing and authentication for over a year now and I’ve found it to be really useful as a root of trust. I have all my systems locked down to only allow public key authentication as a two-factor security mechanism. While the Free Software Foundation Europe has a good guide about setting up an OpenPGP smartcard using subkeys and offline backups, it’s unfortunately still not very straightforward to get the card set up.

Recently the EEPROM on my first card died and I had to replace the card. However, after setting up the new card with the respective subkeys, I consistently encountered an error from gpg-agent where it was looking for the previous card to be inserted during SSH authentication:

Please insert the card with serial number xxxxxxxxxxxxxxxxx

Please remove the current card and insert the one with serial number xxxxxxxxxxxxxxxxx

Via the magic of strace I eventually determined that gpg-agent was attempting to load the key and card data from a file in ~/.gnupg/private-keys-v1.d/ which referenced the original smartcard. Resolving this issue was as simple as removing the key file in that directory, logging out and back into the user account, and finally running the following commands with the card inserted to reload the desired key into the agent:

$ gpg --card-status
$ gpgkey2ssh [KEYID]
$ ssh-add -l

At this stage ssh-add -l should list your correct card serial number and you will again be able to authenticate over SSH with the card.

Coinbase – Owning a Bitcoin Exchange Bug Bounty Program

When I first started analyzing the Coinbase website I had a quick look over the site layout and the functionality/attack surface available for potential exploitation. I quickly determined it was running Ruby on Rails based on the encoding of the “_coinbase_session” cookie. This was supported by the fact that Coinbase’s founder, Brian Armstrong, had a lot of Ruby snippets on his Github Gist and some more Ruby questions on his Stack Overflow account.

1. Reflected XSS.

Previously I have had some success finding XSS vulnerabilities in Flash .swf files on some sites. I quickly saw references to a file, https://coinbase.com/flash/ZeroClipboard.swf, in the main CSS file. I recalled reading an advisory about this .swf file before, but on first tests it did not appear to be exploitable. This .swf file is typically bundled with ZeroClipboard10.swf, which in this case was also uploaded but not referenced. Bingo! We have a reflected XSS vulnerability on Coinbase via a known vulnerability in third-party code (CVE-2013-1808).

Flash-based XSS on Coinbase.com

I reported the vulnerability to Coinbase, but @Ciaranmak, who referred me to the Coinbase bug bounty program, had reported it independently a few hours before. The Coinbase team still sent me 1 BTC for it.

2. Persistent XSS on Merchant Checkout Pages.

After reporting the first XSS vulnerability, I continued searching their website. Merchants have the ability to set up checkout pages to which they can direct users to easily receive bitcoin payments. At first, it looked like they were checking that the callback URLs were valid and began with http:// or https://, and they were blocking my basic attempts to redirect users to javascript: addresses.
Checking for XSS in redirect URLs

XSS blocked by filters

Unfortunately for Coinbase, it looked like they had made a mistake with the regular expression used to sanitize this input and were only checking that http:// or https:// occurred somewhere in the provided URL. I simply had to put a comment containing “http://” after my malicious javascript code and it was stored in the database.
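The mistake can be illustrated with a pair of hypothetical regexes (mine, not Coinbase’s actual code): searching for the scheme anywhere in the string versus anchoring the check to the start of the URL.

```python
import re

# Flawed check: accepts the scheme appearing *anywhere* in the string.
loose = re.compile(r"https?://")

# Safer check: the URL must *start* with the scheme.
strict = re.compile(r"\Ahttps?://")

# A javascript: URL slips past the loose filter by carrying the
# scheme in a trailing comment.
payload = "javascript:alert(document.cookie)//http://"

assert loose.search(payload) is not None    # accepted by the flawed check
assert strict.match(payload) is None        # rejected once anchored
assert strict.match("https://example.com/checkout") is not None
```

Anchoring the pattern (or better, using a real URL parser) is the usual fix; a plain substring match lets the scheme hide in a comment at the end of the payload.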

Evading the URL filter

Successful filter evasion!

I created a malicious checkout page as a proof-of-concept. It looks perfectly normal, but when a user either completes payment or cancels the checkout process and wants to return to the original page, the malicious javascript is executed in the context of their Coinbase session and could potentially transfer their entire BTC balance to the attacker’s Bitcoin address. Coinbase have taken steps to mitigate the impact of any XSS vulnerabilities by setting their session cookies to be HttpOnly, so they cannot be read with Javascript, preventing an attacker from hijacking the user’s session.

Malicious checkout page

Redirect URL containing malicious javascript

Persistent XSS affects all customers of a merchant

I reported this issue to the Coinbase team and they had it fixed in a couple of hours.

3. Insecure OAuth Application Approval

After giving the site a look over for XSS/injection vulnerabilities I moved on to look at what else on the site was open for exploitation. Coinbase presents a standard REST API for their system which allows developers to interact with user accounts. The API allows users to authenticate to their own account with an API key, or use OAuth2 to grant authorization for a 3rd party app to interact with their Coinbase account. Egor Homakov has posted some interesting attacks against common implementations of the OAuth2 protocol on his blog. One of his posts, “The Most Common OAuth2 Vulnerability” gave me some ideas of what to look for.

It turns out that when a user is directed to Coinbase to authorize an application to access their account, Coinbase generates a token which is submitted along with the request when the user clicks the authorize button. Unfortunately they weren’t confirming that the token was valid for that user. As a result, an attack could be performed where a user is forced to submit an authorization request generated for the attacker in the context of their own account, giving the attacker’s Coinbase OAuth application complete control of their account and transactions.
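A minimal sketch of the flaw, with hypothetical names (Coinbase’s real implementation is Rails/Doorkeeper, not this code): the server verifies that the submitted token is one it issued, but not that it was issued to the user making the request.

```python
# Tokens the server has issued, mapped to the session they belong to.
issued_tokens = {"tok-attacker": "attacker", "tok-victim": "victim"}

def authorize_flawed(session_user: str, token: str) -> bool:
    # BUG: any known token is accepted, regardless of who it was issued to.
    return token in issued_tokens

def authorize_fixed(session_user: str, token: str) -> bool:
    # FIX: the token must be bound to the requesting user's session.
    return issued_tokens.get(token) == session_user

# The victim is tricked into submitting the attacker's token...
assert authorize_flawed("victim", "tok-attacker")       # ...and it works.
assert not authorize_fixed("victim", "tok-attacker")    # Blocked when bound.
```

Once the flawed check passes, the authorization is granted in the victim’s session, so the attacker’s application ends up approved on the victim’s account.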

Here is a video of the proof-of-concept I created to demonstrate just how serious and transparent this attack was. A logged-in Coinbase user who viewed a malicious page would give complete API access to their account to the attacker. Not only could the attacker withdraw their entire Bitcoin balance; if the victim had a bank account linked to their Coinbase account, the attacker could use their bank balance to buy more bitcoins and steal those too, with the user completely oblivious!

The source code for this proof of concept is available on my GitHub account.

Insecure OAuth Redirect URI in the Coinbase Mobile App

After finding that OAuth bug I was interested in finding some more information about their backend systems and components to try and exploit them some more. I googled some of the error messages returned by the OAuth2 implementation, such as “Client authentication failed due to unknown client, no client authentication included, or unsupported authentication method”, and it was clear they were using the Ruby-based Doorkeeper. This was confirmed, as Coinbase co-founder Fred Ehrsam had created an issue on the repo.

Helpfully, we now have some of the source code deployed by Coinbase. I had a look at the default implementation of redirect URL validation to determine if there were any vulnerabilities, but the default implementation checks for complete matches. This rules out many of the third-party open redirect OAuth security issues affecting Facebook which have been disclosed recently.

Coinbase OAuth authorization is mainly used for their own Android app. Their app is open source and the code is available on Github. Rather than using the standard redirect of “urn:ietf:wg:oauth:2.0:oob”, the developers had instead set up their app using a redirect URL of “http://example.com/coinbase-redirect” and were catching requests with that URL prefix in the app before they were sent. The vulnerability is that if a Coinbase mobile app user viewed the OAuth authorization form in their browser, they were automatically redirected and would broadcast their private OAuth token across the network in plaintext to an untrusted endpoint, “example.com”, run by the IANA. Check out the fix commit to see where their mistake was.

This might not seem very exploitable, but imagine that you are in a coffee shop and would like to pay with Bitcoin. Conveniently, the coffee shop has free WiFi. At the point of sale you are asked by the friendly cashier to scan a QR code containing the Bitcoin payment info. The QR code actually goes to the Coinbase app authorization page, but as you have already authorized the Coinbase Android app it just forwards you right on to the redirect URL at “http://example.com/coinbase-redirect”. It would be trivial for that wireless network to intercept the request to example.com containing the OAuth token and forward you to the actual payment page, where you pay for your coffee and leave satisfied. As the victim you would have almost no way of knowing a third party now has your access tokens and can withdraw your balance at any time!

Here’s a video demonstrating the insecure redirect. The code is also on my Github.

Chaining Vulnerabilities Together like a Real Attacker

Attackers won’t just exploit one weakness in isolation, but will instead chain them together to effect a more complete attack on your sites, systems and users. Coinbase had an issue a few months back where merchant email addresses were being disclosed in a hidden field on their checkout pages, which were then indexed by Google. Coinbase have a post on their blog about this. An attacker subsequently scraped these pages and used the email addresses as targets for a phishing email.

To chain together the disclosed vulnerabilities, an attacker could have first sent payment requests to all the merchant emails disclosed on the checkout pages. At the time, Coinbase was using a whitelist for the payment message which allowed users to include links that were displayed when a user viewed the transaction request in their Coinbase account. These links could point to the Flash-based reflected XSS carrying a javascript payload that updates the merchant’s redirect URLs to contain a second javascript payload, which would in turn try to update the redirect URLs of any further merchants exposed to it.

The XSS payloads could have been written to act in a self-replicating manner, like the Samy worm, attempting to compromise the accounts of any customers or merchants who make payments to a compromised account. With merchants like OKCupid and Reddit using Coinbase for their bitcoin payments, it is easy to see how a chained attack like this could affect a lot of Coinbase customers and be very profitable for the attackers.

Conclusion

I had a lot of fun in the few hours I spent looking at Coinbase while procrastinating from exam study. They have done a lot of things right with regard to CSP, HttpOnly session cookies and two-step authentication, but are let down by the integration of third-party components such as Doorkeeper and ZeroClipboard. It is difficult to get any reasonably complex site completely secure, and even sites doing more than $15 million USD per month in Bitcoin transaction volume can have a number of critical issues which a blackbox attacker could discover. I would recommend that all web developers check out the guides on the OWASP website, which cover all the key areas where security problems can occur in web apps.

Quite a few people are listed on the Coinbase White Hat page. I’d be very interested in hearing from some of you to get an idea of what kind of vulnerabilities I missed, to try and improve my testing methodology.

The Coinbase team have been very responsive and worked over the weekend to fix the OAuth account takeover vulnerability, having it patched a few hours after I disclosed it to them. I’d just like to thank them again for their very nice rewards and for running a great service which helps to develop the Bitcoin economy.

Trawling Tor Hidden Services – Mapping the DHT

Update 2013-08-15: I have been really enthused by the reactions I received to this blog post. It has been referenced from Forbes, Gawker and the Daily Mail, and a number of people have been in contact about tracking the DHT for themselves. I would recommend the IEEE S&P paper, “Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization”, which presents the same issues allowing the DHT to be trawled. They also present some very serious attacks allowing an adversary to locate hidden services with practical resources. It is well worth checking out if you have an interest in Tor.

Tor hidden services have got more media attention lately as a result of some notorious sites like the Silk Road marketplace, an online black market. On a basic level, Tor hidden services allow you to make TCP services available while keeping your server’s physical location hidden via the Tor anonymity network.

TL;DR

  • Tor hidden service directories (HSDirs) receive a subset of hidden service look-ups from users, allowing them to map the relative popularity/usage of hidden services.
  • An adversary with minimal resources can carry out a complete DoS of a Tor hidden service by running malicious hidden service directories and positioning them at a particular point in the router list.
  • Many look-ups for Tor hidden services go to the incorrect hidden service directories, which negatively affects the initial time to access a site.
  • Some hidden services are very popular; sites such as the Silk Road marketplace receive more than 60,000 unique user sessions a day.

Introduction

For users to access a hidden service they must first retrieve a hidden service descriptor. This is a short signed message created by the hidden service approximately every hour, containing a list of introduction nodes and some other identifying data such as the descriptor id (desc id). The desc id is based on a hash of some hidden service information and it changes every 24 hours. This calculation is outlined later in the post. The hidden service then publishes its updated descriptor to a set of 6 responsible hidden service directories (HSDirs) every hour. These responsible HSDirs are regular nodes on the Tor network which have an up-time longer than 24 hours and which have received the HSDir flag from the directory authorities. The set of responsible HSDirs is based on the position of the current descriptor id in a list of all current HSDirs ordered by their node fingerprint. This is an implementation of a simple DHT (Distributed Hash Table).

A client who would like to retrieve the full HS descriptor will calculate the time-based descriptor id and request it from the responsible HSDirs directly.

Descriptor is published to a set of HSDir's

Once the client has a copy of the hidden service descriptor they can attempt to connect to one of the introduction points and create a complete 7-hop circuit to the hidden service.

This is a very brief explanation, which the Tor Project has outlined much more clearly on their hidden service protocol page, but it should provide enough background to understand the rest of this post.

The Problems

  • Anyone can set up a Tor node (HSDir) and begin logging all hidden service descriptors published to their node. They will also receive all client requests, allowing them to observe the number of look-ups for particular hidden services.
  • The list of responsible HSDirs for a hidden service is based on the calculated descriptor id and an ordered list of HSDir fingerprints. As the descriptor ids are predictable and the node fingerprint is controlled by the operator, an adversary can position themselves to be the responsible directories for a targeted hidden service and subsequently perform a DoS attack by not returning the hidden service descriptor to clients. There is nothing the hidden service can do about this attack; it must rely on the Tor Project directory authorities to remove the malicious HSDirs from the consensus.

Information Gathering from the DHT

Hidden services must publish their descriptors to allow clients to reach them. The descriptor ids are deterministic, but over time any HSDir should have an equal chance of receiving the descriptor for any particular hidden service as part of the DHT. As the descriptor is replicated across 6 HSDirs, any single responsible HSDir should receive up to 1/6 of all client look-ups for that hidden service, providing a good means of estimating hidden service popularity.

This data can be made more accurate by running more Tor HSDirs which have identity digests in the responsible range, to the point where all the responsible HSDirs are controlled by the observer.

There are currently about 1400 nodes with the HSDir flag running. For the past 2 months I have run 4 Tor nodes and logged all hidden service requests they have received. Each hidden service publishes to 6 HSDirs every day, and so will publish to 360, or approximately 25%, of HSDirs in a 2 month period. As I am running 4 nodes, I should statistically have received a copy of the descriptor of every hidden service that was online for those 60 days. While the distribution will not be perfect, my data should contain a representative set of hidden service activity. Here are some stats from those 4 nodes:

  • Received 1.3 million requests for the 3815 unique hidden services my nodes were responsible for.
  • Received 16.03 million requests for descriptors my nodes were not currently responsible for.
  • Received published descriptors for 40,500 unique hidden services.
  • Received requests from clients for 25,600 unique descriptor ids.

Even these basic stats point out a number of issues with the way hidden services currently function. 16.03 million, or 92.5%, of all descriptor requests my nodes received were for hidden services they did not currently have descriptors for. This could be a result of clients having an out-of-date network consensus and not choosing the correct HSDirs as responsible. It could also be a result of an out-of-sync clock causing clients to look for the wrong/old descriptor ids. Either way, a significant amount of time is being wasted on descriptor look-ups, which slows down first access to a hidden service.
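Both back-of-the-envelope figures above follow from quick arithmetic. Note that the coverage estimate is a naive upper bound, since a service’s daily descriptor placements can land on the same HSDirs more than once:

```python
# Coverage estimate: 6 responsible HSDirs per day over 60 days gives ~360
# descriptor placements per service, about 25% of the ~1400 HSDirs, so four
# observer nodes naively sum to full coverage.
hsdirs = 1400
placements = 6 * 60                       # 360 HSDir slots per service
per_node_fraction = placements / hsdirs   # ~0.257 of the DHT per node
naive_total = 4 * per_node_fraction       # ~1.03, i.e. "every service"

# The 92.5% figure: share of requests for descriptors my nodes did not hold.
responsible = 1.30e6
not_responsible = 16.03e6
share = not_responsible / (responsible + not_responsible)  # ~0.925
```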

Some More Stats..

The following table contains data on requests my nodes received for some well-known Tor based marketplaces. There are also requests for related phishing sites; I have confirmed that some users were directed to these phishing pages from links on “The Hidden Wiki” (.onion). The number of requests is what my nodes observed. Descriptor look-ups from clients will be divided at random between the 6 responsible HSDirs, and clients will keep trying the remaining HSDirs until the descriptor is found or all HSDirs have been checked. As a result, the total number of requests received by the following sites per day may be up to 6 times more than the figures below, but they still offer a relative guide to popularity.

Phishing site with a directory listing via forced browsing

Text file containing phished credentials

It seems a lot of people, especially scammers running phishing sites on Tor hidden services, don’t know a lot about web security, which leads to sites like this.

Please, users: if you enter your credentials on a phishing page and it doesn’t log you into the site, don’t try again!

I also observed a large number of requests to the command and control servers of the “Skynet” Tor based botnet which got attention after an AMA with the botnet owner on Reddit. Contact him @skynetbnet on Twitter for more info.

Many of the “Skynet” onion addresses above and other popular addresses are running Bitcoin mining proxies. They generally responded with a basic authentication request for “bitcoin-mining-proxy”. The other services may be Tor based bitcoin mining pools or part of Skynet and/or other botnets. It should be straightforward to find these sites by scanning the service ids from the raw data on Github.

DoS Attacks on Tor Hidden Services

Tor hidden service desc ids are calculated deterministically, and if there is no ‘descriptor cookie’ set in the hidden service’s Tor config, anyone can determine the desc ids for any hidden service at any point in time. This is a requirement of the current hidden service protocol, as clients must calculate the current descriptor id to request hidden service descriptors from the HSDirs. The descriptor ids are calculated as follows:

descriptor-id = H(permanent-id | H(time-period | descriptor-cookie | replica))

The replica is an integer, currently either 0 or 1, which generates two separate descriptor ids, distributing the descriptor to two sets of 3 consecutive nodes in the DHT. The permanent-id is derived from the service’s public key. The hash function is SHA1.

time-period = (current-time + permanent-id-byte * 86400 / 256) / 86400

The time-period changes every 24 hours. The first byte of the permanent-id is added to make sure the hidden services do not all try to update their descriptors at the same time.
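The two formulas above can be sketched in a few lines of Python. This follows my reading of the v2 rendezvous spec: the permanent-id is the first 80 bits (10 bytes) of the SHA1 of the service’s public key, the time-period is packed as a 4-byte big-endian integer and the replica as a single byte. Treat the exact byte layout as an assumption rather than a reference implementation:

```python
import hashlib
import struct

def time_period(now: int, permanent_id: bytes) -> int:
    # The first byte of the permanent-id staggers descriptor rotation
    # so that services do not all update at the same moment.
    return (now + permanent_id[0] * 86400 // 256) // 86400

def descriptor_id(permanent_id: bytes, now: int, replica: int,
                  descriptor_cookie: bytes = b"") -> bytes:
    # secret-id-part = H(time-period | descriptor-cookie | replica)
    secret = hashlib.sha1(
        struct.pack(">I", time_period(now, permanent_id))
        + descriptor_cookie
        + bytes([replica])
    ).digest()
    # descriptor-id = H(permanent-id | secret-id-part)
    return hashlib.sha1(permanent_id + secret).digest()

# The two replica values yield two different positions in the DHT.
pid = bytes(10)  # dummy 80-bit permanent id
assert descriptor_id(pid, 1_370_000_000, 0) != descriptor_id(pid, 1_370_000_000, 1)
```

Anyone who knows a service’s permanent-id can run this for any future time, which is exactly what makes the positioning attack below possible.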

identity-digest = H(server-identity-key)

The identity-digest is the SHA1 hash of the public key generated from the secret_id_key file in Tor’s keys directory. Normally it should never change for a node, as it is used to determine the router’s long-term fingerprint, but the key is completely user controlled.

A HSDir is responsible for a descriptor if it is one of the three HSDirs that follow the calculated desc id in a list of all nodes in the Tor consensus with the HSDir flag, sorted by their identity digest. The HS descriptor is published to two replicas (two sets of 3 HSDirs at different points of the router list) based on the two descriptor ids generated from the ‘0’ or ‘1’ replica value in the descriptor id hash calculation.
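The selection rule can be sketched as a lookup in a sorted ring of identity digests (the function and variable names here are mine, not Tor’s):

```python
import bisect

def responsible_hsdirs(desc_id: bytes, hsdir_digests: list) -> list:
    # The 3 HSDirs whose identity digests immediately follow the
    # descriptor id in the sorted list, wrapping around at the end.
    ring = sorted(hsdir_digests)
    start = bisect.bisect_right(ring, desc_id)
    return [ring[(start + i) % len(ring)] for i in range(3)]

# Toy ring of six one-byte "digests": a desc id of 0x30 falls before 0x40.
ring = [bytes([x]) for x in (0x10, 0x20, 0x40, 0x80, 0xA0, 0xF0)]
assert responsible_hsdirs(b"\x30", ring) == [b"\x40", b"\x80", b"\xa0"]
```

With two replicas, the same lookup runs on each of the two descriptor ids, giving the two sets of 3 consecutive nodes described above.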

I have implemented a script for calculating the descriptor ids of a particular hidden service at an arbitrary time; it is available on my Github account. I have also created a modified version of ‘Shallot‘ which can be used to generate keys with an identity digest in a specified range. There is more usage information on its Github page.

The Attack

The code listed above could be used to generate identity keys and identity digests for an adversary’s HSDir nodes so that they’ll be selected as the 6 HSDirs for a targeted hidden service. These adversary-controlled hidden service directories could simply return no data (a 404 response) to a client requesting the targeted hidden service’s descriptor and in turn prevent them from finding introduction nodes. As there is no other source for this hidden service descriptor, it would be impossible for a user to set up a complete circuit to the hidden service and there would be a complete denial of service until the descriptor id changes.

For an adversary to continue this attack over a longer time-frame, they would need to set up their nodes for the upcoming desc ids of the targeted hidden service more than 24 hours in advance, to make sure they will have received the HSDir flag.

An adversary would need to run 12-18 nodes to keep up a complete, persistent DoS on the targeted hidden service: six nodes would be the “responsible HSDirs” and the other nodes would be running with identity digests in the range of the upcoming desc ids, to gain the HSDir flag after 24 hours of up-time. An adversary can cut the resources needed by running two Tor instances/nodes per IPv4 address or by running the Tor nodes on compromised servers on high, unprivileged ports.

These attacks are quite a real, practical threat against the availability of Tor hidden services. For whatever reason (extortion, censorship etc.), an adversary can perform complete DoS attacks with minimal resources, and there is nothing a hidden service owner can do to mitigate them besides switching to descriptor cookie based authentication or multiple private addresses. The Tor Project can try to deal with these attacks by removing known malicious HSDirs from the network consensus, but I don’t see a straightforward way to identify these malicious nodes.

Unfortunately there are no easy solutions to this problem at the moment. I can foresee adversaries employing these attacks in the wild against popular hidden services.

Conclusions

Tor hidden services were originally implemented as a simple feature on top of the Tor network and unfortunately they haven’t received the attention and love they deserve for such a popular feature. There are discussions under way to re-implement hidden services to allow them to scale more efficiently. There are also people looking at reducing the ability of HSDirs to sit and gather data on onion addresses and look-ups like I have done, by implementing a PIR protocol. A good summary of the work that needs to be done is available in a blog post on the Tor Project’s website. I’d urge any developers with an interest to join the tor-dev mailing list and see if there is something you can contribute! A lot of work is needed.

Anyone interested in learning more about the issues with the current Tor hidden service implementation should check out the presentation “Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization” at the IEEE S&P conference this Monday. The paper by Alex Biryukov, Ivan Pustogarov and Ralf-Philipp Weinmann will probably provide a much more in-depth, formal investigation and I look forward to reading it. We have been researching similar areas so I’m very interested in their approach and results.

All raw data and my modified Tor clients are available from my Github repo. This also contains scripts for calculating hidden service descriptors and generating OR private keys with fingerprints in a particular range. This data includes all descriptor requests and hidden service ids my nodes observed. Please check it out if you are interested in analyzing a random subset of services on the Tor network.

I’d like to thank @mikesligo for having a heated debate in the pub with me about hidden services and getting me interested in how they work. I’d also like to thank @CiaranmaK for reading a draft of this post and pointing out some corrections.

Thank you for reading my first blog post. I’ll have to work on presenting things better as the information in this post is a bit all over the place. Please let me know if you have any questions or feedback in the comments below.

Hello world’ or ‘1’=’1

Welcome to my new website! This is the mandatory, ambitious first post which precedes the later sporadic activity as the enthusiasm gradually dies away. I don’t have a big picture for this site as of yet.

I registered the domain donncha.is primarily to host my email as I transition away from using third-party email providers. Hosted email is fine, but if you’re receiving a “free” service, you’re the product. You’re also vulnerable to the whims of a provider who may block your account and lock you out of your online life for the slightest reason. I have decided to get an Icelandic domain as they are implementing strong protections for freedom of speech on the internet thanks to the influence of the IMMI, and they’re also easier to get than .ie domains!

Predominantly this blog will just be a place for me to publish projects I’m working on at the moment. Until now I have limited myself to 140 characters and haven’t been able to get ideas across as I’d like. I also have various half-finished projects left lying around. I’m hoping that actually having a place to publish and get some feedback on my projects will give me the motivation I need to complete them. I should have my first proper post up in a day or two outlining some work I have done analyzing the Tor hidden service DHT. Stay tuned!