Rethinking Payments

Your friend sends you a link. It’s a heart-warming story of a community pulling together, and they need your help. Or maybe it’s this new device straight out of science fiction, and you can make it a reality by backing it. Or it’s a local artist with a bold new idea who needs support to bring their dreams to life. Point is, it’s on Indiegogo, and you want to give them money.

So you head over to the payment flow, and see this. Whoa.

[Screenshot: the old payment flow]

There’s a lot going on over many pages, and the code is even more confusing!

Payments is the beating heart of crowdfunding, and the controller that supports these pages has been around since Indiegogo was founded in 2007. Needless to say, features were added, experiments were run, and bugs were fixed until the code had become so clogged with technical debt that even minor changes could send the whole system into cardiac arrest.

Indiegogo’s fearless leaders, recognizing this development bottleneck, tasked the Iron Bank, our payments team, with performing bypass surgery. Our mandate was to create a clean, responsive, modular foundation for future features and experiments, then stitch it into place to take over for the old code.

One fundamental idea behind the pay flow, as we call it, was a series of UI components that were mostly pure and stateless, allowing them to be reused, reasoned about, and tested with relative ease. It also made things easier to discuss with design. One of the low-level components we created was the input field, with some nice animation to keep users informed as they type:

[Animation: the pay flow input field]

We used a few base-level components to create larger components for each of the key sections that Design had distilled from discussions with users and internal teams. The payment module demonstrates the flexibility of this approach.

[Screenshot: the payment module component]

To make users’ lives easier, we added some polish, like detecting which card the user is entering so we can show the right card icon. We also automatically insert the spaces and slashes users expect in card numbers and expiration dates to reduce ambiguity.
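
The detection itself is just prefix matching on the digits typed so far. A minimal sketch of the idea (illustrative only, not our production code, which lives in the pay flow’s UI components):

    # Illustrative: detect a card brand from the number's leading digits.
    CARD_PREFIXES = {
      /\A4/          => :visa,
      /\A5[1-5]/     => :mastercard,
      /\A3[47]/      => :amex,
      /\A6(?:011|5)/ => :discover
    }

    def card_brand(number)
      digits = number.gsub(/\D/, '')
      CARD_PREFIXES.each { |pattern, brand| return brand if digits =~ pattern }
      :unknown
    end

    card_brand('4242 4242 4242 4242') # => :visa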

Building up components in this way also made it easier to think about responsive design. The old code had dramatically different code paths for desktop and mobile payments, and though the user interfaces hid some of the differences, it meant double the work for any feature in this area. Since a key priority for the new version was to have one code path that supported multiple view sizes, we made sure the components were flexible about container widths, so they look good on everything from an iPhone 4 to an iMac.

With the UI looking pretty solid, we began linking it up with the backend. The old system had lots of implicit dependencies and business logic buried throughout it, so a large portion of our effort was tracking these down and writing tests to make sure no functionality was lost.

An interesting feature of the new system is that it unifies the process for two payment processors, PayPal and Stripe. When you click submit, Indiegogo asks Stripe to process the payment, and Stripe responds right away, allowing us to redirect the user to the thank-you page (Stripe also sends us a webhook later confirming the payment’s success). PayPal, however, redirects the user to PayPal to complete the payment, then redirects back to Indiegogo’s thank-you page (and sends a webhook confirming success). To support both processes with one code path, we have the server return a URL in both cases for the client to redirect to, and we treat the webhooks as the true response from each payment processor. Abstracting away the differences between processors makes it much easier to add features that affect both, and keeps the code and tests lean and DRY.
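
A rough sketch of the idea (the names here are illustrative, not our actual code): the endpoint always answers with a redirect URL, whether the processor charges synchronously or hands us a URL to send the user to.

    # Illustrative controller sketch: both processors resolve to a redirect URL.
    def create
      contribution = Contribution.new(contribution_params)

      redirect_url =
        case contribution.payment_method
        when :stripe
          charge_with_stripe!(contribution)   # synchronous charge attempt
          thank_you_url(contribution)
        when :paypal
          paypal_approval_url(contribution)   # user completes the payment on PayPal
        end

      # The client simply follows whatever URL comes back; the processors'
      # webhooks are treated as the authoritative confirmation later.
      render json: { redirect_url: redirect_url }
    end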

Having learned from past releases that bugs and missing requirements always seem to crop up at the last minute, we held two bug testing parties, one midway through when the UI was mostly done, and one a week or two before launch, with the whole company. Having everyone try to break the new system had the added benefit of preparing everyone for the upcoming launch, so they could speak knowledgeably to users about the changes. Many bugs were caught early on, but thanks to the two parties the launch itself was rock-solid.

We’d hoped that users would find the new experience easier to complete, allowing more of them to make it all the way through. At worst, we were even willing to accept a slightly worse conversion rate given how much easier the underlying code would be to build upon. As it turned out, users loved the new way to pay, with desktop contributions rising 14% and mobile contributions doubling!

Given the excellent user response, a solid code foundation for new feature development, and an almost bug-free launch, we declare the patient again in good health. See for yourself, and look for more new crowdfunding features from Indiegogo!

How we get coverage on parallelized test builds

We lacked a window into our code coverage, specifically to answer these questions:

  • Which files are covered and which are vulnerable?
  • How is our coverage changing over time?
  • Are there areas of the codebase that need extra testing?

It’s possible to generate coverage manually with RubyMine, the IDE that many of us use, but getting the full picture requires running every spec that might be related to your target file or method—in a codebase with as many files and tests as Indiegogo’s, that’s a time-consuming proposition.

SimpleCov

The SimpleCov gem watches as RSpec runs the tests and records which lines of code are hit, and how many times. It is smart enough to distinguish lines that were executed (represented in the resulting JSON as a positive integer), missed (a 0), and not relevant (null). The resulting JSON from a SimpleCov report might look like this:
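
(The paths and counts below are illustrative.)

    {
      "RSpec": {
        "coverage": {
          "app/models/campaign.rb": [1, 2, 2, 0, null, 1],
          "app/controllers/contributions_controller.rb": [1, 0, null, 3]
        },
        "timestamp": 1419282563
      }
    }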

(From these results, SimpleCov generates pretty HTML reports with percentages of lines covered by file and directory.)

The gem worked wonderfully for running a few local tests, but we wanted to see app-wide coverage, so we added SimpleCov to our CircleCI test build. Circle uses multiple VMs to parallelize large test suites, allowing ours to complete in a reasonable amount of time. The downside of this parallelization was that because each VM only runs a subset of all our tests, each VM’s coverage report is similarly incomplete. Specifically, we saw results claiming there were uncovered lines of code when those lines had in fact been covered by tests that ran on a different VM. These incomplete coverage reports were too inaccurate to be of much use.

Coveralls

That’s where Coveralls came in. Coveralls is a cloud-based service that hoovers up incomplete coverage reports you send it, merges them, and displays the results with stylish formatting. It’s a paid service for private repos, but pretty inexpensive and easy to set up.

We hoped to display coverage grouped by scrum team, but this wasn’t possible with the customization options Coveralls offers. We also ran into performance issues given the size of our codebase, so it became clear that we needed a solution tailored more specifically to our needs.

CircleCI

Given the desired features and scope of the task, a better option was to work directly with the service that runs our full test suite, CircleCI. We’d wanted to do this from the start, but after seeing the benefits of visible, accurate, and complete coverage reports with Coveralls, we decided it was worth the effort to make SimpleCov work within CircleCI. We reached out to CircleCI for advice and they told us about new features (since announced) around SSH access and scripting the VMs. To that end, we added the SimpleCov gem to our test build and created a rake task for code coverage to run after our tests had completed.

Our proposed solution followed these steps:

  1. Determine which container is the last to finish
  2. Have that container aggregate and merge the partial reports
  3. Generate and save the complete report as a Circle build artifact

The first step was easy—copy the reports from each VM.

When you have at least one report from each VM, you know you’re on the last VM to finish (because reports only appear after that box has finished its tests).
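
Sketched as plain Ruby (the environment variables are the ones CircleCI sets on each container; the cross-VM copy itself, which we do over SSH, is omitted here):

    require 'fileutils'

    REPORT_DIR = 'all_resultsets'   # partial reports from every VM end up here

    node_total = ENV.fetch('CIRCLE_NODE_TOTAL').to_i
    node_index = ENV.fetch('CIRCLE_NODE_INDEX')

    # publish this VM's partial SimpleCov resultset under a per-node name
    FileUtils.mkdir_p(REPORT_DIR)
    FileUtils.cp('coverage/.resultset.json', "#{REPORT_DIR}/resultset.#{node_index}.json")

    # ... copy the other VMs' resultsets into REPORT_DIR ...

    # a report from every node means this VM finished last, so it does the merge
    if Dir.glob("#{REPORT_DIR}/resultset.*.json").size == node_total
      puts 'Last container to finish; merging coverage reports.'
    end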

Merging the SimpleCov reports was the trickiest part to get working–existing documentation and the gem’s structure didn’t help much, but eventually we were able to verify the merged report was accurate by checking it against what we were seeing in Coveralls.
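
The merge leans on SimpleCov’s ResultMerger, which isn’t a heavily documented API, so the exact calls may differ between SimpleCov versions; roughly:

    require 'simplecov'
    require 'json'

    # rebuild a SimpleCov::Result from each VM's partial resultset...
    results = Dir.glob('all_resultsets/resultset.*.json').map do |file|
      SimpleCov::Result.from_hash(JSON.parse(File.read(file)))
    end

    # ...then merge them into one app-wide result and render the usual HTML report
    merged = SimpleCov::ResultMerger.merge_results(*results)
    merged.format!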

Finally, we added coverage groups, so our scrum teams can look at aggregate coverage for sections of the code that interest them. A little friendly competition to increase coverage might be in order!
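
Groups are ordinary SimpleCov configuration; ours look something like this (group names and paths are illustrative):

    # spec/spec_helper.rb (or a dedicated coverage config)
    require 'simplecov'

    SimpleCov.start 'rails' do
      add_group 'Payments',  'app/controllers/contributions'
      add_group 'Campaigns', 'app/models/campaign'
      add_group 'API',       'app/controllers/api'
    end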

The new system is extensible, free, and reliable—a vast improvement over the “run it in your IDE” days. Now that this infrastructure is in place, we are thinking about other improvements like a commit-specific coverage group, so it’s obvious whether the files that were touched in a given commit are well-tested.

We’ve considered packaging this code into a gem so other CircleCI users can benefit from targeted, merged coverage. Comment below if you’d be interested!

Implementing StrongParams in Complex Rails Applications with the ParamsPatrol Gem

Staying up-to-date with the latest revisions of the libraries and existing frameworks in your Rails deployment can be challenging. Here at Indiegogo, we’ve been using Rails 3.2 for quite some time, and have been looking to upgrade to version 4. Upgrading our version of Rails would give us numerous improvements, but several API changes (including security upgrades) were necessary to make the switch. And with these additions came the potential to introduce breaking changes to our web app. How could we make the changes that we wanted, while minimizing the risk of delivering a broken experience?

When doing any kind of development work, whether releasing a new feature or upgrading a library version, finding ways to mitigate risk is always beneficial. With a large, complex change such as this, a great strategy is to break the problem down into smaller portions that can be released independently. This has several advantages: work can be parallelized across multiple teams and paused more easily if something more pressing comes up, and bugs can be more easily correlated with the code that introduced them.

A big difference between Rails 3 and 4 is the way parameters are whitelisted before being mass-assigned to models (also known as mass assignment protection). In controllers, it is very common to see a line like this:
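
    # Representative example (the model and its fields are illustrative):
    @campaign = Campaign.create(params[:campaign])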

But what if we received a request that tried to inject parameters of its own? Something like this:
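
    # Hypothetical attack: the request smuggles in an attribute we never expose.
    params[:campaign]
    # => { "title" => "Save the Bees", "collected_funds" => "1000000" }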

We need a way to ensure that not just any parameter can be written to the database, only the ones we specify. Rails 3 handled this in the model: Campaign declared which attributes were allowed (via attr_accessible), and it was up to us to whitelist what was OK and what wasn’t.

In Rails 4, more of this protection is handled at the controller level, delegating less responsibility to the model. Doing this work in isolation would be a great step before our final version upgrade. Luckily, the Rails development team made this easy for us. The mass assignment protection was wrapped in its own separate gem that we could use, appropriately named Strong Parameters. All we needed to do was whitelist the parameters passed in to the controller.
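
With strong_parameters, that whitelisting is the now-familiar require/permit pattern (attribute names here are illustrative):

    def create
      @campaign = Campaign.create(campaign_params)
      # ...
    end

    private

    # Only explicitly permitted attributes ever reach the model.
    def campaign_params
      params.require(:campaign).permit(:title, :description, :goal)
    end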

Our first problem arose when we realized we’d need to make these changes to every one of our controllers at once. At the time we had over 30, and some were not so simple. We’d need a way to break this problem down even further. Upon further inspection we found how the gem was loading itself into all the controllers:
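
In the gem’s railtie, the hook looks roughly like this (paraphrased):

    ActiveSupport.on_load(:action_controller) do
      include ActionController::StrongParameters
    end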

Therefore, if we could remove this line we could then just
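
    include ActionController::StrongParameters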

on each controller on a case-by-case basis. Then we could change one controller at a time and roll our changes out slowly. So we forked the gem and commented out the line, and our teams were able to protect our controllers gradually over the course of several weeks.

As we started pushing our changes we ran into another issue. Some of our controllers were very large and took requests from various areas across the site. It wasn’t always clear where these requests would come from and what parameters they could contain. As careful as we were, sometimes we would miss a parameter in the whitelist and cause an exception to be (erroneously) raised. Our teams were quick to fix these issues, but it was still breaking functionality. How could we avoid this?

What if we didn’t raise an error, and instead logged when these events happened? The user wouldn’t see any difference in the experience, and we could learn which parameters we had missed. Our team went to work overriding methods in our fork of the gem to send a signal to our favorite error-reporting service, Airbrake. After pushing our changes to a new set of controllers, we patiently waited. Lo and behold, we received an Airbrake notification on the first day: requests were coming in from a new feature that we hadn’t anticipated, yet the feature still worked. Our team could log it as a bug and continue with their current development work without interruption. Smooth sailing.
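
The change in our fork amounted to swapping the raise for a notification. A rough sketch of the idea (the method name here is illustrative, not the gem’s actual internals):

    # Where the gem would raise on unpermitted parameters, report and move on.
    def handle_unpermitted_parameters(unpermitted_keys)
      Airbrake.notify(
        error_class:   'UnpermittedParameters',
        error_message: "found unpermitted parameters: #{unpermitted_keys.join(', ')}"
      )
      # no raise: the request continues and the offending keys are simply dropped
    end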

We suspect this feature would be useful for other Rails developers as well, so we took our modifications to the StrongParameters gem and released them as our own gem, ParamsPatrol. Please feel free to share it with others and send a merge request if you’d like to add any features. Happy upgrading.

ASCII Tables for Clearer Testing

I often find it difficult to understand a test. This can happen when the test was written by someone else, or even when I wrote it myself mere months ago. Usually the heart of the test is simple and each line of code is easily understood, but the context and setup can be extensive and located far away.

I’ve started using ASCII tables to define the initial test data. These tables are both concise and easy to understand, and that leads to simpler tests that are easier to maintain.

For context, at Indiegogo we use RSpec, Jasmine, and other testing frameworks in the behavior-driven development family. Our tests often specify nested contexts that enumerate the possible states before adding our expectations. A typical test looks something like this:
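
    # Illustrative example: nested contexts enumerate each combination of states.
    describe Campaign do
      describe '#contributable?' do
        context 'when the campaign is live' do
          context 'and the goal has been reached' do
            it 'is contributable' do
              campaign = Campaign.new(state: 'live', goal_reached: true)
              expect(campaign).to be_contributable
            end
          end

          context 'and the goal has not been reached' do
            it 'is contributable' do
              campaign = Campaign.new(state: 'live', goal_reached: false)
              expect(campaign).to be_contributable
            end
          end
        end

        context 'when the campaign has ended' do
          it 'is not contributable' do
            campaign = Campaign.new(state: 'ended', goal_reached: true)
            expect(campaign).not_to be_contributable
          end
        end
      end
    end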

With this structure you know every possible state is tested.

Unfortunately, these structures often explode into huge files with thousands of lines of code. It’s pretty difficult to know the expected state for a line of code when the test data is initialized in multiple places hundreds of lines away.

Now, consider an alternative organization for the example above using ASCII tables:
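
    # The same states and expectations as above, summarized in one table;
    # each row drives one example.
    CAMPAIGN_STATES = <<-TABLE
      | state | goal_reached | contributable? |
      |-------|--------------|----------------|
      | live  | true         | true           |
      | live  | false        | true           |
      | ended | true         | false          |
      | ended | false        | false          |
    TABLE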

The table provides a concise and easily-understood presentation of the various states and the expected result.

This is not a new idea. Ward Cunningham’s Fit: Framework for Integrated Test and Bob Martin’s FitNesse both show state and expectations in tables, but making tables with ASCII characters means that you can embed these in existing test files, yielding many of the benefits of Fit and FitNesse without adopting the whole framework.

Introducing The ATV Gem

We initially added tables as heredocs in some spec files and parsed them with locally defined methods. We have since extracted these table-parsing methods into an open source project on GitHub and released them as the ATV (ASCII Table Values) Ruby gem.

Building Tests With ATV

ATV returns your data as strings. Your code can use those strings however you want:

You can insert dynamic values using string interpolation:

With a little meta-programming you can include methods that are rspec assertions:

One challenge with expressing rspec assertions in tables is keeping the rspec failure message meaningful. Normally that message is generated from the describe, contexts and example descriptions.

A possible remedy uses the table to dynamically create the complete rspec example, including the description. Another approach is to add a failure message to your rspec assertion, as we did above.

Here a method is used to load the data (via ATV) and assign the resulting instances to instance variables so that specific, individual object instances can be accessed in the example:

Tables That Are Not ASCII Tables

Some of my coworkers were inspired by the tables concept but wanted something different. Their approach uses white space to organize a hash for easy reading:
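
    # Illustrative: plain Ruby hash, aligned so the states read like a table.
    CAMPAIGN_STATES = {
      live_funded:    { state: 'live',  goal_reached: true,  contributable: true  },
      live_unfunded:  { state: 'live',  goal_reached: false, contributable: true  },
      ended_funded:   { state: 'ended', goal_reached: true,  contributable: false },
      ended_unfunded: { state: 'ended', goal_reached: false, contributable: false },
    }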

Their solution is concise, easy to understand, and easy to create. It’s also plain Ruby code, which makes it easier to customize. Obviously ASCII tables are not the only way to organize your data.

How do you create these ASCII tables?

The benefit of these tables is easy to see, but they may be difficult to create. I suspect most modern editors have modes that ease the creation and maintenance of these tables; I prefer the built-in table editor that comes with Emacs Org mode, or you can use the terminal-table gem to create your tables.

RubyMine’s column selection mode helps, but I’m hoping this post will inspire you to create a RubyMine plugin that is column-aware like the table editor in Org mode. If you do, please share.

Are ASCII tables right for your test?

Scale is important when deciding whether to use an ASCII table or some other approach for establishing test data. An ASCII table is probably overkill if you are initializing just a few variables. On the other hand, if your test initializes two or more attributes with two or more states each, consider what the initialization would look like summarized in a table. Also consider organizing your data using arrays and hashes with whitespace to show structure.

Accessory gems, part II: Local Gemfiles

The Bundler Ruby gem does its job well: it ensures that your application sees only the gems listed in its Gemfile. Sometimes, though, you might feel like it’s doing its job too well. You might want to add a gem to your app temporarily, perhaps for monitoring or debugging, or your app might have a gem in its Gemfile that isn’t needed in every environment and that you don’t want in yours. Bundler just says no. If a gem is in your app’s Gemfile your app will see it; if not, it won’t. If you want to add a gem to or remove it from your app, you have to add it to or remove it from your Gemfile and bundle again.

Just editing your app’s Gemfile when you need to add or remove a gem has a couple of drawbacks:

  • It’s cumbersome. If you need to make the same change to your Gemfile regularly, that will get old fast. Besides, doing things manually and repeatedly is not what programming is all about.
  • It dirties your version control. You should be able to commit quickly and effortlessly, without having to pick through modified files to exclude changes not meant for production. And the fewer chances you have to mistakenly commit a change that you meant to be local-only, the better off we all are.

How can we give our Gemfiles some flexibility without those hassles? Bundler gives us a couple of openings: we can use a different name for our customized Gemfile to keep it out of the way of version control, and, since Gemfiles are just Ruby scripts, we can write code in our customized Gemfile to reuse our base Gemfile without copying and modifying it.

First, let’s give our customized Gemfile a new name. In part I, when we solved a related problem involving Gemfiles, we learned that we can tell Bundler to look for an app’s Gemfile under a different name by setting the BUNDLE_GEMFILE environment variable. If we name our local Gemfile Gemfile.local and set BUNDLE_GEMFILE to that, Bundler will use Gemfile.local.lock for the lock file name. We can tell our version control to ignore those files (e.g. for git, add them to our .gitignore) and never have to think about them again when committing. So Gemfile.local is a good way to have a customized Gemfile without version control hassles.

If you usually work on a single project, you can set BUNDLE_GEMFILE=Gemfile.local in your shell environment and forget about it. You might also want to configure your editor to set BUNDLE_GEMFILE before running your app or its tests, for example in RubyMine’s run/debug configuration defaults. If you work on some projects that do have a Gemfile.local and some that don’t, you can set BUNDLE_GEMFILE on the command line, as in the following example:
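
    # Point Bundler at the local Gemfile for a single command (illustrative):
    BUNDLE_GEMFILE=Gemfile.local bundle exec rspec spec/models/campaign_spec.rb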

You could also set and unset BUNDLE_GEMFILE with direnv, or do it in scripts or shell aliases.

Second, let’s see how we can reuse one Gemfile in another. Here’s a Gemfile.local that includes a base Gemfile and then adds and removes gems:
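
    # Gemfile.local (sketch; gem names are illustrative, and the dependency
    # tweak below pokes at Bundler internals that may change between versions)
    eval_gemfile 'Gemfile'      # pull in everything from the base Gemfile

    gem 'rack-mini-profiler'    # a gem we only want in this environment

    # forget a base-Gemfile gem we don't want here
    @dependencies.reject! { |dep| dep.name == 'debugger' }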

eval_gemfile is a method in Bundler’s DSL. It reads the file with the given name, instance_evals its contents and handles errors the way that Bundler does elsewhere. It’s an internal method, so it might go away; if it does, instance_eval(File.read('Gemfile')) will work pretty much the same way.

Adding gems is simple: just use the gem method as you would in a standalone Gemfile.

Removing gems is a little hackier. Peeking at Bundler’s DSL again, we see that the gem method accumulates dependencies in the @dependencies array. To forget a gem that was in the base Gemfile, just delete it from that array. (This example deletes the debugger gem, which is in my app’s Gemfile for the convenience of our CLI diehards but breaks RubyMine’s debugger.)

That covers everything I’ve needed to do in my Gemfile.local.

Finally, let’s solve a problem that we just introduced: how to preserve all of the dependencies from the base Gemfile. If we just bundle, Bundler will build a new Gemfile.local.lock with a potentially different set of dependencies than those in Gemfile.lock. Fortunately, if we copy Gemfile.lock to Gemfile.local.lock before we bundle, Bundler does exactly what we need: it reads the versions from the copied lock file, adds and removes the gems that we added and removed in Gemfile.local, and writes a new Gemfile.local.lock! To keep up with updates to the base Gemfile and Gemfile.lock, each time those files change we need to bundle the base Gemfile, copy Gemfile.lock to Gemfile.local.lock, and then bundle our Gemfile.local. That’s a bit of a pain, but it’s not so bad with a shell alias:
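
    # Illustrative alias (pick your own name): bundle the base Gemfile, refresh
    # the local lock file from its lock file, then bundle the local Gemfile.
    alias lbundle='BUNDLE_GEMFILE=Gemfile bundle && cp Gemfile.lock Gemfile.local.lock && BUNDLE_GEMFILE=Gemfile.local bundle'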

Whenever you check out a new version of the base Gemfile and Bundler complains that a gem is missing, just run the alias instead of a plain bundle like you would ordinarily do. (The alias assumes you’ve set BUNDLE_GEMFILE=Gemfile.local in your shell, which is why it points Bundler back at the base Gemfile explicitly before copying the lock file.) Alternatively, you could bundle, copy and bundle in a git hook.

Thanks to Ruby and Bundler’s open design, your app’s Gemfile isn’t the straitjacket that it first appears to be. With a little configuration you can keep your application’s carefully chosen and tested dependencies and still customize it to meet the needs of particular environments.

Choosing OAuth 2.0

Last June was an exciting time for the engineering team here at Indiegogo. In just over a month, we would release our first client, a sleek iOS app, and we sat around a conference table trying to decide how we were going to persist user sessions. “We’ve begun work on extending the legacy one-time-use token system for mobile login.” Everyone cringed at the thought. Though it seemed simple enough, extending the existing token system was becoming infeasible: it had originally been built to allow logins through email links, so we had to add basic functionality like revocation and the granting of multiple tokens. Not only was the code difficult to extend, our work seemed to be opening new vulnerabilities as well.

We sat around the table lamenting that we might not pull it off in time for the release date. But then Dan suggested OAuth 2.0, and I wondered: “What does Facebook login have to do with our token system?”

Many may think that OAuth 2.0 is just a way to log in to one site with credentials from elsewhere. The OAuth 2.0 RFC calls this Twitter/Facebook login the “Authorization Code” flow. As we discovered, the Authorization Code flow is only one of four flows in OAuth 2.0.

When a trusted client (a mobile app) needs to make authenticated requests to a resource (an API), one should use the Resource Owner Password Credentials flow. This perversely named flow is actually rather straightforward. In it:

  1. The app requests the username and password from the user (or “resource owner”)
  2. The app then passes the credentials to an authorization server (e.g. our API)
  3. The authorization server responds with a token, which can be used instead of a password in further requests
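
Concretely, the token exchange boils down to a single request to the provider’s token endpoint (the endpoint and field names follow the spec; the host and values here are illustrative):

    POST /oauth/token HTTP/1.1
    Host: api.example.com
    Content-Type: application/x-www-form-urlencoded

    grant_type=password&username=funder%40example.com&password=correct-horse-battery

    HTTP/1.1 200 OK
    Content-Type: application/json

    {"access_token":"8a2f...","token_type":"bearer","expires_in":7200}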

The primary benefit of using this flow is that we can keep users logged in without storing their credentials on the phone. The iOS app just retains an access token which regularly expires, and which can be revoked if compromised.

Additionally, using an established OAuth 2.0 flow provides many other benefits:

  • It is robust: countless security and web engineers have scrutinized and tested the spec for several years.
  • It is comprehensive: the authors considered a diverse set of use cases and created flows to cover them.
  • It is extensible: when new use cases become popular, users of OAuth 2.0 will augment the spec to handle them.
  • It is convenient: there is already a proven implementation of OAuth 2.0 in most languages. Most implementations provide basic tasks like token revocation, generation, and renewal.
  • It’s proven: OAuth 2.0 is a young standard (RFC 6749 was finalized in 2012), but it is already used in countless established applications. If anyone discovers serious problems with the protocol, the community will swiftly resolve them. The same cannot be said of our legacy token system.

After our discussion, it was pretty clear that we should build an OAuth 2.0 provider. Building our provider, oauthservice, was a snap thanks to a Rails gem, Doorkeeper.

We consider any iOS app to be public: the binary can be downloaded and inspected by anyone, so no baked-in secrets are actually secret. In our implementation, we therefore chose to drop the optional client ID and secret from the flow. This means that anyone can use our token-granting API, but without user credentials there’s not much a malicious user can do. Additionally, all communication happens over SSL, so credentials are never sent in the clear.

In the end, we were glad not to roll our own token system. With OAuth 2.0, someone already did the hard work. And with all our saved time, we wrote this fun blog post!

Accessory gems, part I: Private Gemfiles

A Ruby application’s Gemfile does a lot for you: not only does it ensure that your application runs with exactly and only the gems that it needs, but it is a single place where you can look to find out what those gems are, and it makes it easy to install those gems with a single command. That easy installation, though, is a bit of an attractive nuisance. It’s tempting to add a gem to your Gemfile not because your application actually needs it, but only so the gem is installed when you bundle and will be there when you need it. Gemfiles can become cluttered with gems that are needed when working on a project, but not in the running application.

These accessory gems aren’t just distracting. A gem’s functionality might not be appropriate in production (if it logs additional, perhaps sensitive, information, for example). If an accessory gem monkey-patches something that your application depends on, it can cause bugs or other problems. Even if an accessory gem doesn’t actually change your application’s functionality, loading the gem’s code increases the time your application takes to start up, which slows development, testing and deployment. And most gems depend on other gems, each of which can cause any of the same problems as gems which are actually in your Gemfile. What’s more, if an accessory gem depends on a gem that your application needs anyway, your Gemfile then has an extra requirement on the depended-on gem’s version, which can prevent you from upgrading the depended-on gem when you need to.

So, what to do about those extra gems? If a gem’s code is actually used within the running application, the best you can do is to include it only in the appropriate Bundler groups, and if the gem or its dependencies still cause problems you’ll just have to stop using it or deal with the problems some other way. But if all the gem does is provide executables, as deployment tools like Chef or static analysis tools like rubocop do, you have a couple of other options.

You could move the accessory gem to a new project altogether. That solution makes the most sense when there is code that can move to that new project too. For example, some applications want their Chef cookbooks in a separate project, since cookbooks can be large and might be developed by a different team or released on a different schedule than the application itself. A separate project is cumbersome, however, and when the application and its deployment have to change at the same time it takes extra effort to ensure that the right versions of each get to production in the right order.

Or you can have it both ways: just move accessory gems to their own Gemfiles within the same project. The other files that those gems need can be changed in sync with your application, but the gems themselves and their dependencies won’t affect your application at runtime. It does mean a little extra effort to install and invoke the accessory gems, but that’s easy to hide with a little scripting.

Let’s look at an example. The POODLE attack on SSLv3 motivated many of Indiegogo’s third-party service providers to sunset their support of that protocol. Several HTTP client gems that our main web application uses defaulted to SSLv3, so we needed to upgrade them to newer versions that defaulted to TLS. But each of those gems was used by several other gems, each with its own version requirement. Eventually we’ll have to upgrade or replace all of those HTTP-client-gem-using gems, but that’s a big job, and it will be easier for some gems if we wait for their maintainers to do the job for us. Fortunately, some of those gems didn’t need to be in our application’s Gemfile, and moving them into their own Gemfiles simplified the main application’s gem dependency tree enough that we could just update the HTTP-client gems that it uses to SSLv3-free versions.

The biggest drag on our application’s SSL/TLS usage was our Capistrano deploy, which uses Chef to find the instances to be deployed to. The application itself doesn’t use Capistrano and Chef, of course, so off they go to their own private Gemfile, say deploy/Gemfile:
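
    # deploy/Gemfile (an illustrative sketch)
    source 'https://rubygems.org'

    gem 'capistrano'
    gem 'chef'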

Now, how to make it easy for everyone to use the gems even though they’re in their own Gemfiles? We just provide a little wrapper script (‘binstub’) for each gem’s executable. Bundler will generate binstubs for you, but we want our binstubs to do a little more than Bundler’s, so we write them ourselves. To point Bundler to the private Gemfile, we use the BUNDLE_GEMFILE environment variable. To save users the trouble of first installing and then bundle-execing, we just run bundle install in the binstub before we bundle exec. In the Capistrano binstub (call it bin/cap):
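
    #!/bin/bash
    # bin/cap -- an illustrative sketch of the binstub
    set -e

    # point Bundler at the private Gemfile
    export BUNDLE_GEMFILE="$(dirname "$0")/../deploy/Gemfile"

    # install the accessory gems if needed, then hand off to the real executable
    bundle install --quiet
    bundle exec cap "$@"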

Running the script the first time creates deploy/Gemfile.lock (Bundler always creates a lock file whose name matches the name of the Gemfile), and we put it in version control just like a regular Gemfile.lock.

To deploy, we now just run bin/cap deploy instead of bundle exec cap deploy. And we can’t forget, because Capistrano is no longer in the main Gemfile, and if we mistakenly bundle exec cap, Bundler will tell us so.

There is one situation in which you need to be a little extra careful if you have multiple Gemfiles in a project. If you only work on one project for a given Ruby installation, or if you use RVM gemsets, you might be in the habit of using bundle clean to remove gems that you no longer use. If some gems are in their own Gemfiles, Bundler won’t know about them, and bundle clean will remove them. On the other hand, if all of your accessory gems’ binstubs bundle install for you, the accessory gems will simply be reinstalled the next time they’re needed.

Private Gemfiles are, then, a nice way to knock a few chips off of your monolithic Rails application. But there is another scenario that we haven’t handled. What if we want to use the same Gemfile in different environments, but we don’t want one of the gems in it, or we want to add a gem without having to copy and modify it? For the answer to that one, see our next episode, on local Gemfiles.

Reflecting on Shellshock

by Jeremy Carey and Dan Auerbach

Earlier this week a vulnerability nicknamed Shellshock was made public that exposed a critical flaw in the widely-used Bash shell, threatening endpoint security for a huge segment of servers and personal devices. By exploiting an error in how Bash parses functions defined inside environment variables, the security bug gives attackers who are able to pass environment variables to a Bash shell the ability to execute arbitrary code.
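
The widely circulated one-line check makes the mechanism concrete: Bash imports a function definition from an environment variable and, if vulnerable, also executes the command smuggled in after it.

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    # a vulnerable Bash prints "vulnerable" before "this is a test";
    # a patched Bash prints only "this is a test"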

While Bash itself is not an Internet-facing service, many Internet-facing services such as web servers and servers running SSH call Bash internally and are thus vulnerable to Shellshock. Security researchers have yet to determine the full ramifications of this bug, but given how ubiquitous Bash is, this vulnerability is likely to have lasting and serious security consequences for the Internet at large.

We investigated immediately and found no instances of our systems being exploited via this bug. To be proactive, we have also completed an audit of all of our systems to ensure that vulnerable versions of Bash are not in use. We pride ourselves on responding quickly to security issues such as Heartbleed (which happened earlier this year) and Shellshock, both because securing our user data is of the utmost importance, and because being a good Internet citizen means abiding by security best practices so that we can build a safe ecosystem together. This is why we use HTTPS transport encryption for all of our traffic, and proactively work to secure the user data entrusted to us.

Shellshock also reminds us that some of the common tools that people rely on the most and that are built from free and open source software can themselves be a vector of vulnerability. As a community, we need to ensure that this software gets regular scrutiny from security researchers. We hope that others will use our platform to fund such security research efforts. This is a cause we can get behind and will gladly support.