Backing & Hacking

    • A Bug-Free, Downtime-Free, Major-Version Upgrade of Elasticsearch

      Some parts of your software stack can be tricky to upgrade. In our case, we upgraded to Elasticsearch 0.9 over two years ago; since then, it has become unsupported, a CVE affecting developer machines was announced against it, and our Java 6 runtime accumulated several CVEs of its own. On top of all that, search is a complicated feature and difficult to test.

      The Experiment

      We decided to bite the bullet. But what was the upgrade path? We approached the upgrade as an experiment, with the following hypotheses:

      • ES 1.7 searches would be faster and more stable/reliable than on ES 0.9
      • A Java 8 runtime would also give us a performance boost over Java 6

      As part of our philosophy of continuous delivery, we also required there be zero downtime during the switch.

      The Method

      • Launch a new ES 1.7 cluster with the same settings and number of nodes
      • Index data into both 0.9 and 1.7 clusters
      • Switch our search features to 1.7, one by one
      • Test our hypotheses by comparing response times and mismatches, using GitHub's scientist gem

      The scientist gem calls itself a "Ruby library for carefully refactoring critical paths." It's similar to feature flags, but adds metrics and can run multiple code paths in the same context.

      An experiment with scientist for our ES upgrade looked like this:


      def response
        experiment = EsSearch::UpgradeExperiment.new name: "es-search-upgrade-faqs"
      
        # Control: ES 0.9
        experiment.use { request(...) }
      
        # Candidate: ES 1.7
        experiment.try { with_new_elasticsearch_client { request(...) } }
      
        # Store search term from mismatches in Redis  
        experiment.context(search_term: @search_term)
      
        # Clean the mismatched results that we store in Redis
        experiment.clean { |results| extract_ids_from_results(results) }
      
        # Tell scientist how to compare the results
        experiment.compare do |control, candidate|
          extract_ids_from_results(control) == extract_ids_from_results(candidate)
        end
      
        experiment.run
      end
      

      The Results

      We were able to switch some search features over to ES 1.7 very quickly. Our FAQ search is used infrequently, but the experiment results were enough to show that ES 1.7 was slightly slower on average:

      Response Times in milliseconds: Candidate - Control (negative is better)

      But on the bright side, we didn't see any mismatches between ES 0.9 and ES 1.7 results!

      We found more issues with other features, such as our project search tool. Performance was often slightly better:

      Response Times in milliseconds: Candidate - Control (negative is better)

      But we saw mismatches in about 15% of the results:

      Number of Mismatched Results in ES 1.7 against ES 0.9

      As it turned out, when we looked into the mismatches, both clusters returned the same results; they just occasionally came back in a slightly different order! The change in sorting was an acceptable difference for us.

      Another search feature's experiment mysteriously showed occasional mismatches. After investigating, we found that it stemmed from some missing documents in the ES 1.7 cluster. These documents had been rejected during our bulk indexing because of a limit on the bulk index threadpool size in ES 1.7. Ironically, that limit had been added just one patch version above the old ES 0.9 version we were running. :D
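
      Bulk requests in Elasticsearch can partially fail without raising an error: rejected items are reported per item in the response body. Here's a hedged sketch of the kind of check that surfaces these silent rejections, using the elasticsearch-ruby client (the index name and document are illustrative):

      require 'elasticsearch'

      client = Elasticsearch::Client.new(url: 'http://localhost:9200')

      response = client.bulk(body: [
        { index: { _index: 'faqs', _type: 'faq', _id: 1 } },
        { question: 'How do rewards work?' }
      ])

      # The request as a whole "succeeds" even when individual items are
      # rejected, e.g. with HTTP 429 when the bulk threadpool queue is full.
      if response['errors']
        rejected = response['items'].select do |item|
          action = item.values.first
          action['error'] || action['status'].to_i >= 400
        end
        # Log and re-queue the rejected items instead of dropping them silently.
        warn "#{rejected.size} bulk items were rejected"
      end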

      Lessons Learned

      After we completely switched our search features over to ES 1.7, we found that our two hypotheses were wrong: ES 1.7 running on Java 8 didn't perform better than ES 0.9 on Java 6. The difference was marginal though, so being on the latest supported version was worth the upgrade. 

      If we use the scientist gem again in the future, it’ll probably be with a smaller set of changes, since correctly analyzing the results of an experiment can take time. If you need to do something similar, this gem is worth checking out. We're very happy that this upgrade was done with no disruption and we're now on a current version of Elasticsearch.

    • Our SQL Style Guide

      From beginners working towards their first commits to experts trying to ease into a new codebase, style guides represent valuable investments in helping your team work together.

      Since much of our Data Team's day-to-day work involves querying Redshift using SQL, we've put time into refining a query style guide. Many of the recommendations in our guide have been unapologetically lifted from previous guides we've encountered at past jobs, but much of it also stems from things we've discovered collaborating on thousands of queries.

      Here's a sample of how to format SELECT clauses:

      SELECT

      Align all columns to the first column on their own line:


      SELECT
        projects.name,
        users.email,
        projects.country,
        COUNT(backings.id) AS backings_count
      FROM ...

      We've got other sections on FROM, JOIN, WHERE, CASE, and how to write well-formatted Common Table Expressions.

      Check out the full guide here.
    • This is the story of analytics at Kickstarter

      If you’ve built a product of any size, chances are you’ve evaluated and deployed at least one analytics service. We have too, and that is why we wanted to share with you the story of analytics at Kickstarter. From Google Analytics, to Mixpanel, to our own infrastructure, this post will detail the decisions we’ve made (technical and otherwise) and the path we’ve taken over the last 6 years. It will culminate with a survey of our current custom analytics stack that we’ve built on top of AWS Kinesis, Redshift, and Looker. 

      Early Days 

      In late 2009, the early days of Kickstarter, one of the first services we used was Google Analytics. We were small enough that we weren’t going to hit any data caps, it was free, and the limitations of researching user behavior by analyzing page views weren’t yet clear to us.

      But users play videos. Their browsers send multiple asynchronous JavaScript requests related to one action. They trigger back-end events that aren’t easily tracked in JavaScript. So to get the best possible understanding of user behavior on Kickstarter, we knew we would have to go deeper and look beyond merely which URLs were requested.

      While GA provided some basic tools for tracking events, the amount of metadata about an event (i.e., properties like a project name or category) that we could attach was limited, and the GA Measurement Protocol didn’t exist yet so we couldn’t send events outside the browser.

      Finally, the GA UI became increasingly sluggish as it struggled to cope with our growing traffic, and soon our data was being aggressively sampled, resulting in reports based on extrapolated trends. This was particularly problematic for reports that had dimensions with many unique values (i.e., high cardinality), which effectively prevented us from analyzing specific trends in a fine-grained way. For example, we’d frequently run into the dreaded (other) row in GA reports: this meant that there was a long tail of data which GA sampling could detect but couldn’t report on. Without knowing a particular URL to investigate, GA prevented us from truly exploring our data and diving deep.

      Enter Mixpanel

      In early 2012, we heard word of a service called Mixpanel. Instead of tracking page views, Mixpanel was designed to track individual events. While this required manually instrumenting those events (effectively whitelisting which behavior we wanted to track), this approach was touted as being particularly useful for mobile devices where the page view metaphor made even less sense.

      Mixpanel’s event-driven model provided a solution to the problems we were encountering with Google’s page views: we could track video plays, signups, password changes, etc., and those events could be aggregated and split in exactly the same way page views could be.

      Even better, we wouldn’t have to wait 24-48 hours to analyze the data and access all our reports — Mixpanel would deliver data in real time to their polished web UI. They also allowed us to use an API to export the raw data in bulk every night, which was a huge selling point when deciding to invest in the service.

      In May of that year we deployed Mixpanel, and focused on instrumenting our flow from project page to checkout. This enabled us, for the first time, not only to calculate things like conversion rates across project categories, but also to tie individual events to particular projects, so we could spot trends and accurately correlate them with particular subsets of users or projects.

      Pax Mixpanela

      For many years, Mixpanel served us incredibly well. The data team, engineers, product managers, designers, members of our community support and integrity teams, and even our CEO used it daily to dive deep on trends and analyze product features and engagement.

      As our desire to better analyze the increasing volume of data we were sending the service grew, we found their bulk export API to be invaluable – we built a data pipeline to ingest our Mixpanel events into a Redshift cluster. We were subsequently able to conduct even finer-grained analysis using SQL and R.

      The flexibility of Mixpanel’s event model also allowed us to build our own custom A/B testing framework without much additional overhead. By using event properties to send experiment names and variant identifiers, we didn’t have to create new events for A/B tests. We could choose to investigate which behaviors a test might affect after the fact, without having to hardcode what a conversion “was” into the test beforehand. This overcame a frequent limitation of other A/B testing frameworks that we had evaluated.
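
      To make the pattern concrete, here's a hedged sketch using the mixpanel-ruby gem; the event name, experiment name, and property keys are illustrative, not our actual schema:

      require 'mixpanel-ruby'

      tracker = Mixpanel::Tracker.new('YOUR_PROJECT_TOKEN')

      # The event is a normal product event; the experiment name and variant
      # simply ride along as properties, so no A/B-test-specific event is needed.
      tracker.track('user-42', 'Viewed Project Page',
        'experiment' => 'new-discover-layout',
        'variant'    => 'control'
      )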

      Build vs. Buy

      As Kickstarter grew, we wanted more and more from our event data. Mixpanel’s real-time dashboards were nice, but programmatically accessing the raw data in real time was impossible. Additionally, we wanted to send more data to Mixpanel without worrying about a ballooning monthly bill.

      By 2014, granular event data had become mission-critical for Kickstarter’s day-to-day existence. Whereas previously event-level data was considered a nice-to-have complement to the transactional data generated by our application database, we began depending on it for analyzing product launches, supplying stats for board meetings, and for other essential projects.

      At this point we started reconsidering the Build vs. Buy tradeoff. Mixpanel had provided incredible value by allowing us to get a first-class analytics service running overnight, but it was time to do the hard work of moving things in-house.

      A Way Forward

      As we loaded more and more data into our cluster via Mixpanel’s export API, Redshift became our go-to tool for serious analytics work. We had invested significant time and effort into building and maintaining our data warehouse – we were shoving as much data as we possibly could into it and had many analysts and data scientists using it full time. Redshift itself had barely broken a sweat, so it felt natural to use it to anchor our in-house analytics.

      With Redshift as our starting point, we had to figure out how to get data into it in close-to-real-time. We have a modest volume of data – tens of millions of events a day – but our events are rich, and ever-changing. We had to make sure that engineers, product managers, and analysts had the freedom to add new events and add or change properties on existing events, all while getting feedback in real time.

      Since the majority of our analytics needs are ad-hoc, reaching for a streaming framework like Storm didn’t make sense. However, using some kind of streaming infrastructure would let us get access to our data in real time. For all of the reasons that distributed logs are awesome, we ended up building around AWS Kinesis, Kafka’s hosted cousin.

      Our current stack ingests events through an HTTPS Collector and sends them to a Kinesis stream. Streams act as our source of truth for event data, and are continuously written to S3. As data arrives in S3, we use SQS to notify services that transcode the data and load it into Redshift. It takes seconds to see an event appear in a Kinesis stream, and under 10 minutes to see it appear in Redshift.
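
      As an illustration, here's a hedged sketch of the kind of call a collector makes to put an event on a Kinesis stream, using the aws-sdk gem (the region, stream name, and event fields are made up for the example):

      require 'aws-sdk'
      require 'json'
      require 'time'

      kinesis = Aws::Kinesis::Client.new(region: 'us-east-1')

      event = {
        event:       'project_page_viewed',
        user_id:     42,
        occurred_at: Time.now.utc.iso8601
      }

      # Records that share a partition key land on the same shard, which
      # preserves their relative order within the stream.
      kinesis.put_record(
        stream_name:   'events',
        data:          JSON.generate(event),
        partition_key: event[:user_id].to_s
      )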

      Here’s a rough sketch:

      This architecture has helped us realize our goal of real-time access to our data. Having event data in Kinesis means that any analyst or engineer can get at a real-time feed of their data programmatically or visually inspect it with a command-line tool we whipped up.

      Looker

      While work began on our backbone infrastructure, we also began seriously investigating Looker as a tool to enable even greater data access across Kickstarter. Looker is a business intelligence tool that was appealing to us because it allows people across the company to query data, create visualizations, and combine them into dashboards.

      Once we got comfortable with Looker, it dawned on us that we could use it to replicate much of Mixpanel’s reporting functionality. Looker’s DSL for building dashboards, called LookML, and its templated filters provided a powerful way to make virtually any dashboard imaginable.

      This made it just as easy to access our data in Looker as it was in Mixpanel: anyone can still pull and visualize data without having to understand SQL or R.

      As we became more advanced in our Looker development, we were able to build dashboards similar to Mixpanel's event segmentation report:

      Most significantly, we were able to take advantage of Kickstarter-specific knowledge and practices to create even more complex dashboards. One we’re especially proud of is a dashboard that visualizes the results of A/B tests:

      The Future

      Owning your own analytics infrastructure isn’t merely about replicating services you’re already comfortable with. It is about opening up a field of opportunities for new products and insights beyond your team’s current roadmap and imagination.

      Replacing a best-in-class service like Mixpanel isn’t for the faint of heart, and requires serious engineering, staffing, and infrastructure investments. However, given the maturity and scale of our application and community, the benefits were clear and worth it.

      If this post was helpful to you, or you’ve built something similar, let us know!

    • The Kickstarter Engineering and Data Team Ladder

      Over the last year, we've doubled the size of the Engineering and Data teams at Kickstarter. Prior to that growth, our teams’ structure was very flat, and titles were very generic. Now that we've got folks with differing levels of skill and experience, we need a structure to help us organize ourselves. We decided we should build an engineering ladder and define roles across the teams.

      Deciding to design and implement an engineering ladder can be tricky. It needs to start right, stay flexible as we evolve, and scale as we grow, and the process needs to be as consultative and inclusive as possible. Thankfully, earlier in the year, Camille Fournier, then CTO at Rent the Runway, shared her team's Engineering Ladder. It was enormously influential in guiding our thinking around how Engineering should be leveled and structured. (We should also thank Harry Heymann, Jason Liszka, and Andrew Hogue from Foursquare, who inspired Rent the Runway in the first place).

      We took the material and ideas we found in Fournier’s work and modified them to suit our requirements. We then shared the document with the team and asked for feedback and review. After lots of discussion and editing, we ended up with roles that people understood and were excited to grow into. We've now deployed the roles — and in the spirit of giving back to the community that inspired us to do this work, we wanted to share the ladder we created.


      Technical                  Data             People
      Junior Software Engineer   -                -
      Software Engineer          Data Analyst     -
      Senior Software Engineer   Data Scientist   Engineering Manager
      Staff Engineer             VP of Data       Engineering Director
      Principal Engineer         -                CTO

      You can see the full details here.

      If you’re in the process of thinking through how you organize your team, we hope this can be of some help. And if you use this as a starting point for building your own ladder, and tailoring it to your own needs, we’d love to hear about it!

    • Kickstarter Data-Driven University

      The Kickstarter Data team’s mission is to support our community, staff, and organization with data-driven research and infrastructure. Core to that, we’ve made it our goal to cultivate increased data literacy throughout the entire company. Whether it’s knowing when to use a line chart or a bar plot, or explaining why correlation does not equal causation, we strongly believe that basic data skills benefit everyone: designers and engineers, product managers and community support staff, all the way up to our senior team.

      During my time working at LinkedIn on their Insights team, our leadership helped establish a program called Data-Driven University (DDU). DDU was a two-day bootcamp of best practices on working with data: tips on how to communicate effectively using data, how to use data to crack business problems, and how to match a visualization with the right story to tell. It was a transformative experience for me; I witnessed leaders of some of the largest business units discover techniques to help their teams make better decisions with data.

      When I joined Kickstarter’s Data team last year, I saw an opportunity to use the same approach with our own staff. Our intention was to create a series of courses that was open to everyone, not just a select few; hence, Kickstarter Data-Driven University (KDDU) was born.

      First, we surveyed the company to gauge interest in a number of voluntary data-related sessions taught by our team. Analyzing the themes in our survey response data led us to settle on offering three sessions: Data Skepticism (how to think critically using data), Data Visualization (how to effectively present data visually), and Data Storytelling (how to communicate compellingly with data).

      After several weeks of prep work, we held four classes (including an additional workshop on conducting A/B tests). The results were encouraging: more than 50% of the company attended at least one class, and our final Net Promoter Score, taken by survey after KDDU wrapped up, was 73, on par with the Apple MacBook. Not bad! We also heard positive feedback directly from our staff, such as the following:


      “Broke down complicated terms/jargon and offered real-use cases to help the audience better grasp how data is analyzed/presented.”


      The Data team had such a good time presenting KDDU internally that we volunteered to give the seminar two more times. So in July, we partnered with New York Tech Talent Pipeline (NYTTP) for their Beyond Coding program and gave a slightly modified version of KDDU to their new grads and students looking to build skills before entering the workforce.

      Today, we’re making those slides available for you to leverage with your own teams to help increase data skills and literacy:

      Here are some of our takeaways from teaching data skills to our colleagues:

      Keep it simple

      We could have talked about our favorite Data team subjects: our infrastructure, the nuances of Postgres 8.0.2, or our favorite R packages … but we knew we had to keep data approachable for a broader audience. We decided to focus on giving our audience a set of simple rules and principles that would help them work with data more effectively in their day-to-day.

      Know your audience

      We sent out a brief survey to see what topics our coworkers wanted to learn about most. This made it easier to decide which topics to present, and meant we knew the topics we chose would interest our audience.

      Within the individual presentations we focused on selecting examples that would resonate with our audience, highlighting trends from actual Kickstarter data, insights into past A/B tests we’ve run, and other familiar and relevant stats.

      Always be measuring

      As an old boss used to say, if you can’t measure it, you can’t manage it. So after we completed KDDU, we sent out a second brief survey, this one to collect feedback on the overall selection of courses and the individual lessons. This data has helped refine our approach for a second round of KDDU sessions that we’re considering offering as our company grows.

      We couldn’t be more excited to share our experience with you, and hope you find it valuable for increasing data-driven decision-making and skills at your organization!

    • Introducing mail-x_smtpapi: a structured email header for Ruby

      At Kickstarter we use SendGrid to deliver transactional and campaign emails, and use SendGrid's X-SMTPAPI header for advanced features like batch deliveries and email event notifications. Developing the first of these features went well — but later features became entangled when we tried to share an unstructured hash that was unceremoniously encoded into an email header at a less-than-ideal time.

      Custom Mail Header

      Our solution was to add first-class header support to Ruby's Mail gem. This gave us a structured value object that we could write to from any location with access to the mail object, allowing our mailing infrastructure to remain focused and decoupled.

      Today we’re announcing our open source Mail extension, appropriately titled mail-x_smtpapi. With this gem you can write to a structured mail.smtpapi value object from anywhere in your mailing pipeline, including a Rails mailer, template helper, or custom mail interceptor.

      Example

      Here's a basic example from the gem's README to get you started. This variation from the Rails Guide gives you extra detail in SendGrid's email event notifications:


      class UserMailer < ActionMailer::Base
      
        def welcome(user)
          @user = user
      
          mail.smtpapi.category = 'welcome'
          mail.smtpapi.unique_args['user_id'] = @user.id
      
          mail(to: @user.email, subject: 'Welcome to My Awesome Site')
        end
      
      end
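
      Because the header is a first-class part of the mail object, you can also write to it from outside the mailer. Here's a hedged sketch of a custom interceptor that applies a default category just before delivery (the interceptor class and its fallback value are illustrative, not part of the gem):

      class SmtpapiDefaultsInterceptor
        # Rails calls this hook on registered interceptors just before delivery.
        def self.delivering_email(message)
          message.smtpapi.category ||= 'transactional'
        end
      end

      ActionMailer::Base.register_interceptor(SmtpapiDefaultsInterceptor)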

      Enjoy

      We hope you find this as useful as we did, or find inspiration here to develop header classes for your own custom uses. As always, we love feedback, especially in the form of pull requests or bug reports.

      If you take delight in discovering simple solutions to stubborn code, why not browse our jobs page? We're hiring!

    • Introducing cfn-flow: a practical workflow for AWS CloudFormation

      If you’re looking for a simple, reliable way to develop with AWS CloudFormation, check out cfn-flow on GitHub.

      As an Ops Engineer, I’m always seeking better ways to manage Kickstarter’s server infrastructure. It can never be too easy, secure, or resilient.

      I’ve been excited about AWS CloudFormation as a way to make our infrastructure provisioning simpler and replicable. Some recent greenfield projects provided a great opportunity to try it out.

      We quickly found we wanted tooling to consistently launch and manage CloudFormation stacks. And each project presented the same workflow decisions, like how to organize resources in templates, where to store templates, and when to update existing stacks or launch new ones.

      I built cfn-flow to reflect Kickstarter’s best practices for using CloudFormation and give developers a consistent, productive deploy process. Two especially helpful constraints of the workflow are worth highlighting:

      Red/black deploys

      cfn-flow embraces the red/black deployment pattern to gracefully switch between two immutable application versions. For each deployment, we launch a new CloudFormation stack then delete the old stack once we’ve verified that the new one works well. This is preferable to modifying long-running stacks because rollbacks are trivial (just delete the new stack), and deployment errors won’t leave stacks in unpredictable states. 

      Separate ephemeral resources from backing resources

      Since deployments launch and delete stacks, templates can only include ephemeral resources that can safely be destroyed. For our apps, that usually means a LaunchConfig, an AutoScalingGroup, and, optionally, an ELB with a Route53 weighted DNS record and an InstanceProfile. 

      Resources that are part of your service that do not change in each deployment are considered backing resources. These include RDS databases, security groups that let both new and old EC2 servers communicate, SQS queues, etc. We extract backing resources to a separate template that’s deployed less frequently. Backing resources are then passed as parameters to our app stack via our cfn-flow.yml configuration.

      cfn-flow is a command-line tool distributed as a RubyGem. You track your CloudFormation templates in the same directory as your application code, and use the cfn-flow.yml configuration file to tie it all together. Check out the cfn-flow README for details and examples.

      We’ve been using it for a few months with great success. It gives developers good, easy affordances to build robust services in AWS.

      I encourage anyone else interested in CloudFormation to give cfn-flow a try. If it’s not making your job easier, please file a GitHub Issue.
    • Introducing Telekinesis: A Kinesis Client for JRuby

      Kickstarter exists to help make it easier for people to create new things. And when it comes to code, there’s one very simple way to help others create — by sharing the things we’ve already built. That’s why, over the past month, we’ve been open-sourcing a new library each week. Today’s is called Telekinesis, and, well … we’ll let Ben explain it.

      At Kickstarter we use a variety of AWS services to help us build infrastructure quickly, easily, and affordably. Last winter, we started experimenting with Kinesis, Amazon’s hosted Kafka equivalent, as the backbone for some of our data infrastructure. After deciding that we needed a distributed log, we settled on using Kinesis based on cost and ease of operation.

      Kickstarter is all about Ruby, so it made sense for us to do our prototyping in Ruby. Since the Kinesis Client Library (KCL) is primarily built for Java, we quickly decided that building on top of JRuby was our best option. We already have some Java expertise in-house, so we also knew that running and deploying the JVM would be relatively straightforward. It’s been going so well that we haven’t looked back — despite Amazon’s announcement that they officially support Ruby through the Multilang Daemon.

      As part of open source month, we’re releasing Telekinesis, the library we’ve built up around the KCL. It includes some helpers to make using a Consumer from Ruby a little more idiomatic.


      require 'telekinesis/consumer'
      
      class MyProcessor
        def init(shard_id)
          @shard_id = shard_id
          $stderr.puts "Started processing #{@shard_id}"
        end
      
        def process_records(records, checkpointer)
          records.each do |r|
            puts String.from_java_bytes(r.data.array)
          end
        end
      
        def shutdown
          $stderr.puts "Shutting down #{@shard_id}"
        end
      end
      
      Telekinesis::Consumer::DistributedConsumer.new(stream: 'a_stream', app: 'my_app') do
        MyProcessor.new
      end
      

      It also includes a multi-threaded producer that we’ve been using in production for a couple of months. Head on over to GitHub for a closer look.

      Looking for more of our tools? Just poke around Backing & Hacking, or see our open-source projects on GitHub! And if you're excited by what you see, you might be even more excited to know that we're hiring...

    • Introducing Kumquat: A Rails Template Handler for RMarkdown

      At Kickstarter our data team works extensively in R, using Hadley Wickham’s essential R package ggplot2 to generate data visualizations.

      Last year, we began developing internal reports using knitr, a tool to render R output and images into HTML and PDFs. The reports looked great, but we wanted a way to both automate them and also integrate them within the Kickstarter Rails application.

      We needed to build a replacement for our daily-reporting infrastructure that could:

      • Get rendered and manipulated by Rails
      • Connect to our data warehouse, Redshift
      • Be capable of generating graphs via R and ggplot2

      So, as part of our month of Open Source, we’re announcing Kumquat, a Rails Engine designed to help integrate RMarkdown (and therefore anything R can produce) into Rails.

      At its core, Kumquat is a Rails template handler and email interceptor that uses R to send rich data reports.

      For example, consider a typical render call to a partial:

      render '_a_knitr_report'

      This partial is _a_knitr_report.Rmd, a regular RMarkdown file stored in your app:

      A Test Report for Kumquat
      ========================================================
      <...snip...>
      ```{r, fig.width=10, fig.height=8, echo=FALSE, message=FALSE}
      library(ggplot2)
      qplot(
        data = data.frame(x = runif(100), y = runif(100)),
        x = x,
        y = y
      )
      ```
      

      Which yields:

      For more technical details, head over to the README on GitHub.

      Slides!

      I presented an early version of Kumquat at a meetup at Kickstarter HQ in April. If you'd like more background about how and why I built Kumquat, check out my slides:

    • Open-Source Month, Week One: Caption Crunch

      Kickstarter exists to help make it easier for people to create new things. And when it comes to code, there’s one very simple way to help others create — by sharing the things we’ve already built. That’s why, all August long, we’ll be open-sourcing a new library each week. I'm David, and this time around, I’ll be your gracious host! (Here, let me take your coat.)

      We think creativity is for everyone, and part of living up to that belief is making sure our website works for everyone, too. We recently announced a feature that lets creators add subtitles and captions to their project videos, and watched creators use it to make their stories available to more and more people, across all sorts of different languages and levels of hearing — a win for accessibility and for bringing together a global community. Today, we’re excited to share a chunk of that feature with creative people all around the web: a tool we call Caption Crunch.

      Our site allows creators to type in their own subtitles and captions, but we wanted them to be able to import subtitle and caption files, too. (That’s helpful for creators who use a service to subtitle, caption, or translate their videos.) In order to import files, we need a parser to take the files apart, read them, and understand them.

      A glance at RubyGems and GitHub shows that many parsers for subtitle files exist. However, their code either didn’t parse appropriately, didn’t have great test coverage, or had stylistic issues. We believe in supporting other open source projects, but in this case, we decided to make our own.

      Hence Caption Crunch, a Ruby parser for subtitle files. If you need to import caption files into your app, consider trying it out! Currently, only VTT files are supported, but we’ve designed the gem with extensibility in mind — its adapter pattern lets you add support for new file types. You can code your own parser, and we’d love to see how you’ve tailored the gem to your own needs. Feel free to open an issue or pull request!

      Installation

      Add this line to your application's Gemfile:

      gem 'caption_crunch'

      And then execute:

      $ bundle

      Or install it yourself as:

      $ gem install caption_crunch

      Usage

      require 'caption_crunch'
      
      # returns a CaptionCrunch::Track instance
      track = CaptionCrunch.parse(File.new('sample.vtt'))
      # or
      track = CaptionCrunch.parse('WEBVTT')
      
      # track.cues is an array of CaptionCrunch::Cue instances
      track.cues.first.start_time
      track.cues.first.end_time
      track.cues.first.payload

      Using the CaptionCrunch.parse method, you can parse a subtitle file or string. The result of the parse is a bunch of Ruby CaptionCrunch::Cue objects within a larger CaptionCrunch::Track object. With them, you can insert the CaptionCrunch::Track and CaptionCrunch::Cue properties into your own database, or manipulate them however you need.
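
      For instance, here's a hedged sketch of persisting a parsed track with ActiveRecord; the Subtitle model and its columns are hypothetical:

      track = CaptionCrunch.parse(File.new('sample.vtt'))

      track.cues.each do |cue|
        # Subtitle stands in for whatever ActiveRecord model your app uses.
        Subtitle.create!(
          start_time: cue.start_time,
          end_time:   cue.end_time,
          payload:    cue.payload
        )
      end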

      Contributions

      Contributions are welcome! If you'd like to see a certain subtitle type, post an issue on GitHub.

      We hope the library is useful! Care for some tea on your way out?

      This is only the first of our weekly open-source libraries — don't forget to check Backing & Hacking for more, all August long. Or just see our open-source projects on GitHub! And if you're excited by what you see, you might be even more excited to know that we're hiring...
    • Joining RubyTogether

      At Kickstarter, we've built our platform on top of Ruby on Rails. It's the core technology we use, and we're very proud of the platform we've built with it. We're also contributors to the Ruby and Rails communities, both in terms of open-source contributions and engagement via conferences and talks. 

      This week, we're happy to announce we've also become members of RubyTogether. RubyTogether is dedicated to funding and helping the awesome volunteers who maintain the glue of the Ruby ecosystem: tools like Bundler, RubyGems, and the like.

      We're thrilled to be giving more back to the Ruby community, and we strongly encourage other large Ruby or Rails-based platforms to become members, too.

    • A/B Test Reporting in Looker

      One of the Data team’s priorities this year has been improving the Kickstarter A/B testing process. To this end, I’ve been focused on making it easier to set up, run, and analyze experiments. This makes it more likely we'll use data from experiments to inform product design.

      Until recently, we monitored A/B tests in an ad hoc way. We use our event tracking infrastructure to log A/B test data, so while a test was running, a Product Manager or Data Analyst watched the number of users in the experiment's event stream until it reached the required sample size. At that point, we ran the numbers through an R script and sent out the results.

      Enter Looker

      Kickstarter recently adopted a business intelligence tool called Looker to support data reporting and ad hoc analysis. Looker connects directly to our Redshift cluster, which is where we store the raw event data from our A/B tests. This made me wonder whether we could use Looker to monitor experiments and report results.

      One feature of Looker we like is the ability to save and schedule queries, with the results delivered via email. If we could find a way to analyze A/B tests via SQL, then Looker could handle the rest.

      Back to School

      How can we do statistics in SQL without access to probability distributions? There are methods for generating normally distributed data in SQL, but this approach seems like overkill. We don't need to recreate a standard normal distribution on the fly. The values don't change.

      My aha moment was remembering my old statistics textbooks with look-up tables in the back. By adding look-up tables for probability distributions to Redshift, we can get good approximations of power, p-values, and confidence intervals for the typical A/B tests we run. Although this means we're approximating a continuous distribution with a discrete one, we don't rely exclusively on p-values to interpret our tests, so a difference of a few thousandths of a point won't make much difference.

      The Nitty Gritty

      As an example, I'm going to use a common type of test we run — a hypothesis test of the difference of two proportions. (If you'd like to learn more about the statistics behind this test, this is a good place to start).

      To make this concrete, let's say we're testing a new design of the Discover page, and we want to know whether it affects the number of users clicking through to project pages.

      To generate a test statistic for this type of test, we need a standard normal distribution. I generated a set of z-scores and their probabilities in R and loaded this into Redshift as standard_normal_distribution.

      The table looks something like this:

      z_score                  probability
      0                        0.5
      0.0000009999999992516    0.50000039894228
      0.00000199999999939138   0.500000797884561
      0.00000299999999953116   0.500001196826841
      0.00000399999999967093   0.500001595769122
      0.00000499999999981071   0.500001994711402
      0.00000599999999995049   0.500002393653682
      0.00000700000000009027   0.500002792595963
      0.00000799999999934187   0.500003191538243
      0.00000899999999948164   0.500003590480523
      0.00000999999999962142   0.500003989422804
      0.0000109999999997612    0.500004388365084
      0.000011999999999901     0.500004787307365
      0.0000130000000000408    0.500005186249645
      0.0000139999999992924    0.500005585191925
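
      For illustration, the same values can be computed anywhere the error function is available. Here's a sketch of the equivalent computation in Ruby using Math.erf (the output file, range, and step size are arbitrary):

      require 'csv'

      # Standard normal CDF via the error function:
      # Phi(z) = (1 + erf(z / sqrt(2))) / 2
      def standard_normal_cdf(z)
        0.5 * (1 + Math.erf(z / Math.sqrt(2)))
      end

      CSV.open('standard_normal_distribution.csv', 'w') do |csv|
        csv << %w[z_score probability]
        0.0.step(6.0, 0.000001) do |z|
          csv << [z, standard_normal_cdf(z)]
        end
      end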

      Now let's say we've already calculated the results of our experiment for two groups: control and experimental. For each group, we have the number of unique users n who visited the Discover page, the number of unique users x who clicked through to a project page, and the proportion p = x / n. This can all be done with a query.

      In the sections below, I'll use the output of that query to calculate the sample proportion, standard error, and other sample statistics using named subqueries called common table expressions (CTEs). If you aren't familiar with this flavor of SQL syntax, you can think of CTEs as forming temporary tables that can be used in subsequent parts of the query.

      Using a CTE, we calculate p̂, the pooled proportion under the null hypothesis:

      ... ), p_hat AS (
        SELECT
          (control.x + experimental.x) / (control.n + experimental.n) AS p
        FROM
          control, experimental
      ), ...
      

      Next we calculate the pooled standard error under the null hypothesis:

      ... ), se_pooled AS (
        SELECT
          SQRT((p_hat.p * (1 - p_hat.p)) * (1 / control.n + 1 / experimental.n)) AS se
        FROM
          control, experimental, p_hat
      ), ...
      

      This allows us to calculate an exact z-score from the data:

      ... ), z_exact AS (
        SELECT
          ABS((control.p - experimental.p) / se_pooled.se) AS z
        FROM
          control, experimental, se_pooled
      ), ...
      

      Then we find the nearest z-score in our standard normal look-up table and use that to calculate a p-value:

      ... ), z_nearest AS (
        SELECT
          standard_normal_distribution.z_score AS z_score
        FROM
          standard_normal_distribution, z_exact
        ORDER BY ABS(standard_normal_distribution.z_score - z_exact.z) ASC
        LIMIT 1
      ), p_value AS (
        SELECT
          (1 - standard_normal_distribution.probability) * 2 AS p
        FROM
          z_nearest
        INNER JOIN standard_normal_distribution ON z_nearest.z_score = standard_normal_distribution.z_score
      ), ...
      

      Having a p-value is a good start, but we also want to generate confidence intervals for the test. While we're at it, we'd also like to conduct a power analysis so the test results only display when we've reached the minimum sample size.

      To do that properly, we need some details about the test design: the significance level, the power, and the minimum change to detect between the two variants. These are all added to the query using Looker's templated filters, which take user input and add them as parameters.

      Unfortunately, Looker cannot simply drop an arbitrary user-supplied value (e.g. 0.05) into any part of a query. To get around this, I filter a table column with the user input and then use the resulting value.

      For example, in the following section, the query takes user input as significance_level, matches it against the probability column of the standard_normal_distribution table (after some rounding to ensure a match), and saves that value as alpha:

      ... ), significance_level AS (
        SELECT
          ROUND(probability, 3) AS alpha
        FROM
          standard_normal_distribution
        WHERE
          {% condition significance_level %} ROUND(probability, 3) {% endcondition %}
        LIMIT 1
      ), ...
      

      Note Looker's syntax for what it calls templated filters:

      WHERE
        {% condition significance_level %} ROUND(probability, 3) {% endcondition %}
      

      If the user input is 0.05 for the significance_level filter, Looker converts this to:

      WHERE
        ROUND(probability, 3) = 0.05
      

      See the appendix below for the entire query.

      Admittedly, doing all this in SQL is kind of preposterous, but it means that we can add it to Looker as the engine of an A/B Test Dashboard. The dashboard abstracts away all the calculations and presents a clean UI for taking user input on the parameters of the test design, allowing people without any special engineering or data expertise to use it. Now that it's built into Looker, it's part of our larger data reporting infrastructure.

      Filters on the dashboard take user input on details about the test design

      After taking input about the test design, the dashboard calculates the number of users in each variant, their conversion rates, and the minimum sample size. If the sample size has been met, the dashboard also outputs a p-value and confidence interval for the test. The dashboard can be scheduled to run daily, and we can even set it up to email only when there are results to report.

      Emailed results

      Now when we implement a new A/B test, we add it to Looker so we can get daily status emails, including statistical results when the test is complete. This can be done by someone on the Product or Engineering teams, freeing up Data team resources to focus on designing experiments well and running more complex tests.

      The Results

      This kind of dashboard pushes Looker to its limits, so naturally there are some drawbacks to doing A/B test reporting this way. It separates the implementation of the test from the implementation of the reporting, so there is some duplicated effort. Furthermore, it only works for specific types of tests where the math can be handled by a SQL query and a static probability distribution.

      On the other hand, we're happy that Looker is flexible enough to allow us to prototype internal data tools. The A/B Test Dashboard has automated what was a very manual process before, and it has reduced the dependency on the Data team for monitoring and reporting the results of common types of tests. This all means we can run more experiments to create a better experience for our users.

      Find this interesting?

      If this kind of thing gets your data juices flowing, you should know we're hiring for a Data Scientist! Head over to the job description to learn more.

      Appendix

      Our query in full:

      WITH control AS (
        -- count x and n as floats
      ), experimental AS (
        -- count x and n as floats
      ), p_hat AS (
        SELECT
          (control.x + experimental.x) / (control.n + experimental.n) AS p
        FROM
          control, experimental
      ), se_pooled AS (
        SELECT
          SQRT((p_hat.p * (1 - p_hat.p)) * (1 / control.n + 1 / experimental.n)) AS se
        FROM
          control, experimental, p_hat
      ), z_exact AS (
        SELECT
          ABS((control.p - experimental.p) / se_pooled.se) AS z
        FROM
          control, experimental, se_pooled
      ), z_nearest AS (
        SELECT
          standard_normal_distribution.z_score AS z_score
        FROM
          standard_normal_distribution, z_exact
        ORDER BY ABS(standard_normal_distribution.z_score - z_exact.z) ASC
        LIMIT 1
      ), p_value AS (
        SELECT
          (1 - standard_normal_distribution.probability) * 2 AS p
        FROM
          z_nearest
        INNER JOIN standard_normal_distribution ON z_nearest.z_score = standard_normal_distribution.z_score
      ), se_unpooled AS (
        SELECT
          SQRT(((control.p * (1 - control.p)) / control.n) + ((experimental.p * (1 - experimental.p)) / experimental.n)) AS se
        FROM
          control, experimental
      ), significance_level AS (
        SELECT
          ROUND(probability, 3) AS alpha
        FROM
          standard_normal_distribution
        WHERE
          {% condition significance_level %} ROUND(probability, 3) {% endcondition %}
        LIMIT 1
      ), power AS (
        SELECT
          ROUND(probability, 3) AS beta
        FROM
          standard_normal_distribution
        WHERE
          {% condition power %} ROUND(probability, 3) {% endcondition %}
        LIMIT 1
      ), change_to_detect AS (
        SELECT
          ROUND(probability, 3) AS change_in_proportion
        FROM
          standard_normal_distribution
        WHERE
          {% condition minimum_change_to_detect %} ROUND(probability, 3) {% endcondition %}
        LIMIT 1
      ), z_alpha AS (
        SELECT
          standard_normal_distribution.z_score AS z
        FROM
          standard_normal_distribution, significance_level
        WHERE
          ROUND(standard_normal_distribution.probability, 3) = ROUND(1 - alpha / 2, 3)
        ORDER BY ABS(standard_normal_distribution.probability - (1 - alpha / 2)) ASC
        LIMIT 1
      ), z_beta AS (
        SELECT
          standard_normal_distribution.z_score AS z
        FROM
          standard_normal_distribution, power
        WHERE
          ROUND(standard_normal_distribution.probability, 3) = ROUND(beta, 3)
        ORDER BY ABS(standard_normal_distribution.probability - beta) ASC
        LIMIT 1
      ), confidence_interval AS (
        SELECT
          (experimental.p - control.p) - (z_alpha.z * se_unpooled.se) AS lower,
          (experimental.p - control.p) + (z_alpha.z * se_unpooled.se) AS upper
        FROM
          control, experimental, se_unpooled, z_alpha
      ), proportions AS (
        SELECT
          control.p AS p1,
          (control.p * change_in_proportion) + control.p AS p2
        FROM
          control, change_to_detect
      ), standard_errors AS (
        SELECT
          SQRT(2 * ((p1 + p2) / 2.0) * (1 - ((p1 + p2) / 2.0))) AS se1,
          SQRT((p1 * (1 - p1)) + (p2 * (1 - p2))) AS se2
        FROM
          proportions
      ), minimum_sample_size AS (
        SELECT
          (((z_alpha.z * se1) + (z_beta.z * se2))^2) / (p2 - p1)^2 AS n
        FROM
          z_alpha, z_beta, proportions, standard_errors
      )
      SELECT
        control.n AS step1_control,
        control.x AS step2_control,
        control.p AS rate_control,
        experimental.n AS step1_experimental,
        experimental.x AS step2_experimental,
        experimental.p AS rate_experimental,
        CASE WHEN control.n >= minimum_sample_size.n AND experimental.n >= minimum_sample_size.n 
             THEN confidence_interval.lower / control.p
             ELSE NULL
             END AS lower_confidence_interval,
        CASE WHEN control.n >= minimum_sample_size.n AND experimental.n >= minimum_sample_size.n 
             THEN (experimental.p - control.p) / control.p
             ELSE NULL
             END AS relative_change_in_rates,
        CASE WHEN control.n >= minimum_sample_size.n AND experimental.n >= minimum_sample_size.n 
             THEN confidence_interval.upper / control.p
             ELSE NULL
             END AS upper_confidence_interval,
        CASE WHEN control.n >= minimum_sample_size.n AND experimental.n >= minimum_sample_size.n
             THEN p_value.p
             ELSE NULL
             END AS p_value,
        minimum_sample_size.n AS minimum_sample_size
      FROM
        control, experimental, p_value, confidence_interval, minimum_sample_size;
      
    • We're going to Full Stack Fest

      Being an engineer at Kickstarter isn’t just about writing and deploying code, iterating on product features, and brainstorming new ideas. It’s also about being a part of the larger developer community: hosting meet-ups, sharing information, and finding ways to both learn from and support each other.

      One way we do that is by attending engineering conferences — and we’ve been busy this season! Here’s a little snapshot of what we’ve been up to: 

      Next up, we’ll be at Full Stack Fest in Barcelona. The weeklong programming event features a host of cool workshops and talks, plus MC Liz Abinante (@feministy). We’re also super excited that Full Stack is introducing live-captioning this year in an effort to make the conference more accessible. (On that note, did you know we launched our own subtitles and captions feature on Kickstarter recently?)

      If you want to learn more about Full Stack Fest, what they’re all about, and their take on building an inclusive conference, check out their website. Maybe we’ll see you in Barcelona!

      (Psst! Full Stack is still looking for sponsors. Full disclosure: we’re totally helping sponsor the live-captioning! ;)

    • Beyond Coding: A Summer Curriculum for Emerging Software Developers

      With nearly five open jobs for every available software developer, the need for qualified technical talent is higher than ever. Here in New York City alone, there are 13,000 firms seeking workers with highly sought-after tech skills: web development, mobile development, user-interface design, and more.

      So when Mayor Bill de Blasio called companies to action in support of the city’s Tech Talent Pipeline efforts, we teamed up with five other NYC-based companies — Crest CC, Foursquare, Tumblr, Trello, and Stack Overflow — to find ways to support emerging developers as they join the local tech ecosystem.

      Together, we’re introducing Beyond Coding, a new, free summer program that aims to equip emerging computer programmers with the skills to help them succeed at their first coding jobs. The goal? Provide New Yorkers who have a passion for technology with access to the mentoring, training, and support they need to really grow as developers. The curriculum is designed to address areas where junior-level technical talent might need an extra boost: it’ll cover professional networking, effective strategies for communicating technical ideas to a variety of audiences, how to prepare for an interview, and ways to gain programming knowledge outside the classroom. 

      Beyond Coding will be open to anybody in the NYC area who is currently looking for a job as a software developer (or something related), and has experience and knowledge of programming — but doesn’t have access to tools, resources, or a professional network of support. Eligible students will receive a formal certification of their course completion at the culmination of the 10-week program, and will be introduced to tech companies in New York City who are actively hiring junior-level developers. 

      Any students interested in participating in the Beyond Coding program this summer can register at beyondcoding.io. Employers interested in attending the hiring fair with certified students can email employers@beyondcoding.io for more information.

    • Meetup on April 8th: Open Source and Testing @ Kickstarter

      Please join us on Wednesday, April 8 at 6pm for a tour of two things we love at Kickstarter: open source and testing! RSVP via Eventbrite here.

      Light snacks and beverages will be provided — we're looking forward to a great night of tech chat!

      We'll be focusing on three topics:

      Kumquat: Rendering Graphs and Data from R into Rails (Fred Benenson – Head of Data)

      At Kickstarter our Data team works extensively in R, using Hadley Wickham’s world-class R package ggplot2 to generate data visualizations.

      Last year, we began developing internal reports using knitr, a tool to render R output and images into HTML and PDFs. The reports looked great, but we wanted a way to both automate them and also integrate them within the Kickstarter application.

      That’s why we built Kumquat: a Rails Engine designed to help integrate RMarkdown (and therefore anything R can produce) into Rails.

      I’ll go into more detail about how this works in practice and show some examples of how we’re using Kumquat at Kickstarter.

      Rack::Attack: Protect your app with this one weird gem! (Aaron Suggs – Engineering Lead)

      Modern web apps face problems with abusive requests like misbehaving users, malicious hackers, and naive scrapers. Too often, they drain developer productivity and happiness.

      Rack::Attack is Ruby Rack middleware to easily throttle abusive requests.

      At Kickstarter, we built it to keep our site fast and reliable with little effort. Learn how Rack::Attack works through examples from kickstarter.com. Spend less time dealing with bad apples, and more time on the fun stuff.

      Testing Is Fun Again (Rebecca Sliter – Engineer)

      Writing automated tests for a greenfield application is great. You define the behavior, code up the feature, and the tests pass!

      Testing becomes more challenging as an application grows. Updating behavior and corresponding test coverage can be confusing. Adding new tests can cause the test suite to run more slowly. These issues can be attributed to test architecture design. I'll provide examples of different test architectures to show how even big applications can have fun, fast tests.

      The Details

      Wednesday, April 8th
      6pm - 8:30pm
      58 Kent Street, Brooklyn, NY
      RSVP via Eventbrite here

      Spaces are limited and available on a first come first served basis.

      Note: our headquarters are located in Greenpoint, Brooklyn at 58 Kent STREET (at Franklin).

      There is a nearby Kent AVENUE, so please make sure to put in the correct address when heading here!

      The closest subway is the G train stop at Greenpoint Avenue. If you're headed west down Greenpoint Ave, take a right onto Franklin, then a left onto Kent Street.

      Our facilities are equipped with elevators, as well as several gender-neutral restrooms. ASL (American Sign Language) interpretation will be provided for this event. If you have any accessibility requests, please contact renee@kickstarter.com.

    • Functional Swift Conference

      Swift has given iOS developers a glimpse into a world that we may otherwise have never seen: the world of functional programming. Eighty developers took the time on a cold December day to discuss the technical and philosophical aspects of functional programming, and how we can incorporate these ideas into our everyday work. The conference was hosted in our beautiful theater, with six featured speakers giving 30-minute talks on everything from thinking functionally to using monads to generalize your code.

      If this kind of thing piques your interest, we're hiring for iOS!

      Natasha Murashev — The Functional Way

      Brian Gesiak — Functional Testing

      Justin Spahr-Summers — Enemy of the State

      Andy Matuschak — Functioning as a Functionalist

      John Gallagher — Networking with Monads

      Brandon Williams — Functional Programming in a Playground

    • Pull Requests: How to Get and Give Good Feedback

      We follow a typical GitHub flow: develop in feature branches off master, open pull requests for review and discussion, then merge and deploy. My favorite part of this process is the pull request. I think that the way a team interacts with pull requests says a lot about the healthiness of its communication. Here's what we've learned at Kickstarter about getting and giving good feedback.

      Giving Good Feedback

      Before you begin, consider that someone has potentially spent a lot of time on this code, and you are about to begin a semi-public conversation where you search for problems and suggest ways that the code could be better. You want to be polite, respectful, courteous, and constructive.

      All of these things can be accomplished by being curious.

      Start by searching for places that don't make sense or that act in surprising ways. When you find one, ask a question! Maybe you found a bug, or a way to simplify. In this case, asking allows the submitter to participate in the discovery. On the other hand, maybe the feature has a good reason that you overlooked. In this case, asking gives the submitter an opportunity to explain, and gives you a chance to learn something new.

      Suppose that your question was a misunderstanding, and it's all settled now. Or suppose that you figured out the answer on your own. That's not the end! You can still contribute: think about how the code differed from what you expected, and suggest how it might be clarified for the next person.

      The best part of this approach is that anyone can do it at any skill level. You'll either end up contributing constructively or learning something new. Maybe both! There's no bad outcome.

      Getting Good Feedback

      When preparing a pull request, remember that this is not an adversarial situation, it's not a step that you outgrow, and it's not a hurdle on the way to production. Imagine instead that you are a writer submitting a first draft to a panel of editors. No matter your skill level, you will benefit from fresh eyes.

      Before submitting, try to remove distractions. Skim through what you're about to submit. Does it have any obvious stylistic issues, or code left over from previous iterations? Did you make stylistic updates to unrelated code that could be merged and deployed separately? Now's your chance to clean up your branch and let people focus on the good stuff!

      Then write up a good description. Your goal in the description is to describe why you've done this work and how you've approached the solution. But that's not all: this is an opportunity to talk about where you'd like extra eyes and @-mention who you'd like to take a look.

      Once you've submitted and you're waiting for feedback, take yet another look through your code. In some cases you might seed the conversation by commenting on spots where you'd like to draw attention. But don't explain too much just yet! You don't want to spoil those valuable first impressions.

      By now coworkers are engaging with your pull request and beginning to ask questions. Be welcoming! Every question provides an opportunity to learn where your code caused someone to stumble. Remember that if you have to clarify in the pull request, you should additionally consider clarifying the code as well. Your goal is to commit code that not only accomplishes what it intends, but can be understood and maintained by your entire team.

      Teamwork

      When pull requests have healthy communication, they're moments that engineers can relish and look forward to. They provide an opportunity for participants to learn from each other and collaborate on making something of a higher quality than any single person might have achieved on their own. If you feel that pull requests have been missing something in your organization, try out some of these ideas!

    • Engineering Year in Review 2014

      Every year Kickstarter does a Year in Review post to look at what happened over the year. We highlight exciting projects and milestones, and show you the data from that year. This year the Kickstarter Engineering team has decided to do a mini-Year in Review. We think we had a pretty exciting year: we grew, we shipped awesome product (with a lot more to come in 2015), and we had a lot of fun doing it.

      So what did 2014 look like, by the numbers?

      • We started the year with 14 engineers and finished it with 20 engineers (including saying goodbye to three awesome folks).
      • We merged 725 pull requests on the Kickstarter platform.
      • We merged 45 pull requests on our payments processing application.
      • We deployed the Kickstarter platform 9,460 times.

      We also deployed some amazing features, changed the way we worked, and did a lot of other cool stuff. Here's a (small) sample:

      • We rolled out a major update to the look of the Project Page.
      • We hosted Engineering Meetups and the Functional Swift Conference.
      • We created Engineering leads, cross-functional teams, and hired a VP of Engineering.
      • We moved off Amazon and onto Stripe for our payments.
      • We started regular Engineering Hack Days and held another company-wide Hack Day.
      • We added analytics to the emails we send folks.
      • We partnered with Flatiron School to provide resources to students without college diplomas.
      • We created a dedicated Data team.
      • We made it easier for you to discover projects on Kickstarter.
      • We rolled out to a bunch of new countries (including introducing the Fastly CDN to improve international performance and mitigate DDOS attacks).
      • On the security front, we introduced two-factor authentication and converted to 100% SSL to keep y'all safer when using Kickstarter.
      • We extended our GitHub for Poets program to more of the company and had one of our Community team reach 100 commits!
      • We open-sourced our Lunch Roulette algorithm.
      • The White House used some of our code for their Year in Review!

      We've got lots more planned for 2015 and we look forward to sharing it with you! Happy New Year!

    • Growth of Kickstarter's Team and Tech

      I talked recently with Steve from RubyNow about how Kickstarter's Engineering efforts have grown since the first commit in September 2008. We discussed planning for the unknown, managing technical debt, transitioning from small to medium team sizes, and why a (mostly) monolithic Rails app has worked well for us so far. Give it a listen at the RubyNow blog!

    • E-Mail Analytics at Kickstarter

      I spent the summer working as an intern at Kickstarter through the HackNY program in New York City. When I started, Kickstarter was gathering minimal data about e-mail activity on its platform. E-mail is a key part of the lifecycle of a Kickstarter project: it's how project updates and backer surveys reach people, for example.

      My project for the summer was to create a system that Kickstarter engineers could use to gather more granular e-mail data. The goal was to better understand how users are interacting with the e-mails Kickstarter sends.

      Kickstarter uses a cloud-based SMTP provider called SendGrid to manage the delivery of e-mails to users. I built an e-mail analytics system for Kickstarter that integrates with SendGrid's webhooks. Webhooks enable SendGrid to send us data about e-mail events from their service, such as click, open or delivery events. That data is received by Kickstarter, where it is parsed and sent to our data stores. 

      For example, if the recipient of a backer reward survey opens that e-mail, SendGrid is notified of that event. SendGrid then posts data about that event to our callback URL. When Kickstarter receives this data, our system adds additional properties to the data and then sends it along to the analytics service Mixpanel and our data store at Redshift for our further analysis.
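
      To make that flow concrete, here's a minimal sketch of such a callback endpoint (a hypothetical Rails controller, assuming SendGrid's batched JSON event format; not our actual code):

        class SendgridEventsController < ApplicationController
          # SendGrid POSTs a batch of event hashes, each with fields like
          # "event" ("open", "click", "delivered"...), "email", and "timestamp",
          # plus any unique arguments we encoded into the X-SMTPAPI header.
          def create
            JSON.parse(request.body.read).each do |event|
              enriched = event.merge("environment" => Rails.env) # hypothetical extra property
              EmailEventPublisher.publish(enriched)              # hypothetical fan-out to Mixpanel and Redshift
            end
            head :ok
          end
        end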

      This is one of the first graphs I made using our new e-mail data, which shows the volume of Kickstarter e-mail events by type.

      We wanted more information about the processed e-mail messages than was offered by SendGrid's event notifications alone. So, we decided to add an X-SMTPAPI header to our e-mails. This header contains data about the associated e-mail, such as a project's id number. SendGrid saves any data encoded into this header and sends it back to us as part of the regular event notification data.

      The data sent in the X-SMTPAPI header consists of what SendGrid calls Unique Arguments and Categories. Unique Arguments can be any set of key-value pairs. The Category is a string naming the type of e-mail being sent.

      So, we can send Unique Arguments associated with a given email that look like this:
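
      (A hypothetical example; the actual keys depend on the e-mail being sent.)

        # Unique Arguments: arbitrary key-value pairs that SendGrid echoes
        # back to us with every event for this e-mail.
        unique_args = {
          "project_id" => 1042, # hypothetical: the project this e-mail concerns
          "user_id"    => 9376  # hypothetical: the recipient
        }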


      And we can send the Category for the email like this:
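
      (Again, the category name here is hypothetical.)

        # The Category labels the type of e-mail for aggregate reporting.
        category = "project_update"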

      You might be wondering when we're actually building the X-SMTPAPI header that gives SendGrid all of this useful data. To do that, I built an interceptor class. An interceptor is a feature of Rails ActionMailer, which gives you models and views to use for sending e-mail. An interceptor gives you a way to perform actions on e-mail messages before they are handed off to delivery agents. The interceptor class that I built creates and populates an X-SMTPAPI header for any e-mail marked as needing it. The interface for that class looks like this:
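
      Here's a minimal sketch of such an interceptor (class names and header tags are hypothetical, not our production code):

        # Registered once, e.g. in an initializer:
        #   ActionMailer::Base.register_interceptor(EmailAnalyticsInterceptor)
        class EmailAnalyticsInterceptor
          # Rails calls this hook on every outgoing message before delivery.
          def self.delivering_email(message)
            return unless message.header["X-Analytics"] # hypothetical tag set in the mailer method
            message.header["X-SMTPAPI"] = {
              unique_args: { "project_id" => 1042 }, # hypothetical arguments
              category:    "project_update"          # hypothetical category
            }.to_json
          end
        end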


      Once this header is built, the interceptor sends the modified mail message on to the delivery agent as normal. 

      Adding an e-mail to this analytics process amounts to one method call in its mailer method. The method tags an e-mail as needing analytics. The interceptor then adds the X-SMTPAPI header containing our unique arguments and category data to any e-mail with the analytics tag, before finally sending it to SendGrid for delivery to the recipient. 

      So, to sum everything up, this is how the e-mail analytics system I started building this summer works:

      • An e-mail is marked as requiring analytics in its mailer method. 
      • The e-mail is intercepted before it is handed to delivery agents, using a special interceptor class that builds the X-SMTPAPI header. 
      • SendGrid sends the e-mail.
      • SendGrid receives event notifications about the e-mail (delivery, click and open), which are then posted to our callback URL by their webhooks. 
      • Our callback URL parses the event notification data and hydrates it with new, richer data. 
      • The hydrated e-mail event data is sent to Mixpanel and Redshift for further analysis.

      I came into this project with very little knowledge of the complexities of e-mail delivery and tracking. This new feature touches many parts of Kickstarter's larger e-mail system, so having the chance to build something that could interface with that in the simplest way possible was a fun challenge. It's also exciting to have built a resource that enables Kickstarter to understand what kinds of e-mails are most meaningful to users — by analyzing open rates, for example. I'm happy that I succeeded in seeing this feature through to completion!

    • Building our new engineering job ad

      At Kickstarter we try to be thoughtful about culture and hiring. As a recent addition to the team I took up the task of drafting a new job ad for engineers. We decided our new ad needed to satisfy three criteria:

      • It should reflect our culture and values.
      • It should clearly explain the job we're offering, the person we're seeking and the environment you can expect at Kickstarter.
      • It should make the hiring process clear and transparent.

      One of my colleagues had previously suggested this job ad from the team at Hacker School as an example of an exceptional job posting that had really resonated with him. 

      I decided to adapt the Hacker School posting, including reusing much of their walkthrough of the interview process. Our interview process is identical, and their explanation of it is elegant, simple, and clearly sets expectations.

      *UPDATE* - When we posted the job ad and this blog post we didn't do an awesome job of acknowledging our reuse of some of Hacker School's job posting directly. The mistake was totally mine and I want to sincerely apologize for that omission. We've updated this blog post and the job posting to acknowledge that debt and linked to the original content.

      Our job ads are part of the Rails application that powers the Kickstarter platform, so they are added to the site via a GitHub pull request. I wrote the draft ad and created a pull request for it.

      Pretty much immediately, the team began to review it much as they would any other code on our site. Like code reviews, the responses covered style, content, and implementation (I managed not to mess up any code, which was excellent, as it was also my first commit at Kickstarter). And the review wasn't limited to the Engineering team: our Director of HR also reviewed the pull request.

      In the review, the team immediately hit on the areas we feel most strongly about, like diversity, inclusivity, and accessibility, as well as ensuring we make clear the types of engineers and skills we're seeking.

      Throughout the process we iterated on (and rebased) the pull request as people provided feedback and input. As a team, we're pretty proud of the final product. We think both the content and the process provide an accurate reflection of what makes Kickstarter Engineering successful and an awesome place to work.

    • Refactoring for performance

      Frequent Kickstarter visitors may notice that pages load a bit faster than before. I'd like to share how we refactored an expensive MySQL query to Redis and cut over 100ms off typical load times for our most prolific backers.

      If you're logged in, there's probably a little green dot next to your avatar in the top right of the page. That's your activity indicator, letting you know there's news about projects you've backed. There's a similar interface in our iPhone app that displays the number of unseen activity items.

      We implemented that feature years ago in the most straightforward way. We have a MySQL table that joins users to each activity item in their feed with a boolean field to indicate if the user has seen the item.
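
      In migration form, that table looked something like this (a hypothetical sketch, not our exact schema):

        class CreateUserActivities < ActiveRecord::Migration
          def change
            create_table :user_activities, id: false do |t|
              t.integer :user_id,     null: false
              t.integer :activity_id, null: false
              t.boolean :seen,        null: false, default: false
            end
            # Counting a user's unseen items means scanning a row per activity.
            add_index :user_activities, [:user_id, :seen]
          end
        end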

      That worked fine when our site was smaller. But as Kickstarter grew to millions of users, some with thousands of activity items, the query performance degraded. Performance was especially poor for our most prolific backers.

      We've had prior successes using Redis for quick access to denormalized data, so it seemed like a natural solution.

      At first, we weren't sure what Redis data structures to use to store all the counters. Should we use a simple string key per user, or one big hash for all users? With a little research about the hash-max-ziplist-entries config and some benchmarking, we realized that putting counters in hashes in groups of 512 yielded great memory usage with negligible CPU impact.
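
      Here's a sketch of that bucketing scheme (key names and numbers are illustrative, not our production code):

        require "redis"

        BUCKET = 512 # stay under hash-max-ziplist-entries so Redis keeps each hash compact

        # User 70,000 lands in hash "indicators:136" at field "368".
        def indicator_location(user_id)
          ["indicators:#{user_id / BUCKET}", (user_id % BUCKET).to_s]
        end

        redis = Redis.new
        key, field = indicator_location(70_000)
        redis.hincrby(key, field, 1) # a new activity item arrives
        redis.hget(key, field).to_i  # unseen count, e.g. for the iOS app
        redis.hdel(key, field)       # the user views their activity feed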

      Here's the annotated code my colleague Lance Ivy and I implemented. It's an example of the clear, maintainable, testable Ruby code we like to write.

      Here's the interface:


      indicator = Indicator::UserActivity.new(user_id)
      
      # When a new activity is added to a user's feed: 
      indicator.increment
      
      # When a user views their unseen activity: 
      indicator.clear
      
      # Does the user have any unseen activity? 
      indicator.set?
      
      # How many unseen activities does the user have? (for the iOS app)
      indicator.value

      Deploying

      While the code change was small, the indicator is rendered on virtually every page. Here's how I rolled out Redis indicators in small, easily verifiable parts:

      • Start writing new activity to Redis, while reading from MySQL
      • Backfill activity counts for Kickstarter staff from MySQL to Redis
      • Enable only staff users to read from Redis
      • Backfill all user counts to Redis (took ~2 days)
      • Enable reading from Redis for 10/50/100% of users in 3 quick deploys
      • Play Killer Queen

      Graphs

      So did this refactor make the site faster? Most definitely. Let's look at some graphs.

      I'll focus on the homepage where returning users often land. Here's average homepage render time before and after the refactor:

      Mean homepage performance for all users. Click to embiggen.

      We shaved ~60ms off the average load time by eliminating the Activity#find call.

      Note that the three deploy lines between 11:40 AM and 12:10 PM are when we enabled Redis indicators for 10/50/100% of users.

      The NewRelic graph is a mean over both logged-in and logged-out users (who of course don't have any activity), so it's not particularly indicative of a user's experience. Here's a more detailed graph of homepage performance for just logged-in users.

      Homepage performance for logged in users. Click to embiggen.

      Not only did we improve performance, but we greatly reduced the variance of load time. The site is faster for everyone, especially prolific backers.

    • Followup: Kickstarter Engineering Meetup

      photo by @chenjoyv

      Last Thursday we held a meetup at our office to discuss maintaining positive, inclusive engineering cultures, and it went swimmingly! Our team really enjoyed meeting people from other companies in the area and exchanging ideas about work environments, tools that we use, and everything in between. 

      Here's a brief summary of the talks given with links to their slides:

      Thanks to everyone who attended. Hope we see you again!

    • Kickstarter Engineering Meetup - Aug 21

      We’re hosting an engineering meetup at our HQ in Greenpoint! We want to discuss how to create and sustain positive, inclusive engineering cultures. We believe the conversation should include a diverse range of perspectives.

      Who: All are welcome to attend! We have three speakers lined up, each giving a 10-15 minute presentation:

      What: We’ll talk about the processes and tools that enable positive engineering cultures. Following the presentations, Kickstarter staff will facilitate loosely organized break-out discussions to make it easy for everyone to participate. We’ll provide snacks, beer and non-alcoholic beverages.

      When: Thursday, August 21, from 7 to 9pm

      Where: Kickstarter, 58 Kent St, Brooklyn. You can take the East River Ferry to Greenpoint or take the G train to Nassau Ave. and then walk or take the MTA shuttle bus to Greenpoint.

      How: Please RSVP here.

      Accessibility: The Kickstarter office is wheelchair-accessible. A professional transcriptionist will transcribe the presentations in realtime. The Kickstarter office has three non-gendered single-occupancy bathrooms.

      Kickstarter is committed to ensuring a safe, harassment-free environment for everyone. Please contact us if you have any questions or concerns.

    • Lunch Roulette

      Kickstarter's culture is a core part of who we are as a company and team. Our team hails from a hugely diverse set of backgrounds – Perry was working as a waiter at Diner when he met Yancey, most of our engineers studied liberal arts (myself included – Philosophy), and our community team is made up of former and current projectionists, radio hosts, teachers, funeral directors, chefs, photographers, dungeon masters, artists, musicians, and hardware hackers. Last year, we had the idea to facilitate monthly lunch groups as a way to see if we could accelerate the kind of inter-team mixing that tends to happen in the hallways and between our normal day to day work.

      In addition, groups would be encouraged to go for a walk, find a new place in the neighborhood to have lunch, and Kickstarter would pick up the tab.

      Shannon, our office manager at the time, and now our Director of HR, had the unenviable job of coming up with all of these lunch groups. The idea was to make them pseudo-random, so that staff wouldn't end up having lunch with the person they sat next to every day, and that, ideally, they'd meet people they'd never normally interact with as part of their day to day responsibilities.

      And, as our headcount has grown – we've hired half of Kickstarter between February 2013 and now – we also hoped that these lunches could introduce new staff to old.

      But Shannon quickly discovered that creating multiple sets of semi-random yet highly-varied lunch groups was not a trivial task!

      One of the biggest issues with keeping groups interesting was that moving a person from one group to another meant a cascade of changes that was tedious, and sometimes impossible, to reconcile by hand.

      So, after spending an entire weekend churning out six possible sets of a dozen groups of 4 people each, Shannon took me up on my offer to help build a formal algorithm to help automate what we had been calling Lunch Roulette.

      We put together a meeting and sketched out some constraints that a minimally viable Lunch Roulette generator would have to satisfy:

      • Lunch groups should be maximally varied – ideally everyone in a group should be from a different team
      • Groups should avoid repeating past lunches
      • We should be able to define what it means for a group to be varied
      • It should output to CSV files and Google Docs

      After a couple weeks of hacking together an algorithm in my spare time, I arrived at something that actually worked pretty well – it'd take a CSV of staffers and spit out what it thought were a set of lunch groups that satisfied our conditions.

      We've been using it for over 6 months to suggest hundreds of lunch groups and have been pretty happy with the results, and today I'm open sourcing it. But first, a little more about the algorithm.

      The Fun Part is How Lunch Roulette Works

      Lunch Roulette creates a set of lunches containing all staff, where each group is maximally varied given the staff's specialty, their department, and their seniority.

      It does this thousands of times, and then ranks sets by their overall variety. Finally, the set of lunch groups with highest total variety wins.
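
      In pseudo-Ruby, the search looks roughly like this (class and method names are hypothetical, not Lunch Roulette's actual internals):

        candidates = 1000.times.map do
          set = LunchSet.build(staff.shuffle, min_group_size: 4)
          set if set.valid? # passes the validations described below
        end.compact

        winner = candidates.max_by(&:variety) # highest total weighted variety wins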

      Command Line App

      Lunch Roulette is a command line application that requires a CSV file of staff "features", such as their team, specialty, and start date. It is run using the ruby executable, with the staff CSV passed as an argument:

          ruby lib/lunch_roulette.rb data/staff.csv
      

      Features are things like the team that a staffer is on, or the day they started. These features can be weighted in different ways and mapped so that some values are "closer" to others.

      Along with the various weights and mappings Lunch Roulette uses, configurable options include the number of people per group, the number of iterations to perform, and the number of sets to output:

          Usage: ruby lunch_roulette_generator.rb staff.csv [OPTIONS]
              -n, --min-group-size N           Minimum Lunch Group Size (default 4)
              -i, --iterations I               Number of Iterations (default 1,000)
              -m, --most-varied-sets M         Number of most varied sets to generate (default 1)
              -l, --least-varied-sets L        Number of least varied sets to generate (default 0)
              -v, --verbose                    Verbose output
              -d, --dont-write                 Don't write to files
              -h, --help                       Print this help
      

      A Dummy Staff

      So that you can run Lunch Roulette out of the box, I've provided a dummy staff (thanks to Namey for the hilariously fake names) dataset in data/staff.csv:

        user_id,name,email,start_date,table,team,specialty,previous_lunches
        4,Andera Levenson,andera@cyberdyne.systems,10/12/2011,3,Operations,,"1,10"
        48,Brittani Baccus,brittani@cyberdyne.systems,12/16/2013,3,Product Manager,,"6,11"
        59,Campbell Russell,campbell@cyberdyne.systems,11/25/2010,2,Community,,"1,5"
        35,Carolina Credo,carolina@cyberdyne.systems,6/6/2010,2,Communications,,"12,1"
        36,Colin Rigoni,colin@cyberdyne.systems,11/18/2013,4,Community,,"12,6"
        44,Collen Molton,collen@cyberdyne.systems,12/17/2013,2,Executive,,"0,10,1"
        12,Cornelius Samrov,cornelius@cyberdyne.systems,9/5/2011,2,Product Manager,Backend,"12,4"
        21,Damion Gibala,damion@cyberdyne.systems,2/16/2013,2,Engineering,,"2,13"
        60,David Graham,david@cyberdyne.systems,12/3/2013,4,Product Manager,Backend,"3,6"
      

      The use of a CSV input, as opposed to a database, is to facilitate easy editing in a spreadsheet application (a shared Google Doc is recommended) without the need for migrations or further application bloat. This allows non-engineers to add new staff, collaborate, and add new columns if needed.

      Accordingly, the date format of MM/DD/YYYY is specific to common spreadsheet programs like Google Docs and Excel.

      Note: There's a difference between team and specialty. While you and another person might have the same specialty, you might be on different teams. By default, Lunch Roulette puts precedence on preventing two people with the same specialty from having lunch together, since you probably work more closely with them than with people who merely share your team. The previous_lunches column contains a double-quoted, comma-delimited list of previous lunches, each with its own ID. If no previous lunches have taken place, IDs will be generated automatically (see the CSV Output section below for more info). All users need to have a user_id so Lunch Roulette can tell people apart, but it can be an arbitrary value for now.

      Configuring Lunch Roulette

      Mappings

      At the minimum, Lunch Roulette needs to know how different individual features are from each other. This is achieved by hardcoding a one-dimensional mapping in config/mappings_and_weights.yml:

        team_mappings:
          Community Support: 100
          Community: 90
          Marketing: 80
          Communications: 70
          Operations: 50
          Product: 40
          Design: 30
          Engineering: 20
          Data: 0
        specialty_mappings:
          Backend: 0
          Data: 20
          Frontend: 30
          Mobile: 50
          Finance: 100
          Legal: 120
        weights:
          table: 0.6
          days_here: 0.2
          team: 0.9
          specialty: 0.1
        min_lunch_group_size: 4
      

      Lunch Roulette expects all employees to have a team (Community, Design, etc.), and some employees to have a specialty (Data, Legal), etc.

      Caveat Mapper

      These mappings are meant to provide a 1-dimensional distance metric between teams and specialities. Unfortunately, the results can come out a little arbitrary – e.g., why is Community so "far" away from Engineering? I have some notes below about how I might fix this in future versions, but this approach seems to work well enough for now given the intent of Lunch Roulette. Having put a lot of thought into the best strategy for quantizing how teams and colleagues may differ, I'll say that almost all solutions feel unpalatable if you think about them too hard.

      Weights

      You should also specify the weights of each feature as a real value. This allows Lunch Roulette to weight some features as being more important than others when calculating lunch group variety. In the supplied configuration, team is weighted as 0.9, and is therefore the most important factor in determining whether a lunch group is sufficiently interesting.

        weights:
          table: 0.6
          days_here: 0.2
          team: 0.9
          specialty: 0.1
      

      It's not strictly necessary to keep the weights between 0 and 1, but doing so can keep scores more comprehensible. Finally you can specify the default minimum number of people per lunch group:

        min_lunch_group_size: 4
      

      When the total number of staff is not wholly divisible by this number, Lunch Roulette randomly assigns the remaining staff to groups. For example, if a staff comprised 21 people and the minimum group size was 4, Lunch Roulette would create four groups of four people and one group of five people, as in the sketch below.
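
      A sketch of how those leftovers might be folded in (hypothetical, not the exact implementation):

        def partition(staff, min_size)
          groups = staff.shuffle.each_slice(min_size).to_a
          # If the last slice is too small, spread its members across the other groups.
          if groups.size > 1 && groups.last.size < min_size
            groups.pop.each_with_index { |person, i| groups[i % groups.size] << person }
          end
          groups
        end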

      Determining Mappings

      The weights that Lunch Roulette uses to calculate team variety are specified in the config/mappings_and_weights.yml file. Team and specialty mappings effectively work as quantizers in the Person class, and if you add new features, you'll have to modify that class accordingly.

      For example, Community may be culturally "closer" to the Communications team than the Engineering team. I highly recommend that you tweak the above mappings to your individual use. Remember, the more precisely you define similarity between teams and specialties, the easier it is for Lunch Roulette to mix people into varied lunch groups. Seniority is calculated by subtracting the day the employee started from today, so staff who started earliest have the highest seniority. Not all staff are required to have values for all features. In particular, if every staff member has a specialty, Lunch Roulette may have a difficult time creating valid lunch sets, so it's recommended that no more than 30-40% have specialties.

      Previous Lunches and Validations

      Before Lunch Roulette calculates a lunch group's variety, the LunchSet class attempts to create a set of lunches that pass the validations specified in the class method valid_set. For a staff of 48 people with a minimum group size of 4, a set would contain a dozen group lunches. Out of the box, there are three validations Lunch Roulette requires for a set to be considered valid:

      • The set cannot contain any group where 3 or more people have had lunch together before
      • No group can contain more than one executive (a dummy previous lunch with the id of 0 is used to mark executives)
      • No group can contain two people with the same specialty (remember, specialties are different from teams)

      In most scenarios with at least one or two previous lunches, it is impossible to create a valid lunch set without at least one group having one pair of people who have had lunch before.

      Choosing a Set

      Remember, the set with the most heterogeneous lunch groups wins: variety is first calculated within groups, then aggregated across each set, and the set with the highest total variety is chosen.

      Group variety

      Once a valid lunch set is created, Lunch Roulette determines the variety of each group within the set thusly:

      1. Choose a feature (e.g. we will try to mix a lunch group based on which teams people come from)
      2. Choose a person
      3. Normalize the value of that person's quantized feature value against the maximum of the entire staff
      4. Do this for all people in the current group
      5. Find the standard deviation of these values
      6. Multiply this value by the configured weight
      7. Repeat this process for all features
      8. The group score is the sum of these numbers

      The resulting score represents how different each member of a given group is from the others across all features, and can be seen in the verbose output:

        Tony Reuteler (Design, Table 2),
        Campbell Russell (Community, Table 2),
        Idella Siem (Product Manager, Table 2),
        Fred Pickrell (Community, Table 3)
        Emails:
          tony@cyberdyne.systems,
          campbell@cyberdyne.systems,
          idella@cyberdyne.systems,
          fred@cyberdyne.systems
        Sum Score: 0.4069
        Score Breakdown: {
          "table"=>0.075,
          "days_here"=>0.04791404160231572,
          "team"=>0.28394541729001366,
          "specialty"=>0.0
        }
      

      The higher the sum, the more varied that particular lunch group.

      Set variety

      Since all sets have the same number of groups in them, we can simply sum the group scores to generate a per-set score. This represents the overall variety across all groups within a set and is used to compare sets to each other.
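
      In code, that could be as simple as (a sketch; the attribute names are hypothetical):

        # A group's score sums its per-feature deviations; a set's score
        # sums its groups' scores.
        def set_score(groups)
          groups.map { |group| group.scores.values.reduce(:+) }.reduce(:+)
        end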

      Formally

      I was interested in how Lunch Roulette could be represented formally using math, so I asked my colleague Brandon – Kickstarter's math-PhD-refugee-iOS-dev – for some help. After some beers and a whiteboard session, we arrived at a decent generalization of what's happening under the hood. It should be noted that any errors in the following maths are entirely my fault and any brilliance should be entirely ascribed to Brandon's patient insights.

      Let [inline_math]S[/inline_math] be the set of staff and [inline_math]\mathscr{U}[/inline_math] be the set of all partitions of [inline_math]S[/inline_math] into [inline_math]N[/inline_math] groups. Since [inline_math]\mathscr U[/inline_math] is very large, we narrow it down by throwing out lunches that we consider boring. For example, no lunch groups with 3 or more people who have had lunch before, etc.

      Then we are left with [inline_math]\mathscr U'[/inline_math], which is the subset of [inline_math]\mathscr U[/inline_math] of valid lunches.

      We define a "feature" of the staff to be a integer-valued function on [inline_math]S[/inline_math], i.e. [inline_math]f : S \rightarrow \mathbb Z[/inline_math].

      For example, the feature that differentiates teams from each other might assign 15 to someone on the Community team, and 30 to someone on the Operations team.

      It's important to note that this number doesn't represent anything intrinsic about the team: it's merely an attempt at mapping distance (albeit one-dimensionally) between teams. Future versions of Lunch Roulette should probably switch this to a function returning a vector of values encoding multi-dimensional characteristics about teams (e.g. [inline_math][0,1,0,1,1,1,0][/inline_math]).

      Let's fix [inline_math]M[/inline_math] such features, [inline_math]f_i : S \rightarrow \mathbb Z[/inline_math]. For a given feature [inline_math]f[/inline_math], let us define: [inline_math]||f|| = \max\limits_{s \in S} f(s)[/inline_math]

      We need [inline_math]||f||[/inline_math] so that we can normalize a given feature's value against the maximum value from the staff.

      It is also useful to apply weights to features so that we can control which features are more important. Let [inline_math]W_i \in [0,1][/inline_math] be the set of weights for each feature [inline_math]i=1, \ldots, M[/inline_math].

      Then we maximize the variety among lunch groups thusly: [math] \max\limits_{\mathscr{G}\in\mathscr{U}'}\sum\limits_{G\in\mathscr{G}}\sum\limits_{i=1}^{M}\sigma\left(\dfrac{f_i(G)}{||f_i||}\right)\cdot W_i [/math] In English, that's: for each feature inside each group, we normalize each person's value against the maximum value found in the staff, then calculate the standard deviation ([inline_math]\sigma[/inline_math]) of those normalized values.

      Then, all [inline_math]\sigma[/inline_math] are added up for the group and then multiplied by the weight given to the feature to achieve an overall variety metric for a given group. The sum of those [inline_math]\sigma[/inline_math]s inside a set represents its overall variety.

      Here's the inner loop (representing the innermost [inline_math]\sum[/inline_math] above) in LunchGroup which calculates a given group's score:

        def calculate_group_score
          # For each feature, normalize every person's quantized value against
          # the staff-wide maximum, take the group's standard deviation, and
          # scale it by that feature's configured weight.
          h = features.map do |feature|
            s = @people.map do |person|
              person.features[feature] / config.maxes[feature].to_f
            end.standard_deviation
            [feature, s * config.weights[feature]]
          end
          @scores = Hash[*h.flatten]
        end

      Lunch Roulette does this thousands of times, then plucks the set with the highest overall score (hence the [inline_math]\max[/inline_math]) and saves that set to a CSV.

      I've since discovered that Lunch Roulette is a version of the "Maximally Diverse Grouping Problem," and it seems some researchers from the University of Valencia in Spain have built software similar to Lunch Roulette in Java, using a couple of different methods for maximizing diversity.

      Gut Testing Lunch Roulette

      If specified, Lunch Roulette will output the top N results and/or the bottom N results. This is useful for testing its efficacy: if the bottom sets don't seem as great as the top sets, then you know it's working! This will output two maximally varied sets and two minimally varied sets:

        ruby lib/lunch_roulette.rb -v -m 2 -l 2 data/staff.csv
      

      If you wanted to get fancy, you could set up a double blind test of these results.

      CSV Output

      Unless instructed not to, Lunch Roulette will generate a new CSV in data/output each time it is run. The filenames are unique, based on MD5 hashes of the people in each group of the set. Lunch Roulette will also output a new staff CSV (prefixed staff_ in data/output), complete with new lunch IDs per staffer, so that the next time it is run it will avoid generating similar lunch groups. It is recommended that you overwrite data/staff.csv with whatever version you end up going with. If used with the verbose option, Lunch Roulette will dump a TSV list of staff with their new lunches so you can paste it back into Google Docs (pasting CSVs with commas doesn't seem to work).

      Take It For a Spin

      I've open sourced Lunch Roulette and it's available on GitHub under an MIT license.

      Thanks

      Lunch Roulette went from a not-entirely-serious side project into something much more interesting and now, I hope, something possibly useful for others. But I couldn't have done it without Shannon Ferguson having done all of the work manually, and Brandon Williams helping me with the math.

      Please consider forking it and letting us know if you use it.

      Happy Lunching!
