An Interview with Adam LaFountain

You remember Vine. It’s the six-second, looping video app that captivated micro-storytellers, musicians, and advertisers alike and catalyzed a world of digital creativity.

Vine was hugely popular for three short years until its parent company, Twitter, shut it down in October 2016, some 15,768,000 Vines later.

Millions of people experienced Vine’s moment in the sun, but few know that Vine might never have seen daylight had it not been for some inspired maneuvering by Twitter’s infrastructure team and, specifically, its former network architect, Adam LaFountain. The story of Adam and his team is instructive for anyone facing the performance perils and financial pitfalls of serving big network demands out of the public cloud.

Adam joined Twitter back in 2010, the same year the social media giant expanded its product to include retweets, inline images, and video viewing, which put enormous strain on Twitter’s existing platform. At the time, Twitter was running out of a single data center on the West Coast, outsourcing management and administration of those servers to a third party.

Adam’s role, initially as a team of one, was to scale Twitter’s infrastructure to meet the needs of its rapidly growing business and to move control of that infrastructure in-house. The move led to higher reliability and better performance — all at a lower cost to Twitter. And, for Twitter users, the move meant fewer unwanted visits from the Fail Whale.

Two years later, in the fall of 2012, Twitter acquired Vine, which at the time had not yet been released to the public. The acquisition came with one big obstacle: the entirety of Vine’s massive video traffic was being served out of a single AWS deployment in Ashburn, Virginia.

“That basically meant that if you were on the East Coast of the country, you could have a great experience with Vine. But if you were on the West Coast, much less overseas, you often got buffering delays or slow connect times or just a really terrible product experience,” explains Adam. “Executives were excited to use the product, then they’d go somewhere supposedly well connected like London, and it wouldn’t work.”

As you can imagine, the displeasure and panic trickled down quickly, giving Adam and his team a clear directive: Solve this. Which is exactly what they did.

Adam did a quick math comparison: “If you only have, let’s say, 50 to 100 gigs of your own CDN [Content Delivery Network] traffic, it’s not very cost effective to build out a backbone and build out those POPs [Points of Presence], but at the traffic levels that Twitter had reached, suddenly it becomes much easier to make the business case to build as opposed to keep on buying. You can clearly see the cost savings.”
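Adam’s back-of-the-envelope comparison boils down to a fixed-versus-variable cost break-even: buying commercial CDN capacity scales linearly with traffic, while building your own backbone and POPs front-loads a large fixed cost but cuts the per-unit rate. The Python sketch below illustrates the shape of that calculation. All of the dollar figures and rates are made-up assumptions for illustration; Twitter’s actual numbers were never disclosed.

```python
# Hypothetical build-vs-buy break-even sketch for CDN capacity.
# Every number here is an illustrative assumption, not Twitter's real data.

def annual_cost_buy(traffic_gbps, price_per_gbps_month):
    """Yearly cost of buying commercial CDN capacity: purely variable."""
    return traffic_gbps * price_per_gbps_month * 12

def annual_cost_build(traffic_gbps, fixed_opex, variable_per_gbps_month):
    """Yearly cost of running your own backbone and POPs:
    a large fixed cost plus a much smaller per-unit cost."""
    return fixed_opex + traffic_gbps * variable_per_gbps_month * 12

# At ~75 Gbps (the "50 to 100 gigs" range Adam mentions), buying wins:
print(annual_cost_buy(75, 1_000))             # 900,000 per year
print(annual_cost_build(75, 5_000_000, 200))  # 5,180,000 per year

# At multi-terabit scale, the fixed build-out cost is dwarfed
# by the per-unit savings, and building wins by tens of millions:
print(annual_cost_buy(5_000, 1_000))             # 60,000,000 per year
print(annual_cost_build(5_000, 5_000_000, 200))  # 17,000,000 per year
```

Under these assumed rates, the crossover arrives as traffic grows: the fixed cost of the build-out stays flat while the per-gigabit savings compound, which is exactly the business case Adam describes.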

Adam’s quick math led Twitter to build out its own CDN, which saved the company serious money: tens of millions of dollars per year in OpEx.

The efficiency didn’t just save cash; it also significantly increased performance for Twitter’s core services. “Being able to move that workload onto a relatively lean platform made the product really successful on a global scale,” says Adam. And, by taking more control over its infrastructure, Twitter was well equipped to avoid similar integration issues in future acquisitions.

In the rapidly changing and unforgiving internet world, agility is key. And even though Vine may not have turned out to be commercially viable long-term, we can thank Adam for giving us a lesson in infrastructure stewardship and for those few years of mesmerizing micro-form entertainment.