Gábor Osztroluczki

fellow of Bridge Budapest, Prezi, San Francisco, 2016

Community Life and Meetups

Meetups follow one another across the city, and the events cover every imaginable topic: yoga, IoT, cycling, artificial intelligence, computer infrastructure, music genres, cultural events… There is a meetup for everything.

I had the chance to go to several events. Perhaps I simply picked the good ones, but every one of them was packed.

The most inspiring for me was the one held at the Eventbrite office. Why? Let’s see!

Because they showed a tool that

  • is open source,
  • deploys components to production automatically,
  • scales automatically,
  • makes managing container-based applications easy.

Open source: check. Automatic deployment: check. Automatic scaling: check. But what is a container-based application? If I say Docker, it becomes clear instantly.
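In the simplest case, a container-based application is just an application packaged and launched as a Docker container. A minimal sketch, assuming Docker is installed and using the public nginx image purely as an example:

```shell
# Start a container from a public image in the background,
# mapping port 8080 on the host to port 80 inside the container.
docker run -d --name demo-web -p 8080:80 nginx

# The containerised application is now reachable from the host.
curl http://localhost:8080/

# List running containers, then stop and remove the example.
docker ps
docker stop demo-web && docker rm demo-web
```

The container name and port mapping here are illustrative; the point is that the whole application, with its dependencies, ships and starts as one isolated unit.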

Imagine a company that has to keep microservices available in every time zone of the world for its entire service range to function; that is what they use Docker for. But as soon as they started wiring many containers together, complexity shot up and they ran into the limits of containers: the containers could not communicate with each other smoothly. They worked around this by placing VPNs between the containers. And with a finished deployment of 40, 50, or even 110 layers, it takes a long time just to produce the testing state that would eventually go live.
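The communication limits come from the fact that containers are isolated from one another by default. One standard way to let them talk, sketched here with hypothetical container names, is a user-defined Docker network:

```shell
# Create a user-defined bridge network; containers attached to it
# can resolve each other by container name.
docker network create services-net

# Start two containers on the same network (names are illustrative).
docker run -d --name api --network services-net nginx
docker run -d --name worker --network services-net alpine sleep infinity

# From "worker", the "api" container is reachable by its name.
docker exec worker ping -c 1 api
```

This works on a single host; stretching such communication across machines and time zones is exactly where the VPN workarounds, and the pain, begin.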

So Docker was functional, but they wanted something even better. Google has developed multiple management tools to run its global server fleet. It started with a system called Borg, which worked and still works; it is robust, but not particularly pleasant to work with.

Thus they created Omega. It borrowed a lot from Borg, and a lot of it was later folded back into Borg. As it turned out, Google itself contributed greatly to preparing the Linux kernel for containers.

That makes two iterations of learning. Amazon Web Services has been around for years, so has Heroku, and Microsoft has Azure. Google sits on many millions of servers and has the experience to handle that quantity. What the world needed was a tool through which Google could generate revenue from the further utilisation of its servers, while users could configure, control, and use the computing capacity and storage made available to them. A control unit with a customisable REST API was called for. This is how Kubernetes was born.
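What that control unit looks like in practice: every Kubernetes operation goes through its REST API, and the kubectl command-line tool is just a client of that API. A minimal sketch of the deploy-and-scale workflow, assuming a running cluster; the deployment name and image are illustrative:

```shell
# Create a deployment; kubectl translates this into a request
# against the cluster's REST API.
kubectl create deployment web --image=nginx

# Scale it manually to three replicas...
kubectl scale deployment web --replicas=3

# ...or let the cluster scale it automatically based on CPU load.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# The same resource is visible through the raw REST API.
kubectl get --raw /apis/apps/v1/namespaces/default/deployments/web
```

The point of the design is that anything kubectl can do, any program with access to that API can do as well.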

It was a Google engineer who showed us Kubernetes at the Eventbrite meetup.

The presentation was of course prepared in advance, so everything went smoothly. He demonstrated the capabilities of Kubernetes on an ordinary laptop. An exchange between the speaker and a member of the audience during the question-and-answer session was telling:

“… we have been using Kubernetes in the live environment for quite some time and…”

And the answer was:

“…I would not use Kubernetes in the live environment for the time being…”

This also shows how the casual atmosphere and the relatively small audience of a meetup let you pick up, from just a few hints, that something is in the works with Kubernetes at Google.

At the end of the presentation the speaker, a British guy in jeans and a T-shirt, left the podium and answered the questions of the people surrounding him as if he dreamed of this architecture every night, as if he were explaining what the homework was about and how to solve a quadratic equation.

This atmosphere and culture has a self-reinforcing effect. Knowledge sharing lets new features appear in existing products in ever shorter iterations, and products and start-ups can spread like wildfire. With natural selection added to the equation, the Bay Area churns out novelty after novelty.