Standardized containers are great, but I am most excited about the innovation in the ships!

Standardization. We are seeing another go-around thanks to the enthusiasm around Docker: “an open platform for distributed applications for developers and sysadmins.”

Having various parties come together to look at an open view of services at this layer of abstraction is great news for developers and operators. The black box effect of having a container of stuff with a declarative way of defining how to connect it into the ship is only a Good Thing.

I wonder if we are getting a little giddy and proud of ourselves prematurely though.

We Have Been Here Before

I believe that we have seen this pattern in the past. I remember an experience that parallels this in some ways, and that is when we got Servlets and EJBs in the Java ecosystem. Before that time there were various Java servers with competing APIs and features. You would have to choose between various application servers and write to their specifications. You were pretty much locked in. There were some workarounds, and a few people would separate their logic into the world of (what would later be called) POJOs and have adapters that plugged them in. Most didn’t.

Then Jason Hunter and friends put together Servlets and we had a standard way to declare how a component as described in a piece of code would talk to its container, and the container could then take the abstract and make it physical. For example, your code would talk to “TheProductDatabase” and in XML land you would map that to the actual Oracle Database you were using behind the scenes. The standard and the container helped make that happen (along with other abstractions such as JDBC in this case).
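
That logical-to-physical contract was declarative. As a sketch of how it looked in a Servlet deployment descriptor (the resource name follows the example above; the rest is illustrative, and the actual Oracle connection details lived in the container’s own configuration, not in your code):

```xml
<!-- web.xml: the code-facing, logical side of the contract -->
<resource-ref>
  <description>The product database</description>
  <res-ref-name>jdbc/TheProductDatabase</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
```

Application code would then look up `java:comp/env/jdbc/TheProductDatabase` via JNDI and get back whatever `DataSource` the container had bound there, Oracle or otherwise.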

All seemed hunky dory. Many would build large scale enterprise systems in this manner. However, did you notice that there was a promise of shared components that never really materialized? Did you notice that most people didn’t migrate between multiple application servers that often?

Having the container standardized is fantastic, but it turns out that the ship that is transporting it still really matters. WebLogic grew (and in turn helped BEA grow) because their ship was superior. WebSphere was known as “WebFear”, yet over time enough was built on top of it (e.g. portal servers) that it was less about deploying to WebSphere and more about buying the solutions.

I was able to run a unique experiment back then. The business development team had a great idea to bring in a few bucks by showing how EJB could work across different application servers at the same time!

This led to “TSS Announces Portability Re-Launch on WebLogic and Oracle9iAS”. What wasn’t talked about so much was the pain of making it work, and that to do so we made choices you wouldn’t really make in the real world.

To run on more than one system we went to the bare minimum. We hid a lot by sharing state in the Tangosol Coherence layer, which either side could grab. This meant that we used “Bean Managed Persistence” at the entity layer. You probably didn’t have to work in the world of EJB (it proved to be a flop, overly complex for no reason other than politics) and you are happier for it. The main lesson from it all was that even with a fairly simple plugin model, getting something non-trivial working across the worlds had more side effects than you would guess.

I see that again with the world of Docker. We were in the world where Amazon was the gorilla, with very special purpose APIs for the cloud. If you couple to their systems then you are just that… coupled. As per usual, if you aren’t in the #1 spot as a vendor the answer is to standardize, and the most vocal effort is OpenStack (even though it is constantly maligned). The other approach is “if you can’t beat them, join them”, which is the path that Eucalyptus took.

A modern full stack has a ton of pieces to it these days. There are multiple front ends (mobile, Web, email, SMS, watches!) talking to backends with special systems for machine learning, search, messaging, analytics, and data stores. You have the benefit of “the cloud” which in some ways is amazing, but back at TheServerSide I was able to run the entire thing no problem on a few servers for failover. I look back very fondly to the simplicity of those times.

As Docker kicks in it enables more ships to be built with more competition. Choosing the best ship is key, as in the world of production you need to answer a lot of questions such as:

  • How do I keep on top of my various services?
  • What is the health of my service? (where service isn’t just a container)
  • How is my overall system doing?
  • Where are the bad apples and can I weed them out?
  • What is my solution for fault tolerance and scalable performance?
  • How do I auto-scale for performance and also cost?
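
Some of these answers can be baked into the container contract itself. As one hedged sketch, assuming a Compose-style orchestrator and a hypothetical `/health` endpoint (the service name, image, and endpoint here are all illustrative), a declarative health check might look like:

```yaml
# Compose-style service definition (names and endpoint are illustrative)
services:
  web:
    image: myapp:latest        # hypothetical image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s            # how often the ship probes the container
      timeout: 5s              # how long each probe may take
      retries: 3               # mark unhealthy after three straight failures
```

The point is that “what is the health of my service?” becomes something the ship can answer and act on, restarting or weeding out the bad apples, rather than a question left for a human.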

I am happy to see those standard containers, but I really can’t wait to see the new ships that will be built, all with gadgets to help engineers deliver fantastic experiences to their customers.

I hope that some of the best technology wins this time. It rarely seems to, but the optimist in me wants to see it happen!
