Google I/O 2018: Integration in the Open


With Google I/O 2018 kicking into gear, I found myself looking back at my thoughts on last year’s event and thinking about what is changing, and what is continuing, on the journey for our developer platforms and ecosystems.

Technology exists within the framework of what is going on in the world. On the one hand, the world gets better for more people every day, but on the other hand we also know how fragile things are, and how much pain people are in as we see growing inequality in many areas. Within this context we are seeing technology companies look deeper into themselves and think harder about what is being created. Are we building experiences that are meaningful for people? That is the question at hand, and the consumer keynote shows some of what Google is thinking about here.

The Integration Story

One of the foundations of integration is componentization, and we are seeing great strides here across our platforms.

Android’s new Jetpack libraries accelerate development

This year builds on last year’s Kotlin news and shows our vision for mobile Android development. App bundling and dynamic delivery allow you to split up your app, giving users the ability to download just the pieces they need. If you want to reach all of your potential users and deliver the best possible experience to them, you should be focusing on the vitals, and our tooling and platform are working to make life much easier for you here. With Actions and Slices, you can bring pieces of your app to new surfaces on Android, right where users want them.

Android Studio 3.2 brings a slew of improvements and cuts a huge amount of time out of your development. Did you see Tor demo how quickly the emulator starts now? With Jetpack, we bring together the best of the Architecture Components with the support library, allowing you to develop much faster, reach more users with the results, and all with top-notch Kotlin support.

And this year Android really hits new form factors hard, with great updates across Wear and TV, and Android Things reaching 1.0.

Spotify shows off their Desktop PWA

On Web, if you follow the Lighthouse (newly updated for I/O!) well-lit path, you will be able to deliver ephemeral experiences that integrate deeply with the host operating system. Performance is still the number one area of focus, but you will see that we are broadly looking to get rid of friction wherever it lies. Web packaging takes fast and makes it instant, and it is now simple to get your users signed up, signed in, and able to quickly pay.

While the first PWAs that drew attention were mobile, desktop PWA support shows how your investment scales. Twitter has been able to take their “lite” mobile app and bring it to desktop/tablet and the Windows Store, and with the Chrome desktop PWA affordances you get an amazing experience that marries your own focused app with browser features such as URL sharing, find in page, and casting. The same investment has allowed Spotify to integrate their web experience, and Starbucks to double the usage of their PWA.

Chrome DevTools continue to improve with end-to-end support for the latest in layout (Grid), JavaScript (eager evaluation, async/await, etc.), and even end-to-end debugging for your node services. With Lighthouse baked in and lighting the way, you can see how far the frameworks have come to help deliver the fast experiences the Web needs. Polymer 3 and lit-html allow you to build blazingly fast experiences that are close to the metal, and they integrate well with Redux and friends. Angular 6 joins the custom elements party, and the Ivy runtime shows how much of a diet the framework has been put on, which is great news for enterprise products. And AMP is here for an opinionated fast path, with support for Stories and other new components that you can drop in and use. The Web is here for you, from rich content to complex apps that can tie into new ecosystems with WebAssembly.
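As a taste of the lit-html model, here is a minimal sketch (assuming lit-html is available as a module): templates are plain functions, and re-rendering only touches the parts of the DOM that actually changed.

```typescript
import { html, render } from 'lit-html';

// A template is just a function that returns a TemplateResult.
const greeting = (name: string) => html`
  <h1>Hello, ${name}!</h1>
`;

// The first render creates the DOM; subsequent renders only patch
// the dynamic parts that changed.
render(greeting('I/O'), document.body);
render(greeting('Web'), document.body);
```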

Turn on the new Linux support in ChromeOS settings

You may notice more Pixelbooks at I/O this year. I have been using desktop PWAs a lot recently, as I have been living on a Pixelbook. I had tried to make the switch to ChromeOS several times, and this is the first time that it has stuck.

I used to feel constrained inside a browser, but this has flipped since I changed the way I use the system. On my desktop, I normally have a set of pinned tabs and use Ctrl/Command-# to jump to a particular main app (email, calendar, tasks, chat). On ChromeOS I pin those as apps and use Alt+# to jump to them.

This frees up my browser. The “email” browser turns out to be my main source of tabs, and I see a new pattern forming. It turns out that a lot of “apps” are sources for browser tabs. I click on links in Gmail AND from Calendar AND Asana AND Chat AND Twitter. This drives an interesting world where the first tab is the main source of other experiences that can be tied to that app instance.

Having the desktop UI (three vertical dots) available brings the best of the browser to the app. The user agent is out of the way, but you have a quick path to find in page, font sizing, casting, autofill, and copying the URL for easy sharing… and you can imagine so much more.

With the release of secure Linux apps via VM/container goodness, I am able to run my favorite developer tools (Visual Studio Code, Atom, Android Studio) as well as other key Linux apps (including other browsers for testing). It’s still early, but it’s already great to be able to develop here.

Linux and the Web are great, but what about access to the entire Android and Assistant ecosystem? It’s all here, along with the world-class security of ChromeOS (and the ability to log in to a new machine and be right where you left off… which feels like my college days with X Windows!).

Reach the Assistant through Web and Android

In fact, the Assistant, and Actions on Google in general, is another perfect example of integration. Web developers, Android developers, and backend developers should all be able to get their experiences in front of Assistant users. With Actions for these platforms, you can get a broad reach, from something quick and direct all the way to a full conversational experience across text and voice, and with RCS you can leverage the texting experience all the way to Android Auto.

ARCore: now with Sceneform, Augmented Images, and Cloud Anchors

AR and VR are yet another example, where the technology is available for you to use the world around you. I can’t wait for a future browser for the world (whether it be Earth or something else immersive), and that world is a lot closer with the new tools that we announced today:

  • Sceneform is a new 3D SDK for Java developers
  • Augmented Images allows you to associate AR content with the real world
  • Cloud Anchors bring a shared collaborative environment to life. Your apps, on iOS or Android, can manipulate space in a way that all of your users can see.

ML SDK across the platforms; from edge to cloud

The other piece of integration is the role of client and server. We continue to live in a disconnected world, with variable levels of compute and network power on our devices married to amazing levels of compute on the server. Developers need the ability to drive as much as possible on the edge where it makes sense, but also use the power of the data center where needed. A strong theme at I/O continues to be the role of machine learning and AI, and not only do we have a lot to share on the side of Google Cloud and TensorFlow, but we keep bringing that to the client with APIs such as ML Kit, able to reach all platforms via Firebase.

In fact, ML Kit is able to infer on device, but can also use the Cloud to get more detail (e.g. the on-device model may detect a “bridge” in a photo, while the Cloud comes back with the more detailed “Golden Gate Bridge”).
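The fallback pattern looks something like this sketch (the helper functions are hypothetical stand-ins to illustrate the idea, not the actual ML Kit API):

```typescript
interface Label { text: string; confidence: number; }

// Hypothetical helpers for illustration; not the real ML Kit surface.
declare function labelOnDevice(image: Blob): Promise<Label[]>;
declare function labelInCloud(image: Blob): Promise<Label[]>;

async function labelImage(image: Blob): Promise<Label[]> {
  // On-device inference is fast, free, and works offline.
  const local = await labelOnDevice(image); // e.g. "bridge"
  if (!navigator.onLine) return local;
  try {
    // The cloud model can refine with more detail,
    // e.g. "Golden Gate Bridge" instead of just "bridge".
    return await labelInCloud(image);
  } catch {
    return local; // degrade gracefully to the on-device result
  }
}
```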

The APIs in the initial kit include:

  • Text recognition
  • Image labeling
  • Barcode scanning
  • Face detection
  • Landmark recognition
  • Smart reply

And you can expect a lot more to come!

Same great design system, themed your way

The last piece is the UI, and we have a major upgrade to the world of Material Design this year. People have noticed our own properties get large upgrades, such as the recent Gmail redesign, and this work is built on the back of the new Material Theming approach. Companies have long wanted to take the Material Design system and really brand it to their needs, and with the new theming system it is much, much easier to do just that.

The new system comes with new tools, such as the Theme Editor, which is one of a series of updates where we try to help you build rich experiences with more ease. All of the components are released as open source, for Android, iOS, Web, and now also for Flutter. Flutter announced beta 3, which brings large improvements and could be a perfect fit for your cross-platform needs.

Phew, this is only the tip of the iceberg of what we have in store for I/O this week. The teams have been working hard to build useful things and will share them throughout the week in great talks. We will be rushing to get the content up as soon as we can after the livestream.

One last thing. Watching and listening in is great, but there is nothing like touching the technology and seeing where things really are. This is why I always dart over to the codelabs area and try everything out.

Google I/O is truly about the integration of everything that we have to offer across our ecosystems. In that spirit, it’s most fun to see how these things work together and build bridges.

https://mobile.twitter.com/kelseyhightower/status/993846267976470528

I have always been drawn to open ecosystems, so I love that openness is baked into our DNA. I love looking at the history of Android, Chromium, TensorFlow, Kubernetes, Flutter, Polymer, Angular, AMP, gVisor, and the myriad of other initiatives that have open source at the core.


“platforms integrate
together in the open
all for us humans” — Stephen Colbert #io18

render() your consciousness

I recently woke up, remembering a dream that tied together the following:

  1. how does our consciousness surface a certain subset of the huge amount of input that is processed in the brain?
  2. how can we best handle all of the events in our computing systems, and render a UI that represents a valid state?
  3. how can we build a UI representation that helps users, with their consciousness, actually get something useful done?

It wasn’t shocking that I had this dream, given that I fell asleep listening to Anil Seth as a guest on Sam Harris’ podcast, which I turned on right after coding.

When we write programs that model the world, one of the areas we often struggle with is the notion of representing time, and how state changes over time. It is a source of many bugs, and has led many to look toward immutable state as a way to avoid footguns. Rich Hickey famously explains his experience on the topic in Are We There Yet?


How our consciousness batches time

When you zoom in on concurrency and time, you get into the notion of what it means for things to happen at the same time. At first blush this is a simple concept, but when you look deeper at how our consciousness deals with it, it is fascinating. As we get new tools to probe the brain, we are uncovering a lot of new information, such as learning that decisions have been made before you thought you made them.

Simple experiments show the layers of filtering and input manipulation that occur. If you touch your finger to your nose, the touch always “feels” like it happens at the same time for both digit and nose, yet the signal from your fingertip has a much longer way to travel than the one from your nose. Inputs from the nerves must reach the brain at slightly different times, and therefore at some point they get put together as a batch.

It appears that our component systems are taking in masses of input that we are processing, and there is competition for the right level of info to reach our consciousness. We can direct things at times (e.g. we can take control of our breathing, but fortunately don’t have to focus on it for it to keep happening), but mostly we are observing, and our consciousness is revealing what has happened.

Back to programming

This shares some similarity with painting the right UI for our users (and vsync). We can gather information from our components, batch together the state, and prioritize what to render. With the appropriate amount of complexity in our state, especially considering the difference between app state and UI state, it may make sense to use a system such as Redux to manage the batching, or maybe you prefer TJ’s state management library https://twitter.com/tjholowaychuk/status/957853652483416064.
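Here is a minimal sketch of the batching idea (not Redux itself, just the shape of it): actions flow through a reducer, and however many dispatches land within one frame get coalesced into a single render.

```typescript
type State = { unread: number; query: string };
type Action =
  | { type: 'MESSAGE_ARRIVED' }
  | { type: 'QUERY_CHANGED'; query: string };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'MESSAGE_ARRIVED':
      return { ...state, unread: state.unread + 1 };
    case 'QUERY_CHANGED':
      return { ...state, query: action.query };
  }
}

declare function renderUI(state: State): void; // your view layer

let state: State = { unread: 0, query: '' };
let scheduled = false;

// Many dispatches can land within one frame; render the batch once.
function dispatch(action: Action) {
  state = reducer(state, action);
  if (!scheduled) {
    scheduled = true;
    requestAnimationFrame(() => {
      scheduled = false;
      renderUI(state); // paint one consistent view of the batched state
    });
  }
}
```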

Elsewhere, it is common to debounce, sometimes with exponential backoff, to make sure we aren’t wasting a lot of time and resources.
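Both techniques fit in a few lines; a sketch:

```typescript
// Debounce: collapse a burst of calls into one trailing call.
function debounce<A extends unknown[]>(fn: (...args: A) => void, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Exponential backoff: retry a flaky async call with growing delays.
async function withBackoff<T>(fn: () => Promise<T>, retries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
    }
  }
}

// e.g. only hit the search endpoint once the user pauses typing.
const search = debounce(
  (q: string) => withBackoff(() => fetch(`/search?q=${encodeURIComponent(q)}`)),
  250
);
```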

https://github.com/joshwcomeau/redux-vcr

Immutability also has huge side benefits. We can save snapshots that enable improved testing and time travel debugging. Also, any excuse to bring back a VCR is surely a win 😉

Using the past to change the future

Having snapshots of state is one thing, but what about using old states to help you in the now? While we often save past state snapshots, we don’t often use them in rich ways to help change future state, but your brain does just that. Experiment after experiment shows the importance of context.

This video shows how you can be primed to see color in a black and white photo. There are examples of this across our senses: you can listen to some audio that sounds like gibberish, then hear the key words, and when you listen again you suddenly understand the gibberish.


Examples of illusions fill books, and are what magicians use to trick our senses. This all makes me wonder whether we are going to see richer ML models that help the user based on past context.

Good UI is magic

A lot of this trickery feels like magic and illusion. It is, and great UI is jam-packed with exactly this type of illusion. We can use motion and progressive rendering to make it feel like things are happening, keeping the user engaged. We can make sure to prioritize which parts of the UI to update and delay others.

Doing this correctly often requires you to take the time to map out the SLOs of the various pieces. For example, when a user visits an e-commerce product page, show the core information as soon as possible, and wait on data below the fold such as reviews. You should also consider the freshness requirements across these elements (e.g. the price has to be fresh, but you don’t need to block on that new review!).
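In code, the split might look like this sketch (the endpoints and render helpers are hypothetical):

```typescript
declare function renderCore(core: unknown): void;
declare function renderReviews(reviews: unknown[]): void;

async function renderProductPage(id: string) {
  // Above the fold: title and price must be fresh, so block on them.
  const core = await fetch(`/api/products/${id}`).then((r) => r.json());
  renderCore(core);

  // Below the fold: reviews are not freshness-critical; don't block.
  fetch(`/api/products/${id}/reviews`)
    .then((r) => r.json())
    .then(renderReviews)
    .catch(() => renderReviews([])); // fail soft; the page is already useful
}
```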

I am having a lot of fun learning about how the brain works and about our psychology, and I have a feeling that this knowledge is going to be useful for our industry as we get better at delivering UI that truly helps our users’ consciousness understand the view of the world that matters to them, so they can get done what they need to get done.

Oh, this sounds like a good excuse to watch Derren Brown explain the tricks of the trade: mentalism, cold reading, neuro-linguistic programming, cognitive illusions, and much more.

Isn’t building software products fun?

Gearing up the Web for 2018

It’s the time of year when you reflect and refocus. With the recent news of Edge and WebKit bringing service workers to their people, it is hard not to reflect on the long journey of bringing rich, some may say app-like, capabilities to the Web.

With 2017 sunsetting (in some ways: good riddance!), I thought back ten years to 2007. You may remember those simpler times, when Arnold was the Governator in California, Mad Men and Flight of the Conchords were on the scene, and Steve Jobs showed us his new phone.

In the world of the Web, I was working on Google Gears.

Any excuse to get the zipper image back!

This was a pre-Chrome world where developers who were trying to build bleeding-edge desktop web apps (Gmail, Google Docs, etc.) often reached for a browser plugin to deliver the functionality that they needed (Flash and Silverlight). We were seeing a resurgence in using HTML/CSS/JS to power the UI, with plugins granting access to some native capabilities (remember, XMLHttpRequest was born as an IE ActiveX component).

Picture yourself with a Gmail that nailed spam and search so well that Eudora was tossed aside. One restriction, though, was the reliance on a tether to the Internet. A next obvious step was to take Gmail offline (including that great search!) and improve all of the perf (always an issue ;).

Gears brought us the modules to do just this, and it is fascinating to see which primitives we ended up with:

  • A Database module (powered by SQLite), which could store data locally
  • A WorkerPool module, which provided parallel execution of JavaScript code
  • A LocalServer module, which cached and served application resources (HTML, JavaScript, images, etc.)
  • A Desktop module, which let web applications interact more naturally with the desktop
  • A Geolocation module, which let web applications detect the geographical location of their users (Google Maps, anyone?)

How many of these are still needed to pull off a great web experience, worthy of the home screen addition from a loyal user, even as we have moved to mobile and beyond?

But a browser plugin wasn’t the ideal solution. You want the platform itself to grab onto good ideas and bake them in (hopefully learning from the implementations to come up with something much better). Also, remember it was 2007: Safari arrived on iPhone, and Jobs said “nope” to the plugin world (and thus began the fall of Flash and Silverlight).

Let’s take a peek at the problems we were solving and the solutions we now have available, with the new context of mobile, and where we can be heading in 2018!


Dealing with data

The Gmail team dealt with much pain on the bleeding edge as everyone worked out how to store (sometimes large amounts of) data locally and sync it correctly with the backend.

We quickly codified SQLite into the platform via WebSQL, which didn’t quite stick. There were issues with baking a single implementation into a standard, but I also wonder how much gas the effort lost because the world jumped onto the “NoSQL” bandwagon. The many benefits of SQL have stood the test of time, and with new innovations such as Spanner, I do sometimes wonder if it wouldn’t be kinda great to have SQL back in the client toolbox?

Instead we got the likes of DOMStorage and IndexedDB, as well as many solutions on top of our primitives. Today we see a lot of usage of Firebase (including the new Firestore which has offline support on the Web!) and there is also large mindshare in the GraphQL space, with great clients such as Apollo.
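IndexedDB remains the workhorse primitive underneath many of these; a raw sketch of opening a database and storing a record shows why so many libraries wrap it:

```typescript
// Raw IndexedDB is verbose and event-based rather than promise-based.
function openMailDB(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('mail', 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore('messages', { keyPath: 'id' });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveMessage(msg: { id: string; body: string }) {
  const db = await openMailDB();
  const tx = db.transaction('messages', 'readwrite');
  tx.objectStore('messages').put(msg);
  return new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```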

So, we have a lot of work going on with respect to data, both on the client and how to sync back to the backend, but I think we will see new patterns emerge in 2018 that will put everything together.

Working off thread

As we constructed richer applications in JavaScript, accessing local capabilities like databases, we needed to be able to do this work away from the UI thread, or else the apps became unresponsive. Thus, Gears brought us WorkerPool, which morphed into the Web Workers standard.
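The shape of that standard is simple: spawn a worker, pass messages back and forth, and keep the expensive work off the main thread. A sketch (buildSearchIndex is a hypothetical stand-in for your heavy lifting):

```typescript
// main.ts
const worker = new Worker('indexer.js');
worker.postMessage({ emails: ['…'] });
worker.onmessage = (e) => console.log('index built:', e.data);

// indexer.js — runs in parallel, with no access to the DOM.
declare function buildSearchIndex(emails: string[]): unknown;

self.onmessage = (e: MessageEvent) => {
  const index = buildSearchIndex(e.data.emails); // expensive CPU work
  (self as any).postMessage(index); // worker-scope postMessage
};
```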

Doing less work on the main thread continues to be a key bottleneck, especially in a world of low-power CPUs! Philip Walton just posted on this from the vantage point of interactivity, and the lack thereof, and we have exciting work landing and in progress.

Browser vendors are hitting this head on in their implementations, with large architecture changes (e.g. Quantum has a lot of this baked in) as well as tactical upgrades (e.g. image decoding happening elsewhere).

We are also seeing new standards that bake in new abilities, from tweaks such as image decoding, to large additions such as the awesome Animation Worklet that will truly change the game.

When it comes to new capabilities, we often follow the pattern of:

  • Step one: make it possible.
  • Step two: make it easier.
  • Step three: make it perform.

This is one reason I got excited about Surma’s exploration, Comlink, which gives you the same API across windows, iframes, Web Workers, and Service Workers.
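A minimal sketch of the Comlink idea (the exact import spelling has shifted across versions, so treat this as illustrative): expose an object in the worker, and call it from the page as if it were local.

```typescript
// worker.ts
import { expose } from 'comlink';

const api = {
  async search(query: string): Promise<string[]> {
    // heavy work happens here, off the main thread
    return [`result for ${query}`];
  },
};
expose(api);

// main.ts
import { wrap } from 'comlink';

async function run() {
  const remote = wrap<typeof api>(new Worker('worker.js'));
  const results = await remote.search('gears'); // looks local, runs off-thread
  console.log(results);
}
```

Speaking of Service Workers…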

A Local Serv…ice

The browser was built as a client desktop application that sat on a nice fat university pipe. I remember that world fondly: I had a Sun Ultra as my workstation, and I could also log in to any machine on campus and have my X windows show up just as they were.

The network was never perfect, but as it diversified, and as our devices walked around with us, we needed to handle the lack of network progressively. The Gears LocalServer module came with a couple of cool features. One of them was the Gears JSON manifest file, somewhat akin to AppCache, which allowed for a declarative approach. Declarative APIs are great until you need to break out of the box; then you long for an imperative API that the declarative one builds on. LocalServer gave you a programmatic API too, but it was much more restrictive than what we have today in the powerful Service Worker, where you get to fully own the networking layer. With this power comes great responsibility, and it is one you need to think through up front, making sure that you have the correct kill switches in place and processes to fully test.
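A minimal sketch of owning that layer: pre-cache an app shell at install time, then answer fetches cache-first (event types loosened for brevity):

```typescript
const CACHE = 'app-shell-v1';

// Pre-cache the app shell when the Service Worker installs.
self.addEventListener('install', (event: any) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js', '/app.css']))
  );
});

// Cache-first: serve from cache, fall back to the network.
self.addEventListener('fetch', (event: any) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```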

Service Workers are a perfect example of the Extensible Web in action: a powerful building block for us all to build on top of, including the web platform itself, as we have seen with Web Push and friends.

Native capabilities

When the Web is at its best, it allows you to reach the broadest set of users, but also allows them to experience it on their terms. This is why a UI change to make image picking better (often feeling more akin to other image picking on that particular device) can have a large impact. It is also why I have been a fan of extensions, which allow users to customize their experience and break out of the A/B-optimizing machines.

Gears had a Desktop module that allowed you to drop an icon for your web application that could now serve locally and store data via SQLite.

Home screen access is even more important in the world of mobile, so that is where we first saw this support via the Add To Home Screen API. This doesn’t mean that I don’t want it on my desktop too though, and I look forward to great desktop PWA support in 2018!
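On the Web side, the hook for this is the beforeinstallprompt event; a sketch of capturing it so you can offer installation at a moment of your choosing (the install button is your own UI):

```typescript
// The event isn't in the default TS lib, so keep it loosely typed.
let deferredPrompt: any;

declare function showInstallButton(): void; // your own UI affordance

window.addEventListener('beforeinstallprompt', (e) => {
  e.preventDefault();   // hold the prompt instead of showing it immediately
  deferredPrompt = e;   // stash it for a moment the user chooses
  showInstallButton();
});

async function onInstallClick() {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  const choice = await deferredPrompt.userChoice;
  console.log('A2HS outcome:', choice.outcome); // 'accepted' or 'dismissed'
  deferredPrompt = null;
}
```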

Gears drove some of these native capabilities, and in general I like to see developers being able to push the boundaries, with the platform standardizing the good ones later. This is one reason that I am keen to see how developers use Trusted Web Activities, which give you the ability to bridge native and Web in a new way.


Now we have seen how the specific Gears modules have baked themselves into the core platform. It is also interesting to look at the areas that weren’t a focus then, and how we are now going well beyond the issues of the day.

The Google Apps of the day were large codebases. Throngs of JavaScript were being flung, and Closure was the weapon of choice to keep the codebase sane. We got through the ES4 detour, and I am happy to see that with ES2015 we have put our requirements into the language rather than into comments.

Layout wasn’t as much fun back then. Without Flexbox, we had to resort to a lot of tricks that required JavaScript, and we had to manually (read: using a lot of resources :/) check on what was in view, versus using APIs from the Observer family (check out Eric’s examples of the new ResizeObserver addition!).
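ResizeObserver lets the browser tell you about size changes instead of you polling for them; a sketch of a component-level “media query”:

```typescript
const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const { width } = entry.contentRect;
    // Adapt layout per element, not per viewport.
    entry.target.classList.toggle('narrow', width < 400);
  }
});

document.querySelectorAll('.card').forEach((card) => ro.observe(card));
```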

These days we not only have Flexbox in the toolbox, but we also have the fantastic CSS Grid, and one fun way to master it is by “playing” with Dave’s Grid Critters.

https://mobile.twitter.com/trueadm/status/944908776896978946

When it comes to bringing capabilities wrapped up in native code, we now have WebAssembly at our fingertips. I am particularly curious to see what gets wrapped and made available for the masses.

And what better way to do the wrapping than via Web Components? We have long dreamt of the ability to reuse rich UI components that were fully tested and had high-quality accessibility support. It used to be frustrating to see an amazing YUI widget that you couldn’t bring into Closure land. We are finally getting to the point where this is no longer an issue, and with SkateJS, Stencil, and Svelte joining Polymer and friends, we can share and compile like never before; it is starting to feel like 2018 is the year of the Template Literal Libraries.

When you put this together, we can have the semantics of <something-cool> that could be wrapping rich UI and WebAssembly code under the hood.
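A sketch of what that could look like: a custom element that lazily instantiates a (hypothetical) cool.wasm module and renders its result.

```typescript
class SomethingCool extends HTMLElement {
  async connectedCallback() {
    this.textContent = 'loading…';
    // cool.wasm is hypothetical; assume it exports an answer() function.
    const { instance } = await WebAssembly.instantiateStreaming(fetch('cool.wasm'));
    const answer = (instance.exports.answer as () => number)();
    this.textContent = `The answer is ${answer}`;
  }
}

customElements.define('something-cool', SomethingCool);
```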

We still have much to do, but the pieces are coming together to deliver rich experiences that can be constructed and combined by creators. In a world where we seem to spend more time consuming, I sincerely hope for 2018 to be a turning point for more of a read/write Web, and world.

Not only are we seeing the features come to light, we are seeing browser vendors coming together to implement them with haste, a huge difference from the time of Gears and Chrome Frame.

So, here’s to more green in 2018, something that I was excited about in the Chrome DevSummit keynote that I got to give with Ben:

I am curious to hear what you are looking for in 2018. It has been a lot of fun to read existing thoughts from the various end of year countdowns, and a big thanks to Mariko for inviting me to write this for http://web.advent.today/!