Slicing through the Web with seamless Portals

On native devices, I find myself naturally following the pattern of:

  • home screen
  • launch app
  • back to home screen
  • launch app
  • repeat

Sure, there are times when I bounce from app to app, but it often feels heavy, and most of the time the app is just showing some Web content anyway.

On the Web this happens too (new tabs), but I really feel like I am surfing the Web when I am flowing through experiences… tap to tap to tap. That flow lets me get a task done across multiple services in an enjoyable way.

You can also feel this when you see experiences that compose together. We get this naturally through SDKs that will embed on pages, which may use iframes or direct embedding. I remember the mashup generation, where there was an explosion of wiring things together. I loved playing with Yahoo! Pipes in this vein too, and feel like composable blocks can enable a lot of innovation.

However, the “C” in SLICE deserves more love. The flowing nature of the Web, and the way that pieces can be entwined, is pretty special. It is what makes it feel like a real Web, rather than a series of domains with a launch as the entry point to each of them.

This connectivity, sometimes loosely coupled (sites can embed or link to you without you ever knowing) and sometimes explicitly coordinated between a couple of parties, gives us our commons.

One of the reasons that Alex Russell speaks so strongly on the performance of the Web is because of this commons and how we affect each other. Every time a user interacts with a web site, it ticks some state on how they feel about the overall experience of the Web. As you surf through three connected sites, if the second one is really slow, what is the impact? How much does it matter that the other two were instant? This is subtly different from judging a platform on the quality of its individual experiences.

If there is a huge variety in interactions, will we be more likely to see more composition? I already feel like I used to browse around more, and we had classics such as web rings that are the perfect example of one site flinging you on to the next, building momentum. Now it feels like the modus operandi is often “keep users in my experience!”, which I know is largely due to dominant monetization models. This gets me thinking about attribution systems, baked in, that would incentivize linking out again.

The Empty, White Page

The other side of the user experience is the advent of the SPA. While it is hard to load one up without a huge initial cost, once it is up and running you, the developer, have full control over routing and navigation throughout your experience.

Stuart Langridge gave a great talk on this, which is typically entertaining to boot. He talks about one of the key problems with flowing between experiences: we tend to break the linkage with a bright white empty screen when going between domains.

This is like being blinded as you surf, and makes you feel disconnected as you navigate.

What if we could set up lovely hand-offs between navigations? What if you could use these across your own site, or set of sites, without being forced into an SPA architecture or into rolling your own PJAX?

https://storage.googleapis.com/web-dev-assets/hands-on-portals/portals_vp9.webm

This is why I am excited about Portals. We finally have a way to fix up the seams, in interesting overlapping ways. Each side of the navigation can talk to the other, enabling a lot of new innovation in how you surf.
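
To make that concrete, here is a minimal sketch based on the Portals explainer. The element is behind a flag and has no TypeScript typings yet, so the cast and the URL below are illustrative assumptions, not settled API:

```ts
// A <portal> renders a live, embedded preview of another page, and can be
// "activated" to become the top-level document: a seamless navigation.
const portal = document.createElement('portal') as HTMLElement & {
  src: string;
  activate(): Promise<void>;
};
portal.src = 'https://news.example.com/next-article'; // hypothetical URL
document.body.appendChild(portal);

// On tap, promote the preview into the full page: no blank white flash.
portal.addEventListener('click', async () => {
  await portal.activate();
});

// Inside the embedded page, window.portalHost lets the two sides talk,
// e.g. (window as any).portalHost.postMessage('about to activate');
```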

Finally, you can get fantastic UX with a very loosely coupled architecture and codebase. This is huge, especially for large teams. We have built up so much machinery in the name of letting a massive site, with separate teams for /product, /checkout, and /search, bundle, build, and ship separately. Or across subdomains, or different sites in a shared portfolio.

This is why I am excited about us working together to build out Portals, and I hope you have a play and give the community feedback. Paired with web packaging, you can be flowing through the Web like Spiderman through Metropolis, consuming packages from a nearby CDN.

We are always building out capabilities to help you get the most from the native platforms your users are on, but I *really* love thinking about what makes the Web special and different, and I think there is so much more that can be done through this notion of the Web and composition.

The difference between Platforms and Ecosystems

tl;dr: “What’s the difference between a platform and an ecosystem?” This simple question resulted in an ecosystem strategy for connecting the sub-ecosystems that work on the Web. What if we lean in and deeply connect our tools, services, frameworks, and platforms… and align on the right incentives for a healthy web?

I work in a product area of Google that is called Platforms and Ecosystems, and I sometimes reflect on the question “what’s the difference between a platform and an ecosystem?”

There are many differences, but I have found myself diving into the extra complexity of a rich ecosystem.

With platforms we often think in layers, each layer building on the one below, with the interfaces mostly at the boundaries. This is a nice abstraction when done right, as it isn’t leaking all over the place.

The layers view of a platform

An ecosystem often forms on top of a platform, or if it gets connected enough, you get combinations. Sub-ecosystems form that have their own fractal view of the world, and they may connect with multiple technical platforms (e.g. the React ecosystem touching Web, iOS, Android, Desktop, ….).

The Web is complex enough that it is one of the canonical examples of an ecosystem that has many sub-ecosystems. You can even slice “the web” itself into pieces including the technical client platform that is embedded in almost every native platform there is (WebView anyone?) as well as the connected open Web, where many technical stacks are available to run as well.

A week ago, I had the pleasure of spending some time with representatives from major global CMSes, and they are a fantastic example of sub-ecosystems.

We spoke of ecosystem loops, and how an ecosystem has so many more connection points that are not restricted to layers, and are multi-directional.

These connection points, when strong, can enable a healthy web ecosystem. This realization has changed the way that we work, getting us thinking about how we can truly enable healthy connections.

Foundationally, we obviously want to build web platform APIs that work for as many ecosystems as possible, whilst also pushing the web forward. We also work to build clear guidance on web.dev that helps developers understand what is possible on the web across key pillars and principles. And we want to deliver new and better tools that help you create and debug, and that surface actionable insights (e.g. DevTools, Lighthouse, CrUX).

Speak the right language

Lighthouse Stack Packs can speak your language

One of the problems we see is that there is often an impedance mismatch between the layer of abstraction that a developer uses (e.g. a particular framework, set of libraries, or backend infra) and the feedback that tools and services surface.

We are on a journey across our portfolio to fix this, and the best example is probably Lighthouse:

  • Lighthouse plugins allow you to build on the platform to offer your own audits and rulesets, enabling you to enforce your own criteria or requirements (see the sketch below this list). For example, a framework may have important linting tests, or an ecommerce platform may have retail-oriented audits.
  • Lighthouse Stack Packs bring an understanding of the ecosystem into the existing core rules. Don’t give generic guidance on next-generation images when you could give custom guidance for particular plugins, or point the developer to a setting in the admin console.
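
To sketch the first point: per the Lighthouse plugin docs, a plugin is just a node module that declares audits and contributes its own category to the report. The plugin name, audit, and weight below are hypothetical:

```ts
// Hypothetical lighthouse-plugin-retail/plugin.js
module.exports = {
  // Each entry points at a module exporting a Lighthouse Audit subclass.
  audits: [{ path: 'lighthouse-plugin-retail/audits/product-image-alt.js' }],

  // The plugin's category appears alongside Performance, SEO, and friends.
  category: {
    title: 'Retail',
    description: 'Audits tailored to ecommerce storefronts.',
    auditRefs: [{ id: 'product-image-alt', weight: 1 }],
  },
};
```

Run it with something like `lighthouse https://shop.example --plugins=lighthouse-plugin-retail` and the custom category shows up right in the report.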

Elsewhere we are thinking of similar evolutions:

  • web.dev offers framework-specific guidance, but what if the base guidance changed based on your needs?
  • There are many Chrome extensions for developers that extend DevTools, but what if there were a strong plugin API through which knowledge could be brought into the core tools?

I am excited to work with the ecosystem to enable this kind of intelligence for our collective developers.

Show me the incentives!

We can make life much better for developers, but we have also learned that this isn’t enough to fully elevate the web together.

We want to see the flywheel moving nicely on the above ecosystem loops, with users engaging with high quality content from developers and their sites and platforms.

As platforms, how do we enable this outcome? It’s hard, but we have learned a lot from the work that the ecosystem did to move to HTTPS.

Moar TLS!

It may be hard to think back to the days when HTTPS wasn’t the norm and only 30% of traffic was secured. How could we make it the vast majority? I remember hearing thoughts such as: “it can’t be done!”, “people just don’t actually care enough!”, “it’s too hard!”

In retrospect, it’s interesting to reflect on the parts and pieces that I think collectively drove things forward:

#1 Knowledge and Insights
There was a lot of content on *why* this is important, and *how* to do it. I remember it being really quite hard to do, and also waking up in a sweat on a couple of occasions with my brain racing: “did my certificate expire?”

#2 Tooling
The ecosystem jumped in to make it MUCH easier. Let’s Encrypt changed the game, and servers and hosts jumped in too.

#3 Demand
Now it’s easier, and it’s the right thing to do, but companies have huge backlogs. How do we make “the right thing to do” blindingly aligned with what is good for both users and the company?

We need the right incentives, and this was the final puzzle piece. How do we reward developers that do the right thing? Some examples here are:

  • Browser UI surfacing quality: At first browsers changed their UI to highlight the good (e.g. showcasing “secure”), and then… over time… deliberately… switched to the point where the default is secure and we highlight the insecure.
  • HTTPS required for API usage: With Service Workers, and powerful capabilities such as those being worked on via Project Fugu, we need to make sure that we aren’t setting users up for attacks. This keeps our users secure, and also incentivizes developers to move to HTTPS to gain access to said APIs (see the sketch below this list).
  • Acquisition: high-quality content is better for our users, so we want to make sure that Google Search is driving engagement to that content. When HTTPS was added as a ranking factor, we noticed an increase in adoption.
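
The incentive in that second point is easy to feel as a developer: on plain HTTP the API simply is not there. A tiny sketch (the service worker path is a placeholder):

```ts
// Service workers, like most new powerful capabilities, only exist in
// secure contexts, so feature detection doubles as an HTTPS nudge.
if (window.isSecureContext && 'serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js'); // placeholder path
} else {
  console.warn('Not a secure context: service workers are unavailable.');
}
```
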
It has been great to see the sum of the parts add up to the great adoption of HTTPS, and we continue to push.

Moar Incentives!

How do we learn from HTTPS and bring this to the other areas of quality such as performance?

We will make sure that:

  • We will be clear about the metrics that we think represent quality
  • We will surface data and insights for developers and decision makers across our suite of products, and make them accessible to third-party tooling
  • We will be very deliberate with carrots and sticks, so everyone has time to plan
  • We will work with the sub-ecosystems to align on all of the above, so we row in the same direction. For example, if you are an established CMS provider, it isn’t solely important to track how many themes and plugins you have in your marketplace, but also the quality of their output, and how you can use said marketplace to share the same incentives (surfacing the right metrics, etc.).

How can we build connections that will strengthen us all? Do you see anything we should be doing? I am all ears.

The Order of the JSON

When I read Fitz’s tweet about ordered JSON, my body shuddered, as my brain was flooded with a past experience that taught me a frightening lesson about technology being used by non-technical companies.

It was a moment that had me wonder:

  • How does the world not break due to technology more often?
  • How much time and money is wasted due to ignorance at some part of the development cycle?

Ok, here goes. My frightening tale around the order of the JSON. I was working on a project for a large Fortune 500 company (I will withhold the name to protect the innocent!).

The project at hand required integrating a modern front end (mobile apps and web) with a legacy system. This isn’t legacy in a “the engineering team really wants to rewrite it in the new shiny” way, but in a “there is COBOL running back there somewhere, isn’t there” kinda way.

Much of the system was so old that it was hard to find anyone who knew how it actually worked, and its maintenance had been outsourced to some of the typical IT outsourcing companies of the time.

At first all was well. We had created mock services that spoke the protocol, and we were building against them. We were dealing with pretty simple REST calls with JSON, so it wasn’t like we were all SOAP-y.

We wanted to integrate with the real systems as soon as they were ready, of course, and that’s when it got fun. I remember the first time we spoke to the system and got a terse error message:

ERROR 1000294

That was it. No more info. We asked the service folks for more context, and *two weeks later* we were told “Oh, the JSON payload that you sent us was in the wrong order”.

Huh? Why would you care what order we were sending the name/value pairs in? JSON defines an object as an *unordered* collection of name/value pairs, after all.

We asked if they could fix this and be more flexible. They said they would get back to us.

It turned out they did an “LOE analysis” (level of effort, in case you haven’t had the privilege), and came back to tell us that they would fix it, but it would take 9 people 6 months to complete. With a straight face. In a way that signalled that this is the cost of doing business and it happens all the time.

We couldn’t wait that long, so we thought about creating a custom emitter that would indeed order the JSON. We didn’t want to do it on the client, because what if something changed? We had no faith that this “order” was set in stone forever.

Ok, let’s handle it on the server then… and have continuous testing to make sure that if anything changed we would know right away. Not the end of the world, I guess.
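
For what it is worth, that server-side emitter would not have needed much machinery. JSON.stringify already accepts an array replacer that both filters and orders keys, so a sketch (with hypothetical field names) could look like:

```ts
// The gateway's mandated key order (hypothetical fields). Note that an
// array replacer applies at every nesting level, and silently drops any
// key it does not list.
const GATEWAY_KEY_ORDER = ['accountId', 'amount', 'currency'];

function emitOrderedJson(payload: Record<string, unknown>): string {
  return JSON.stringify(payload, GATEWAY_KEY_ORDER);
}

// Keys come out in the mandated order, regardless of insertion order:
emitOrderedJson({ currency: 'USD', amount: 100, accountId: '42' });
// → '{"accountId":"42","amount":100,"currency":"USD"}'
```
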
But it still didn’t feel right. I got on a plane to visit the site where this was all going down.

I mentioned that it was hard to find people who knew how these systems worked. I have previously mentioned how QA engineers are underrated, and they came to my aid again here. I hunted down the QA lead, and he took me on a tour of how he tests the systems.

I got to learn that we were talking to a service running IBM DataPower Gateway, which sat on top of WebSphere, which sat on top of the COBOL.

He was showing me the DataPower GUI and let me play around with it. After a while of drilling around and going through settings, I got to a hidden advanced setting that was a checkbox asking:

Validate order of JSON? [X]

I unchecked it. Ran a client to post the JSON to that instance. It worked just fine.

And then I sat back and contemplated how this type of event was probably occurring daily, all over the world, while some consulting companies were getting millions of dollars.

Finally, I was so curious about where the root of this came from that I did a search for ordered JSON and ended up on this IBM javadoc.

Of course! I am sure you have never run into scenarios like this in your time in tech, have you?