It is easy to make the mistake of focusing on the UI, and “what you can see”, when you think about the service that you are offering to people.
The UI is the last mile of a great user experience. You could have an amazing backend, but if the last mile is unusable then it is all for naught, so the focus on great frontends is vital. However, you can also have the most amazing frontend sitting on top of an awful backend, and that isn’t going to do the job either.
When the most recent mobile explosion occurred, it unleashed new opportunities thanks to the capabilities of a mobile device in your hand, sensors galore. The form factor required businesses to adapt, and new models became possible. This last mile opened minds to services such as Uber, where you can call a car to you wherever you are. Beforehand it would have been useful as a desktop service, but nowhere near as useful. In fact, it could have been very painful:
“Hmm, I am waiting by the curb and the car is 5 mins late… do I run inside to my laptop to see what is happening?”
That last mile also relies heavily on the backend: a system that solves a traveling-salesman-style optimization problem across the entire ecosystem, so a user gets a car quickly and a driver doesn’t have to wait too long between rides.
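To make that concrete, here is a minimal sketch in Python of the simplest version of the matching side of that problem: greedily assigning a rider the nearest idle driver. The `Driver` type and the greedy strategy are my own illustration, not Uber’s actual dispatcher, which optimizes across all riders and drivers at once:

```python
import math
from dataclasses import dataclass

@dataclass
class Driver:
    id: str
    lat: float
    lng: float
    idle: bool = True

def distance(lat1, lng1, lat2, lng2):
    # Rough planar distance; a real system would use road-network travel times.
    return math.hypot(lat1 - lat2, lng1 - lng2)

def assign_nearest_driver(rider_lat, rider_lng, drivers):
    """Greedy baseline: pick the closest idle driver for one rider.

    A production dispatcher optimizes across *all* riders and drivers
    at once (minimizing total wait and idle time), not one at a time.
    """
    idle = [d for d in drivers if d.idle]
    if not idle:
        return None
    best = min(idle, key=lambda d: distance(rider_lat, rider_lng, d.lat, d.lng))
    best.idle = False
    return best
```

Even this toy version hints at why the backend matters: the quality of the matching, not the button you tap, is what makes the car show up quickly.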
When I think about the type of college course that I would want to send my kids through, it wouldn’t focus solely on the front-end, but would instead walk them through the entire process, opening their eyes to what it takes to deliver a truly fantastic experience for everyone in the ecosystem. Today, this means they need to understand what is possible not only with the capabilities of a range of devices, but also on backend clusters.
We are well on the way to democratizing front-end app development, and a good mobile UX has become table stakes for a modern service, so what is next?
I believe it is time to democratize machine intelligence. If you look at the largest category winners on the Internet, they all tend to have fantastic machine intelligence capabilities:
- Google Search: search relevance
- Gmail: filtering (e.g. spam), and smart actions
- Google Maps: smart POIs, real-time directions
- Netflix: recommendations
- Amazon: recommendations
- Facebook: feed filtering
Of course, these are just the tip of the iceberg. Each of those companies uses machine intelligence all over the map. When Google Photos came out, it had a nice UI, but what gave it the world-class UX was that it could give me results for [my son wearing yellow on his birthday].
Many could build a UI that equals Google Photos, but not many could deliver the search magic.
We have commoditized much on the server side. We have abstracted away compute, storage, and a myriad of other services that used to be hard to deliver, let alone scale. We got used to building simple CRUD-based apps, and our “search” features started out as wrappers around SQL queries. Then we got great software such as Lucene, which gave us new primitives to customize our search experience.
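For contrast, here is roughly what that first generation of “search” looked like, sketched in Python with the built-in sqlite3 module: a LIKE query over a text column. The table and data are made up; the point is that there is no ranking, stemming, or relevance here, which is exactly the gap Lucene-style inverted indexes filled:

```python
import sqlite3

# An in-memory table standing in for a typical CRUD app's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO posts (body) VALUES (?)",
    [("birthday party photos",), ("yellow shirt on sale",), ("search relevance notes",)],
)

def naive_search(term):
    # The classic early "search feature": a substring match, no relevance ranking.
    return conn.execute(
        "SELECT id, body FROM posts WHERE body LIKE ?", (f"%{term}%",)
    ).fetchall()

print(naive_search("birthday"))  # [(1, 'birthday party photos')]
```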
We have many other primitives for machine intelligence now. We have the big data side with Hadoop and friends, and nice abstractions such as TensorFlow, but we are also seeing higher-level solutions.
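As one small example of what those abstractions feel like, here is a tiny TensorFlow (Keras) model; the shapes and data are placeholders of my own, and the point is just how little code now sits between you and a trained network:

```python
import numpy as np
import tensorflow as tf

# A toy two-class classifier; the abstraction hides the gradients,
# compute graph, and hardware details that used to be the hard part.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder data just to show the training loop is one call.
x = np.random.rand(100, 20).astype("float32")
y = np.random.randint(0, 2, size=100)
model.fit(x, y, epochs=1, verbose=0)
```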
Going back to Google Photos, you can actually deliver much of that magic by using the Cloud Vision API.
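As a sketch of what that looks like in practice, here is roughly how you would label a photo with the google-cloud-vision Python client. This assumes a project with the Vision API enabled and credentials configured, and exact names can vary by client-library version:

```python
from google.cloud import vision

def label_photo(path):
    """Ask the Cloud Vision API what is in a photo.

    Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account
    key and the Vision API is enabled on your project.
    """
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

# e.g. label_photo("birthday.jpg") might return labels such as
# [("birthday cake", 0.97), ("yellow", 0.84), ...]
```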
Machine intelligence is becoming table stakes, and we need the primitives and higher-level abstractions to democratize it. This is one reason why I am excited to be at Google right now, as we are getting to the point where we can reach the vision of external developers being able to build experiences that are as powerful as the ones that we can build internally.
I love this vision partly because I think it is vital to spread these capabilities as widely as possible to enable thriving competition.