AI/ML and the important role of forgiveness in technology evolution
There is a lot of chatter about AI and machine learning, and recently about the specific user experiences they open up, such as chat bots, as shown by Benedict Evans:
Others are discussing the overall role of machine learning and how it is becoming a vital component in all great experiences.
So, it is fascinating to hear the perceptions of how close we are to a particular breakthrough, and when an application of ML can actually be used in production.
For some examples you see real-world use cases right now; for others it feels like we are so far away. Why is that? It is partly due to the complexity of the solution, but I also think there is another important element… the forgiveness and resilience that the solution can have. If you can provide forgiveness, it is easier to ship ML today and iterate on it.
Beyonce’s Law is the forgiveness curve of a solution.
Let’s look at some examples.
A Google search has a lot of forgiveness built in. When it launched, it gave back a series of blue links, and a human could help select the result from there (that human being: you). Compare this to a search engine that only has the “I’m Feeling Lucky” feature, where you don’t get that option. Over time a search engine can improve, and there is a high likelihood that you will tap on the top result or two, but nothing is perfect and you still play an important role.
Chances are that you will find a result that works for you, so you will feel pretty satisfied if you tap and get some kind of answer to your query. This satisfaction will occur even if there is actually a “better” answer elsewhere. If you are looking to define a word and you get to dictionary.com, chances are you are OK with that, even if Wordnik had something more compelling.
One thing you really care about is performance, and top search engines always respond quickly. This gives you the confidence to come back for more. Google has always known this, and Jeff Dean has talked about all of the layers of resilience that Google has to make sure that it gets good results back to you.
Ilya brought this up in the context of modern web applications just last week at Google I/O and reiterated that:
Search of 99.9% of docs in 200ms is better than 100% in 1000ms
Perfect is very much the enemy here, and it is important to realize that. Now, search has evolved a lot over the years, and we see more and more one-box answers that try to solve your issue there and then; that bar is much higher. Get it wrong and you lose trust, but all in all search has a high forgiveness factor.
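To make the "99.9% in 200ms" idea concrete, here is a minimal sketch of deadline-bounded fan-out: query every shard in parallel, keep whatever has arrived when the deadline hits, and drop the stragglers. The `query_shard` helper and shard objects are hypothetical stand-ins, not any real search API.

```python
import concurrent.futures

def query_shard(shard, query):
    """Hypothetical shard call: returns a list of (doc_id, score) hits."""
    return shard.search(query)

def search_with_deadline(shards, query, deadline_s=0.2):
    """Fan out to all shards, but return whatever arrived by the deadline.

    Slow or failed shards are simply dropped: partial coverage in 200ms
    beats full coverage in 1000ms.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(shards))
    futures = [pool.submit(query_shard, s, query) for s in shards]
    done, not_done = concurrent.futures.wait(futures, timeout=deadline_s)

    results = []
    for f in done:
        if f.exception() is None:  # a failed shard is also forgiven
            results.extend(f.result())
    pool.shutdown(wait=False)  # don't block the user on stragglers
    return sorted(results, key=lambda hit: hit[1], reverse=True)
```

The forgiveness lives in the caller: because a human picks from the blue links anyway, results from 99.9% of shards are indistinguishable from results from all of them, almost all of the time.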
Hints on Maps
Another example of something with high forgiveness is hints on Google Maps. I am sure you have seen an area where a few landmarks are highlighted for you.
Sometimes the selection feels magical: “wow, it put my kid’s school and the lil coffee shop I always go to!” Yet there may also be other landmarks that don’t make sense to you, but chances are you just don’t notice them.
In the map above, I see areas that make sense to me (Googleplex, Shoreline movie theater), but I also see Costco, a store I have never been to in my life. Some of these may be personalized, but many may just be popular enough to show almost anyone.
The Maps team can slowly tweak things and get better and better at showing me what I need for my current context. A great forgiving experience.
Go and DeepMind
Winning at Go is an epic achievement in computing, and it was very exciting to watch. It has also been fascinating to see how Lee Sedol has been playing so well after that match, winning his last series of matches and talking about how he feels his game has changed.
As amazing as DeepMind is, it got to a level of “good enough” to beat the world’s best player over time, but who knows how much further it can go. You can never tell if certain moves are “the best move”, as the game is so complex.
So, what else is on the low side of the forgiveness curve, maybe even lower? I think that conversational UIs (and thus chat bots, voice UIs, and the like) are right down there.
When a human interacts with a bot, they are looking for the mistake, and the minute one comes back they often throw up their hands with “a-ha! See! I can’t trust it. I am done!”
You see a lot of demos of chat bots and they truly seem magical, but have you gotten your hands on one and fully gotten there yet? I was using one recently, and even on a happy path it quickly got confused and started telling me about restaurants on “Pork Lane” instead of restaurants that have good pork sandwiches.
It is just bloody hard to nail it time and time again, with each interaction. How can you hack something to be a lil more forgiving? Is it enough to have humans behind the bots, able to jump in from time to time to take control? It feels like for plenty of use cases this would allow the bar of the bot to be a bit lower, but that obviously comes with its own issues.
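One way to sketch the "humans behind the bot" hack: route anything the model isn't confident about to a person instead of guessing. The toy keyword classifier and the 0.75 threshold below are illustrative assumptions; any model that returns an (intent, confidence) pair would slot in the same way.

```python
# Assumed threshold: tune per product, depending on how forgiving
# your use case actually is.
HANDOFF_THRESHOLD = 0.75

def classify(message):
    """Toy stand-in for a real intent model: returns (intent, confidence)."""
    rules = {
        "book": ("book_table", 0.92),
        "pork": ("find_food", 0.40),  # ambiguous: the dish, or "Pork Lane"?
    }
    for keyword, (intent, confidence) in rules.items():
        if keyword in message.lower():
            return intent, confidence
    return "unknown", 0.0

def handle(message):
    intent, confidence = classify(message)
    if confidence < HANDOFF_THRESHOLD:
        # Don't guess and burn trust: escalate to a human agent,
        # passing along the bot's best guess as context.
        return {"route": "human", "context": {"message": message, "guess": intent}}
    return {"route": "bot", "intent": intent}
```

The design choice is that a handoff costs you an agent-minute, while a confident wrong answer costs you the user.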
Having other UI that works alongside voice is also a useful technique. It allows you to get deep to an action with voice, and then go through some selection process that could be more suited to tactile feedback and the UI of a screen.
We saw the pain of voice systems in telephony. How many people jumped to hit “0” to speak to a human rather than go through the phone tree? How many of you still do that? At least it gave these companies a line of defense, and they can keep improving their offerings to a point where they can actually be much *better* than humans, since they can interface with multiple systems more easily than the human on the other end.
We can see where this is going, and we see glimpses of this with voice tech such as Alexa. Going from simplish commands to rich back and forth interaction is going to be fun to see, as is solving the current discovery problem. How do you know what capabilities you can tap into with voice? Trial and error.
The a-ha! Moment
For use cases where it is hard to have forgiveness, you often just need to keep grinding and making progress.
With the iPhone, hard work and engineering over generations got us to the point where the electronics were small enough to fit the form factor, screens were sharp and responsive enough, and touch sensing had the right level of sensitivity. And all of this could be produced at industrial scale, with systems that could handle daily usage.
We will always see “it won’t happen for ten years!” and sometimes get surprised. Those surprises will happen most often for scenarios where the forgiveness factor is low, and thus it is hard to get into market.