A conversation between Davio Larnout and Josh Lovejoy, UX Manager at Google
The hype around AI has only increased in the last few years. While this craze has produced many breakthroughs, new use cases, and new technology, our fast-moving industry has sometimes been slow to address certain challenges.
Fairness in AI, how to deal with bias, and other important Corporate Social Responsibility (CSR) considerations are sometimes left out of the design process. Purpose is often one of them.
Most companies currently start from the default assumption that any use of AI is a great use of AI, and that we'll "figure out the rest later". Or, as Josh puts it, "Just add AI to something, and it will be good".
In reality, this is not the case, which is precisely why designing AI with purpose is so important. Building an AI model without a defined purpose is like building a house without foundations: it will only lead to disaster. Purpose should be one of the primary considerations when thinking about AI, not an afterthought. AI should be a logical solution to a defined problem; adding AI just for the sake of it is pointless.
Radix's CEO, Davio Larnout, sat down with Josh Lovejoy, UX Manager at Google and author of a blog on the topic. Josh is an authority on questions of design process and AI, and worked on design at Amazon and Microsoft before joining Google. Together, they explored their shared beliefs about purposeful AI and how it will change the field as we know it.
What is purpose and what does it mean for AI?
First of all, let's describe what AI with a purpose means. There are three layers to it.
Accuracy is one of the most significant KPIs for AI solutions. But we should think carefully about how we describe this concept: how are we being purposeful about even the definition of success? Accuracy shouldn't be about "92% accurate" or "75% accurate", and purposeful design is not about chasing 100% accuracy. It's about how well matched the training data and the training optimization characteristics are with the application environment and the operators' goals.
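The point that matching the application environment matters more than a headline number can be made concrete with a small, self-contained sketch. Everything here is illustrative: a synthetic task and a one-parameter "model" (a decision threshold), not any particular product. The same model, with the same learned threshold, reports very different accuracy depending on how well the evaluation environment matches the training one.

```python
import random

random.seed(0)

def sample(n, sensor_bias=0.0):
    """Synthetic binary task: the label is whether a hidden signal is
    positive; the model only sees a noisy feature reading of it."""
    data = []
    for _ in range(n):
        signal = random.gauss(0.0, 1.0)
        feature = signal + random.gauss(0.0, 0.5) + sensor_bias
        data.append((feature, int(signal > 0)))
    return data

def accuracy(threshold, data):
    return sum(int(x > threshold) == y for x, y in data) / len(data)

# "Train": pick the decision threshold that scores best on training data.
train = sample(4000)
threshold = max((t / 10 for t in range(-20, 21)),
                key=lambda t: accuracy(t, train))

matched = sample(4000)                   # environment matches training
drifted = sample(4000, sensor_bias=1.5)  # e.g. the deployed sensor is biased

print(f"accuracy in matched environment: {accuracy(threshold, matched):.0%}")
print(f"accuracy in drifted environment: {accuracy(threshold, drifted):.0%}")
```

With the matched test set the model looks strong; with a drifted one, the very same model's accuracy collapses. A single advertised number like "92% accurate" is meaningless without saying which environment it was measured in.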
Unfortunately, bad practices in the AI industry make purposeful AI all the more difficult: companies selling "snake oil", pipe dreams of "99.9% accuracy" that you somehow have to compete with.
This is one of the most fascinating dilemmas our industry faces, because even big companies are sometimes tempted to engage in such practices. This is why we need to design AI with accuracy as one of several important KPIs, not the only one.
The second layer is explainability: our ability to reason about what machine learning systems are doing and to explain how they reach their decisions.
Machine learning solutions are built by engineers and are thus closely tied to the human way of making connections and applying logic. We sometimes think that we can explain our decisions with transparent, intelligible, and interpretable answers, but in reality this is not the case. We assume AI is objective by default, because we assume machine learning systems are somehow untouched by the frailties of human cognition. Yet they are touched by them, because they're modeled after us and trained by us.
If we build machine learning models with the expectation that they're just going to produce some sort of perfectly transparent result, we're in for a rude awakening.
The third layer, accountability, might be the most foundational: who or what is accountable when it comes to an AI project? Is this a good use of machine learning and/or automation? The AI system itself isn't accountable, but humans should be.
These three dimensions are essential to all the AI solutions we build. Purposeful AI needs to address the industry-wide accuracy problem, be explainable, and most importantly, be made with human accountability in mind.
Limitless tech, human constraints
Two myths are very common in industry discussions on purposeful AI.
"AI is all or nothing: either fully automated or not automated at all." In reality, automation is a gradient; you can have very precise, narrow forms of automation that lead to great results.
"With enough time, energy, data, and money, any system can become perfect." This is a false assumption, because of the countless variables we work with every day. Our world, and humans in general, are too variable for anyone to craft a perfect system. That is also part of our charm.
AI at its core should be a collaboration between humans and the AI systems. That is where the value of AI truly lies. We have to be aware that there are a lot of things that AI just can't do, because it is not creative. It is limited by nature.
This is because our motivation as human beings is fundamentally organic. We have bodies that live and breathe and die, and because of that, we rely on other people. We offload memory onto others and even onto objects. We get distracted, we forget things. And that feeds our age-old curiosity. Machines do not possess this motivation, and are thus not compelled by the same things we are.
That is a good thing, and we should strive to keep this separation. Let us, as people, be driven by novelty. Let us get distracted. It is our distraction that allows us to get lost on little adventures.
Our differences from machines also have a more profound, psychological impact. They are what makes us uniquely human; it is from them that our values stem, the reasons why we do the things we do.
Think conversations, not monologues
Our collaboration with AI systems needs to involve a form of turn-taking. This is the most important unlock for machine learning UX. For now, we are still weaning ourselves off of upvote/downvote, like/dislike interactions. Was this a good music recommendation, yes or no? Was this the movie I wanted to watch? We do it because it's easy.
But what lies underneath that simple upvote or downvote? Or that five-star rating? There are so many questions beneath the surface that go unasked, and that is the problem with building something that is not collaborative, that does not take turns.
That is why it is so important that we make it as easy as possible for users to collaborate with the systems we design. We are dealing with a social problem: most fundamental research in AI happens in contexts entirely devoid of end-user needs, focused mostly on concepts such as accuracy, which we touched on earlier.
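To see why one-shot votes fall short, consider a small sketch of the two feedback shapes. This is a hypothetical schema, not any real recommendation API; the field names ("reason", "correction") are purely illustrative. The point is that very different reactions collapse into the same binary signal, but stay distinguishable once the user gets a turn to explain.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BinaryFeedback:
    liked: bool                       # the whole interaction: one bit

@dataclass
class TurnFeedback:
    liked: bool
    reason: Optional[str] = None      # the user's "why", in their own words
    correction: Optional[str] = None  # what they actually wanted instead

# Two very different reactions collapse into the same one-bit signal...
a = BinaryFeedback(liked=False)       # "wrong genre entirely"
b = BinaryFeedback(liked=False)       # "good pick, just watched it already"
print(a == b)                         # True: the system can't tell them apart

# ...but remain distinguishable when the user takes a turn to explain.
a2 = TurnFeedback(liked=False, reason="wrong genre",
                  correction="something lighter")
b2 = TurnFeedback(liked=False, reason="already watched it last week")
print(a2 == b2)                       # False
```

A downvote for "wrong genre" and a downvote for "already seen" call for opposite responses from the system, yet the binary signal erases that distinction entirely.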
AI is a mirror of our own humanity
AI can actually allow us to be more human. Far from the misconceptions of AI taking our jobs or replacing us, AI helps us do what we are good at and what we love. That is how companies and people should embrace it: not as a replacement for anything, but as a superpower.
In the end, AI can and will make us more human, but we need to do it in the right way. With purpose. Not as a gimmicky add-on, but as a thought-out solution that will solve problems that actually need solving.
There is cause for optimism, because so many companies and engineers are building purposeful and carefully crafted systems, bringing us and our industry towards a brighter, more hopeful future.
Interested in finding out how we would envision purposeful AI for your specific project? Get in touch here!