"When a character comes to life, it's like meeting a new friend for the first time." - Lynette Mather
Indeed, in stories we first see ourselves reflected in a different light, and few things motivate us like the first time we experience, well, just about anything. Novelty, perhaps above all else, is the spark that sets the mind aflame and creates the conditions for unexpected behavior. Which raises the question: how will humans respond to their first real AI experience? Why will they respond in these ways? How will the novelty response play out in our collective behaviors? What patterns of behavior will emerge from repeat exposure? Answering these questions is essential to shaping the experience for the better before it happens.
As a species, humans crave newness, and we're tuned to it even when inherent motivators like exploration or curiosity aren't our primary drivers. At a minimum, we respond with heightened awareness and speculation, because evolving in dynamic environments conditioned us to pay attention and make predictions based on memories. These skills improved survival and reduced anxiety about what's to come.
However, when we know in advance that something novel is imminent, our conditioning and gifts of imagination often lead to fantastic predictions. We create ambitious, colorful prophecies and fear-drenched absurdities alike. Then, when the moment of realization arrives, the novelty we've been waiting for is more mundane and decidedly different from what we predicted. Keeping this in mind, the first time we meet AI (and cannot distinguish its interactions from those of a remote person communicating through a digital medium), a surreal sci-fi movie plot is unlikely to unfold. Meeting AI probably won't be like having a new invisible best friend. AI probably won't care about us as much as we like to think it will. AI will have a job to do and little time for our eccentricities.
In that case, how will these mostly ordinary, not human, novel interactions lead us to behave?
I think we'll follow some familiar paths of disbelief and then do as we always do: start using AI in ways no one anticipated. In this series of blog posts, we'll examine the situation as a sequence of events.
First up is first contact, the equally scary and exhilarating prospect of communicating with an entity that's decidedly not human and is in some way aware of its separateness from us. It's becoming ever more likely that first contact will occur with an intelligent entity created by us, rather than with the traditional aliens the term was coined to describe. But will the experience be any less alien for us?
On a consumer level, we've already gotten glimpses of what's to come with the recent flood of interactive voice bots and home appliances, but these are all immature and decidedly unimpressive. They can only hint at what the real first exposure will be like. We can safely assume that AI in its various forms will imitate human behavior, be intelligent enough to function in its intended capacity, and learn independent of programming (within limits). What we might not assume, but need to be aware of, is that AI will almost certainly inherit our biases (Katharine Schwab, "Proof That Algorithms Pick Up Our Biases, In A Single Map," Fast Company, April 2017).
Given these assumed parameters, what can we expect from our side of the equation? I hypothesize that four especially tricky attitude/behavior patterns will emerge when we meet AI:
Fueled by skepticism, we’ll try to fool, misdirect and break the AI.
We've already seen hints of this behavior, and the anticipation of it, in the consumer products of today. When real AI arrives on the scene, this behavior will intensify and expand. Extraordinary claims like the arrival of AI will indeed require extraordinary proof. This will play out in a dance of interactive trials where we do our best to prove that we're superior and can always one-up any computer program. To counteract this behavioral tendency, AI creators will have to expand testing even further than the most rigorous user tests of today.
We’ll be frustrated by limitations, because of expectations, and be overly critical.
The first true AIs we interact with will not be the AI of science fiction, and they will, of course, have their limitations. We'll become frustrated with these limitations because of the colorful and outlandish predictions we've been making as a society about the nature of coming AI. Expectations are a terrible thing to carry into a novel situation. Today's bots and voice-controlled systems have a ubiquitous out when they've reached their limits: "I didn't understand..." or "I can't find an answer to your question." Reusing the same response they give when they truly don't understand is a clever trick, but one that stands up poorly to long interactions. AI creators should look to, but also beyond, typical human responses to smooth over this friction point. Humor is a tried-and-true human device for dealing with non-comprehension and is likely to be the go-to for AI creators facing this problem. It'll probably work well, too, except it can't work all the time. What about healthcare and the enterprise? We'll need a new answer for those humor-inappropriate situations.
We’ll embellish the good parts of our experience motivated by social competition.
The desire for exclusivity and status are two motivations that drive much of the digital behavior we see online today. These dynamics often lead to hyperbole and embellishment, so that those who've yet to have the exclusive experience are more impressed when it is recounted. The chance to interact with AI will create new opportunities for us to frame our experiences so we're shown in the best light in the retelling. This sets the stage for secondary perceptions that are particularly hard to account for. What new and untrue things will those who interact with AI second be expecting? This feedback loop is of critical importance to AI creators because false expectations are already in play, and first-contact embellishment will only make them worse. AI creators should give extra attention to the exit perceptions of users so that last impressions can be partially tailored in advance.
Motivated by exploration and curiosity, we’ll dream and adapt.
Of all the behaviors likely to occur when we first meet AI, I hope this one emerges as the 800-pound gorilla. Innovating with existing tools, and creating new tools to overcome the limitations of existing ones, is the core of being a creative species. Now that we'll soon have the chance to create by molding something that can do many of the same things we do, we are truly at a crossroads. Old limitations are falling away, and the future is limited only by our imagination and AI's. Indubitably, we will soon see in a whole new way when we can see through the eyes of a second intelligence. AI creators should embrace this and advocate for it. Too often the creators of new technology see adaptation as an injury, and this attitude can only hurt innovation.
Which brings us back to the beginning: will the first time we meet a real artificial intelligence be like meeting a new friend? Or will our inescapable human tendencies turn the meeting into something more like a hazing ritual? We'll see. But one thing is sure: meeting AI will be different than we imagine, and likely better than we can imagine.