
Time to Revisit Ethics in the Age of AI

"To err is human but to really foul things up requires a computer." - Paul R. Ehrlich

As a competitor in the IBM Watson AI XPRIZE, we hold everything we do to an extra level of rigor on the science side of AI. Addressing Grand Challenges inevitably means handling a lot of double-edged swords, so integrity and ethics are our only shields against potential harm.

Today, after a long period of false starts in capturing public attention, AI is rekindling the public debate around ethics in software development. Hot topics range from job displacement to the recent arguments among tech-titan CEOs over whether AI will unlock doomsday... or not. For this post I’d like to focus on a quieter trend with a big impact here and now: research involving human participants.

“Digital Software Strategist” is a hot job title, and ever since the iPhone hit the scene, user experience has been, justifiably, king. The marketing department wants to use “gamification” to create brand addicts, while management wants to use software to “influence behavior” so that employees become more productive, loyal, and adherent to policy. With data science and now AI, all of this is amplified by unprecedented personalization. What was once aggregate, anonymous data built around archetypes increasingly becomes a personal, individualized experience.

Well-intentioned companies talk about what could be done through automated processing of employee emails and debate how to draw more personal data from increasingly interactive digital customer experiences. And why not? A lot of good could be done. After all, companies often use software to enable and motivate employees to, say, protect consumer rights, bringing the ethics they practice more in line with the ethics they preach. Frequently, a more engaging brand relationship is a win-win for customers and the companies providing them valuable services.

Sometimes, however, that double-edged sword bites us in the digital posterior.

So how do we know what will engage customers or change behavior? We run “qualitative” (exploratory) studies that later translate into “quantitative” (hard numbers) results. Qualitative studies usually involve people participating in interviews, focus groups, software testing, and the like. Practitioners apply ethnographic research to capture personas, characteristics, backgrounds, ages, and any number of other personal attributes.

Generally, the intention is completely above board, of course. I've participated in research countless times over the years that didn't appear to run into any major issues. So when we signed up to have our XPRIZE project's research reviewed by an Institutional Review Board (IRB) - an almost completely foreign extra step for enterprises working outside medical or academic circles - I assumed the process would be more or less a formality, a box to check to show we were being extra diligent.

What I experienced instead was eye opening.

The fact of the matter is that even someone with the best intentions has plenty of blind spots about how things could go wrong. Our IRB, IntegReview, did an amazing job of working with us for weeks to craft an overarching research plan with the meticulous detail, objectivity, and care necessary to transform what could have been a mediocre qualitative study into well-vetted research gold. I can say without hesitation that our research plan improved not only from an ethics perspective; by taking the time to think hard about the intent of each piece of research and the ways we could build in participant protections, the entire approach improved dramatically.

AI is quickly becoming ubiquitous in organizations and even our personal lives, but the cutting edge is still a vast and unimaginably multifaceted frontier. We believe that AI will only reach its true potential when it empowers people – but there’s a lot of science and research to be done to get us there. If we want to keep innovating quickly, it is imperative that we also take the time to diligently incorporate strong ethics into our work. The warning signs are out there and growing rapidly; if we don’t do this for ourselves, we risk damaging the very progress we are trying to achieve - to say nothing of the people we involve.

I have many more thoughts to share on this topic in the future. In the meantime, send us an email or leave a comment and tell us what you think of the state of ethics in software and AI today. Have a story you’d like to share? Let’s keep the dialogue going for a better future!