A Visual Assistant
You may have noticed that our home page works a little differently than most. It's a sort of "choose your own adventure" of point-and-click (or swipe-and-tap), so that visitors can hopefully find information relevant to their interests more quickly, while also being presented with related paths they may find valuable but didn't know were options.
Like a conversation. But less talking out loud or typing.
AI assistants delivered through chat interfaces were the "Next Big Thing" around 2016, but they failed to engage audiences. Free-form text took too long to get started, shallow knowledge bases meant that most queries failed to surface anything relevant (even when nuggets of good information were hidden away behind the scenes), and websites usually relegated the whole experience to a tiny "chat" window, which was really only an admission that it wasn't a core interaction.
Then voice AI assistants were the "Next Big Thing" in 2018, and from Alexa to Google, AI assistants finally began to live up to the hype. But companies like Amazon have vast amounts of data and thousands of engineers constantly working on making voice assistants more relevant. What if you were a company that didn't want to build your own Alexa, but still wanted focused, conversational access to your knowledge base, blogs, or other internal content?
“People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.”
― Pedro Domingos
And for that matter, aren't most people pretty visual? Is there a better way, or at least a visual hybrid?
Our answer to that was released in 2018 and is available as a packaged solution for companies of any size. Just as video games started with text adventures and moved on to visual point-and-click adventures, we think there's value in using a "semantic" path through even a small amount of content to help customers discover good information in an exploratory manner.
Using machine-learning-based style transfer, our AI effectively "re-paints" relevantly tagged stock imagery and photography, guided by hand-curated key content, to add visual cues to content in an illustrated manner, all in a single consistent style, without paying thousands of dollars for thousands of pieces of custom art.
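For readers curious what "style transfer" means under the hood: the post doesn't detail our implementation, but a minimal sketch of the core idea in classic neural style transfer is the Gram matrix, which summarizes the texture (the "style") of a set of CNN feature maps independently of their spatial layout. The NumPy example below is purely illustrative, with random arrays standing in for real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (C, H, W) feature map.

    The Gram matrix captures texture ("style") independent of spatial
    layout; matching it between a style image and a generated image is
    the heart of classic neural style transfer.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(generated_feats) - gram_matrix(style_feats)
    return float(np.mean(diff ** 2))

# Toy arrays standing in for CNN activations of two images.
rng = np.random.default_rng(0)
style_feats = rng.standard_normal((8, 16, 16))
generated_feats = rng.standard_normal((8, 16, 16))

print(style_loss(style_feats, style_feats))   # identical style -> 0.0
print(style_loss(generated_feats, style_feats))  # mismatched style -> positive
```

In a full system this loss would be minimized over the generated image (typically alongside a content loss), gradually pushing the output toward the target artistic style.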
The AI then takes a content source (in our case, a small subset of blog posts), splits it up by category into bite-sized pieces, auto-tags it based on semantic, conversational flow, and presents the results in a human-editable database. Then, when visitors come to experience the interactions, the AI does live processing to weave unique narratives based on what it detects from the audience's choices (or, in advanced situations, previous knowledge or system integrations related to that specific individual).
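The split-and-tag step above can be sketched in a few lines. This is not our production pipeline; it's a toy stand-in that splits a post into paragraph-sized chunks and tags each one by keyword matching, where the tag vocabulary (`TAG_KEYWORDS`) and the `Chunk` record are hypothetical names invented for the example:

```python
import re
from dataclasses import dataclass, field

# Hypothetical tag vocabulary; in the real system tags are hand curated
# and the matching is semantic rather than literal keywords.
TAG_KEYWORDS = {
    "ai": ["assistant", "machine learning", "model"],
    "design": ["visual", "interface", "point-and-click"],
}

@dataclass
class Chunk:
    category: str
    text: str
    tags: list = field(default_factory=list)

def split_and_tag(post_text, category):
    """Split a post into bite-sized chunks (here: paragraphs) and
    auto-tag each one by simple keyword matching."""
    chunks = []
    for para in re.split(r"\n\s*\n", post_text.strip()):
        tags = [tag for tag, words in TAG_KEYWORDS.items()
                if any(w in para.lower() for w in words)]
        chunks.append(Chunk(category=category, text=para, tags=tags))
    return chunks

post = """Our assistant uses machine learning to tag content.

Visitors explore it through a visual point-and-click interface."""

for chunk in split_and_tag(post, "blog"):
    print(chunk.category, chunk.tags)
```

The resulting records would then live in an editable database, so a human can correct or refine tags before visitors ever see them.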
Because the content is compiled down into a human-editable CMS, not only do you keep full control, but it's also SEO-friendly!
So, what do you think? We'd love to hear from you. Right now we don't have a lot of content, but we'll add more over time. If you'd like to discuss the possibilities of integrating something similar into your organization or digital properties, please let us know.