Technologists argue that in the not-so-distant future, if a task needs to be done efficiently, it will most likely be done by robots (as in AI with bodies), while humans will focus on activities that are typically inefficient – think exploration, innovation, science and art.
In September, in suburban London, Amazon Prime attempted to deliver a parcel.
The homeowner was out on the school run but had a Google Nest Hello video doorbell installed at home. An Apple iPhone X received the doorbell’s live feed, and a two-way chat soon followed. It turned out that the homeowner’s Tesla was parked right outside and, thanks to its permanent cloud connectivity, could be accessed via the Tesla app. Watching the live video stream, the homeowner remotely opened the boot; the delivery driver placed the package inside, the car was locked again via the app, and the delivery was complete.
What is noteworthy about this story is that it involved four distinct digital services – Amazon, Google (Nest), Apple and Tesla – none of which was specifically designed to work with the others.
Yet, in many ways, we are probably in the first hour of AI’s evolution – think of where computing stood before the Internet arrived.
Futurists like Kevin Kelly (founding editor of Wired) speak of a rapid “cognification” of the machines around us, giving them superhuman powers – minus the (human) distractions. But they also predict that the most popular AI product in use 20 years from now hasn’t even been invented yet!
Merriam-Webster defines Artificial Intelligence as: “The capability of a machine to imitate intelligent human behaviour.”
We humans possess a number of cognitive abilities that help us learn new concepts, apply logic and reason, recognise patterns, comprehend ideas, solve problems, make decisions, and use language to communicate. We call this intelligence.
This “intelligence” enables us to think, to be self-aware, to experience life.
And human intelligence is not just linear and one-dimensional.
Howard Gardner in his ‘Theory of Multiple Intelligences’ argued that there was a wide range of different abilities operating in the human mind – ones that did not necessarily correlate with each other.
Gardner proposed that these distinct types of intelligence – including logical-mathematical, linguistic, spatial, musical and interpersonal – are what enable people to become a plumber, farmer, physicist or teacher.
Modern machine capabilities typically classified as “AI” include successfully understanding human speech (as in voice-recognition), competing at the highest level in strategic game systems (such as chess), and intelligent routing (as in content delivery networks or military simulations).
But the scope of AI is disputed: as machines become increasingly capable, tasks once considered to require “intelligence” are often removed from the definition – a phenomenon known as the AI effect. As a result, routine technologies like Optical Character Recognition (OCR) are frequently excluded.
In fact, we tend to think of AI as whatever hasn't been done yet!
The fact is, AI is not just embedded inside Netflix algorithms or voice-controlled ‘smart assistants’ – it’s embedded in our lives.
The decades-old autopilot systems that fly our commercial aeroplanes are just one example of that. The humble calculator is already better than most of us at arithmetic, and the GPS chip in our phones is better at spatial navigation than the average human – both examples of machines exhibiting intelligence.
Clearly, AI is relevant to any task requiring intelligence.
High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), the algorithms powering search engines (such as Google), and spam filtering and targeted advertising.
In medicine, AI is being applied to numerous high-cost problems, with initial findings suggesting savings of as much as $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula, developed with the help of AI, correctly determined the dose of immunosuppressant drugs to give organ transplant patients.
In financial services too, there are several use cases for AI. Banks use AI systems to organise operations, maintain bookkeeping, and invest in stocks. AI-based tools help read documents, process cheque payments and respond to customer requests. AI has also helped reduce fraud and financial crime by monitoring users’ behavioural patterns for anomalies.
Today, AI can even analyse “silence patterns” on customer service calls to infer insights from excessive hold-times about system delays, outdated CRMs, etc.
However, in our quest for providing more bells and whistles, we may sometimes lose sight of what truly matters. We need to connect the dots... across devices, channels and teams. We need to listen to our customers, our distributors, our employees. We need to move from proposition to purpose.
Do customers + AI have to equal chatbots? Or can we use AI-based tools to actually improve outcomes for our customers?
Here are just a few examples where intelligent use of AI can help improve customer Experience, regardless of the underlying business:
Ultimately, our ability to deal with what comes next will depend on our willingness to embrace a co-existence with machines and their intelligence. Only then will they become our partners, not just tools.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)