Voice assistants like Alexa or Google Assistant are becoming increasingly popular. For companies, they represent a new marketing and communication channel, one that is receiving growing attention under the term Voice Marketing. Marketing expert Sinan Arslan and software developer Ziad El-Jayyousi of the digital agency Neofonie show why "Voice" belongs in the marketing mix of the future and explain the development process in five steps, using their own project as an example.
Voice as an interface to the customer
In the spring, Amazon announced that it had sold more than 100 million Amazon Echo devices worldwide. Voice assistants are also spreading in Germany: according to "Trendmonitor Germany", around 15 percent of all Germans with Internet access use an Amazon Echo, Google Home or HomePod. Amazon Echo with its assistant Alexa comes first, followed by Google Home with the Google Assistant. The first car manufacturers are even shipping Alexa from the factory, such as Audi in the e-tron; further models will follow this year.
From blogcast to Alexa skill
What "apps" are on the smartphone are called "Actions" on the Google Assistant and "Skills" on Alexa. The possibilities range from simple informational skills to more complex ones such as shopping skills. Companies can use skills to offer their (potential) customers additional service - easy ordering, relevant information or a direct line to support are just a few examples.
Voice assistants also offer excellent distribution channels for content marketing. In Neofonie's specific case, it is about distributing blogcasts, the audio versions of blog posts that until now could "only" be heard on Spotify, iTunes and SoundCloud. But how does the audio content get to Alexa?
Step 1: Define the degree of individualization
At the beginning, it must be decided whether the skill will be developed individually or on the basis of a "template". The Alexa Skills Kit is a collection of tools, documentation and code samples that helps developers get started. With the Skill Blueprints, there are also extensive templates with which Alexa can be taught new skills even without programming knowledge.
For our "Neofonie Blogcast" skill, we used a mix of templates from the Amazon Web Services (AWS) portal and an individually designed solution. This shortened the time to go live and lets us implement future changes flexibly according to our own wishes.
Step 2: Develop the skill
If a company decides to work with a blueprint, this step can sometimes be completed in just a few minutes. Depending on the complexity of the application, however, it can also take several days or even weeks - we needed about three to four days to complete our blogcast skill.
To create a skill, developers first need a free Amazon Developer Account. The development environment, the so-called "Developer Console", has a dedicated Alexa area. Developers initially choose between different models: in a "one-shot model", Alexa is activated and performs the defined function in a single step - no further interaction takes place.
Such skills are useful for querying individual pieces of information, such as the weather report or the time. Somewhat more complex are dialogue models, in which users and skill interact with each other. Here, specific "intents" must be defined, i.e. all events that can occur in the conversation - statements and questions from users - and how the assistant should react to them.
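In the Developer Console, such a dialogue model is stored as a JSON "interaction model". The following sketch mirrors that JSON as a Python dictionary - the "PlayEpisodeIntent" name, its slot and the sample utterances are illustrative assumptions, not Neofonie's actual configuration, while the AMAZON.* entries are Amazon's standard built-in intents:

```python
import json

# Sketch of an interaction model as the Developer Console stores it.
# "PlayEpisodeIntent" and its sample utterances are hypothetical;
# the AMAZON.* intents are Amazon's built-ins.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "neofonie blogcast",
            "intents": [
                {"name": "AMAZON.HelpIntent", "samples": []},
                {"name": "AMAZON.StopIntent", "samples": []},
                {"name": "AMAZON.CancelIntent", "samples": []},
                {
                    "name": "PlayEpisodeIntent",
                    "slots": [{"name": "episode", "type": "AMAZON.NUMBER"}],
                    "samples": [
                        "play episode {episode}",
                        "start episode {episode}",
                    ],
                },
            ],
        }
    }
}

print(json.dumps(interaction_model, indent=2))
```

Each user utterance Alexa recognizes is matched against these samples; the slot value (here the episode number) is then handed to the skill's backend.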
Some intents are mandatory, for example for calling up help or quitting. Our blogcast skill is also such a dialogue model and uses the audio player interface so that Alexa can play the MP3 files. We have therefore included intents that allow users to switch between different blogcast episodes, for example. Once the dialogue model has been defined, the skill must be hosted. Here, too, Amazon offers an in-house solution with AWS Lambda.
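A backend for such a skill can be a single AWS Lambda function that receives Alexa's request JSON and returns response JSON. The following is a minimal sketch written against the raw JSON format rather than the Alexa Skills Kit SDK; the intent name, episode list and MP3 URL are illustrative placeholders, not Neofonie's real ones:

```python
# Hypothetical episode catalogue: episode number -> stream URL.
EPISODES = {
    1: "https://example.com/blogcast/episode-1.mp3",
}

def build_speech(text):
    # Plain spoken answer; keep the session open for a dialogue.
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": False}}

def build_play(url, token):
    # AudioPlayer.Play directive from the audio player interface,
    # which streams the MP3 and ends the session.
    return {"version": "1.0",
            "response": {
                "directives": [{
                    "type": "AudioPlayer.Play",
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {"stream": {"url": url,
                                             "token": token,
                                             "offsetInMilliseconds": 0}}}],
                "shouldEndSession": True}}

def lambda_handler(event, context=None):
    req = event["request"]
    if req["type"] == "LaunchRequest":
        return build_speech("Welcome to the blogcast. Which episode would you like?")
    if req["type"] == "IntentRequest":
        name = req["intent"]["name"]
        if name == "PlayEpisodeIntent":
            number = int(req["intent"]["slots"]["episode"]["value"])
            return build_play(EPISODES[number], token=f"episode-{number}")
        if name == "AMAZON.HelpIntent":
            return build_speech("Say: play episode one.")
    # Stop, Cancel and everything else: simply end the session.
    return {"version": "1.0", "response": {"shouldEndSession": True}}
```

The launch request answers with speech and waits for the user's choice; the episode intent responds with an AudioPlayer.Play directive instead of speech, which is what makes Alexa stream the file.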
Step 3: Check the skill thoroughly
In this phase, the skill is tested extensively. Different test environments and usage patterns can lead to different results. For this reason, a skill should be checked by neutral users who were not involved in the development.
The "Developer Console" offers a test area for this, in which up to twenty users from the developer community can put the skill through its paces and identify sources of error that went unnoticed by the development team.
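Before handing the skill over to beta testers, the development team can already replay captured request JSON against the handler locally. A minimal sketch of such a smoke test - the stub handler and the "Welcome" text stand in for whatever entry point the real skill uses:

```python
def lambda_handler(event, context=None):
    # Stub standing in for the real skill backend.
    if event["request"]["type"] == "LaunchRequest":
        return {"response": {"outputSpeech": {"type": "PlainText",
                                              "text": "Welcome"},
                             "shouldEndSession": False}}
    return {"response": {"shouldEndSession": True}}

# Shape of a real LaunchRequest event, trimmed to the essentials.
launch_event = {
    "version": "1.0",
    "session": {"new": True},
    "request": {"type": "LaunchRequest", "requestId": "req-1"},
}

result = lambda_handler(launch_event)
assert result["response"]["outputSpeech"]["text"] == "Welcome"
print("local smoke test passed")
```

Such replayed requests catch the crude errors cheaply; the community testers then concentrate on the usage patterns the team did not anticipate.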
Step 4: Complete the information
Once the skill has proven its functionality, only a few formal details remain to be filled in: a brief description and a detailed explanation, plus information about the developer and publisher. If the skill contains additional functions such as in-app purchases or monetized advertising, further information is necessary, for example about a target account.
The application must also be assigned to a specific category to increase its discoverability. Shortly before submission is also the time for the company to set the voice command that activates the skill. We chose "Alexa, launch Neofonie Blogcast". The command should be catchy and easy to remember - then users have it ready quickly, which increases the probability that the skill will actually be used.
Step 5: Release for publication
After submission via the AWS portal, Amazon automatically performs a short diagnosis, which usually takes no longer than a quarter of an hour. The company checks whether all necessary information has been provided and runs an automated quick check of the software.
Conclusion: Gain your first experience
Voice assistants are becoming increasingly popular and widely used. Microsoft CEO Satya Nadella's 2014 prediction has long since come true: "Human language is the new interface". Today, businesses should be thinking about how and to what extent they want to use voice marketing and integrate it into their marketing mix.
Voice as an interface to the customer has certainly not yet reached the mass market, but it is on the right track. Reason enough to start the first test projects with digital voice assistants and gain initial experience - ideally before the competition does.