Actions on Google

The term "Actions on Google" refers to a service developed by Google with the help of which developers can link self-developed services with the Google Assistant. Once the third-party service has been tested, approved and released for publishing by Google, it can be accessed on Google Assistant compatible devices. Actions on Google is therefore equivalent to Amazon Skills, but only Google Assistant capable devices can be used.

On which devices can an action be retrieved?

On the one hand, published actions can be invoked on the smart speakers developed by Google, the Google Home and the Google Home Mini. On the other hand, they are also available on Google Assistant compatible smartphones and tablets. With the Google Assistant, Google serves both Android and iOS devices: on Android devices the Assistant is fully integrated and can be activated by the user at any time, whereas iOS users must download the separate Google Assistant app.

How can a published action be retrieved?

So that users can call a published action, so-called "invocation phrases" must be stored for it in the Actions on Google Developer Console. These are commands with which the user tells the Assistant to open a particular action. Up to 5 example phrases can be stored per action. A call could, for example, look like this:

  • "Ok Google, talk to BOTfriends"

How can a published action be restricted? 

When publishing an action, its developer has several ways to restrict it. On the one hand, the languages in which the action should be available can be selected during publication (as of 21.06.2019: 19 supported languages). On the other hand, device-specific restrictions can be set up. For this, the following questions must be answered in the Actions on Google Developer Console:

  • Does your action require audio output?
  • Does your action need a screen output?
  • Does your action require media playback?
  • Does your action require a web browser?

Depending on the answers, certain devices are approved or excluded. For example, if screen output is marked as indispensable, the action is automatically excluded from the Google Home and Google Home Mini and is therefore not available to users on these devices. Within the Actions on Google Developer Console, the included and excluded devices are displayed in tabular form as soon as a question has been answered.
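
These publication-time restrictions correspond to surface capabilities that a fulfillment can also inspect at runtime, so a single action can respond differently depending on the device. The sketch below assumes the actions-on-google Node.js client library (v2); the intent name "Show Product" and the response texts are illustrative placeholders.

```typescript
import * as functions from 'firebase-functions';
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// "Show Product" is a hypothetical Dialogflow intent name.
app.intent('Show Product', (conv) => {
  const hasScreen = conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');
  if (hasScreen) {
    // Smartphones and Smart Displays: a visual response (e.g. a card) could be added here.
    conv.ask('Here is the product – take a look at your screen.');
  } else {
    // Audio-only devices such as the Google Home and Google Home Mini.
    conv.ask('I will describe the product to you, since this device has no screen.');
  }
});

export const fulfillment = functions.https.onRequest(app);
```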

How can a Google Action be developed?

Google itself describes two implementation options. The first and simpler variant is a direct integration via Google Dialogflow. After a voice assistant has been designed and built in Dialogflow, it can be published without developer know-how directly via Dialogflow's Integrations tab. This variant is a good fit whenever Dialogflow is already used as the Natural Language Understanding (NLU) service.

The Actions SDK offers developers a second integration option: Google provides client libraries for building Google Actions in several programming languages. This variant primarily makes sense when full control over the interaction between the action and the user is to be kept on the server side. Furthermore, it is unavoidable if the NLU service relies on a technology other than Dialogflow.
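
The following sketch illustrates this second variant, assuming the actions-on-google Node.js client library (v2) for the Actions SDK and deployment as a Cloud Function for Firebase; queryExternalNlu is a hypothetical placeholder for a call to whatever NLU service is used instead of Dialogflow.

```typescript
import * as functions from 'firebase-functions';
import { actionssdk } from 'actions-on-google';

const app = actionssdk();

// Welcome handler for the main invocation.
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Welcome! What would you like to know?');
});

// With the Actions SDK, every subsequent user utterance arrives as raw
// text via actions.intent.TEXT and can be forwarded to any NLU service.
app.intent('actions.intent.TEXT', async (conv, input) => {
  const answer = await queryExternalNlu(String(input));
  conv.ask(answer);
});

// Hypothetical stand-in for a call to an external NLU service
// (e.g. Rasa or LUIS); not part of the actions-on-google library.
async function queryExternalNlu(utterance: string): Promise<string> {
  return `You said: ${utterance}`;
}

export const fulfillment = functions.https.onRequest(app);
```

Because the raw utterance reaches the fulfillment directly, dialogue management and NLU remain entirely under the developer's control on the server side.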
