
Follow-up intents

Follow-up intents are intents that are linked to a parent intent and thus form a conversation flow. The output context of the parent intent is automatically created as an input context of the follow-up intent. The term comes from the NLP service Dialogflow, which offers this function so that conversation flows can be built very easily without having to create and manage contexts manually.
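
Dialogflow automates this context handling, but the same linkage can also be reproduced manually. The following is a minimal sketch, assuming the google-cloud-dialogflow Python client (v2 API); the project ID, display names and context name are placeholders chosen for illustration:

    from google.cloud import dialogflow

    PROJECT_ID = "my-gcp-project"  # placeholder
    CONTEXT = f"projects/{PROJECT_ID}/agent/sessions/-/contexts/order-pizza-followup"

    intents_client = dialogflow.IntentsClient()
    agent_path = dialogflow.AgentsClient.agent_path(PROJECT_ID)

    # Parent intent: sets the output context that the follow-up will require.
    parent = dialogflow.Intent(
        display_name="order.pizza",
        training_phrases=[dialogflow.Intent.TrainingPhrase(
            parts=[dialogflow.Intent.TrainingPhrase.Part(text="I want to order a pizza")])],
        messages=[dialogflow.Intent.Message(
            text=dialogflow.Intent.Message.Text(text=["Would you like an order confirmation?"]))],
        output_contexts=[dialogflow.Context(name=CONTEXT, lifespan_count=2)],
    )

    # Follow-up intent: only matches while the parent's output context is active.
    followup = dialogflow.Intent(
        display_name="order.pizza - yes",
        training_phrases=[dialogflow.Intent.TrainingPhrase(
            parts=[dialogflow.Intent.TrainingPhrase.Part(text="Yes, please")])],
        messages=[dialogflow.Intent.Message(
            text=dialogflow.Intent.Message.Text(text=["Great, your confirmation is on its way."]))],
        input_context_names=[CONTEXT],
    )

    for intent in (parent, followup):
        intents_client.create_intent(request={"parent": agent_path, "intent": intent})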

In Dialogflow you have the possibility to choose between Predefined and Custom Follow-up Intents.

Predefined Follow-Up Intents

Dialogflow provides these intents, which already contain training phrases when they are created. This has the advantage that you no longer have to enter utterances yourself and can therefore build up conversations more quickly. For example, if a question can be answered with yes or no, this is created as a predefined follow-up intent, and training phrases such as "Yes", "Sure", "Definitely", "Yes, please" or "No", "No, thank you" are already there. In addition to common intents such as "Yes" or "No", there are also intents for "Cancel", "More" or "Later". There is also a fallback follow-up intent, which is used to intercept answers that do not fit the question and to return to the actual question. An example:

Chatbot: "Would you like an order confirmation for your pizza?" (Chatbot expects "Yes" or "No" as answer from the user)

User: "I would like to order another pizza?"

Here the chatbot would jump to the Fallback Follow-Up Intent as this does not correspond to a Yes/No answer. So the chatbot could ask again for an order confirmation to return to the topic.

Custom Follow-Up Intents

With this type of intent, the training phrases must be entered manually. The contexts are still created automatically.

Example in Dialogflow:

Follow-up Intents in Dialogflow


Insult Rate

The insult rate is a KPI (key performance indicator) in chatbot analytics that indicates how often the chatbot is insulted by users. For example, if the chatbot was insulted in two out of a total of 20 conversations, this results in an insult rate of 10%. Chatbots often still reach their limits when they have no answer to a user's question or provide incorrect information. This usually frustrates users and provokes insults. Sometimes insults also occur without any such trigger, because the inhibition threshold for insults on the Internet is low due to anonymity, especially with chatbots, where there is no human interlocutor. Accordingly, chatbots become victims of so-called trolls, familiar from social networks, where they use comment functions to post their opinions in a deliberately provocative manner.
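
As a minimal illustration of how this KPI is calculated (the function and its inputs are made up for this sketch):

    def insult_rate(insulting_conversations: int, total_conversations: int) -> float:
        """Share of conversations in which the chatbot was insulted, in percent."""
        if total_conversations == 0:
            return 0.0
        return 100 * insulting_conversations / total_conversations

    print(insult_rate(2, 20))  # 10.0 -> the 10% from the example above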

In general, the insult rate is an important metric that can provide information about user acceptance or the efficiency of the chatbot. However, when looking at the insult rate, one should also keep in mind which target group is involved and what effect the specific use case can have.




Agents in Dialogflow

"Agents" are best described as Natural Language Understanding (NLU) modules. These modules can be integrated into an app, website, product or service and transfer text or spoken user queries into meaningful data. This transformation occurs when a user's utterance matches anintent in the agent.

The matching intent then delivers an answer to the user.

This answer can be a simple text response, a spoken confirmation or a webhook response. Type-related parameters (e.g. temperature, date, locations) can be recognised in the query and used to create the answer. "Actions" are used to fill the parameters with actual values. User requests can also be forwarded to external services to fill the desired parameters with values.
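
The following is a minimal sketch of such a query, assuming the google-cloud-dialogflow Python client; the project ID, session ID and query text are placeholders:

    from google.cloud import dialogflow

    PROJECT_ID = "my-gcp-project"  # placeholder
    SESSION_ID = "user-123"        # one session per user/conversation

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, SESSION_ID)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text="Order a pizza for tomorrow", language_code="en")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )

    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    print("Parameters:", result.parameters)      # e.g. the recognised date
    print("Answer:", result.fulfillment_text)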

An agent is the central entry point of a Dialogflow application. All intents with the corresponding utterances, responses and actions are maintained via the agent.

Creating an Agent

An agent must be created on the Dialogflow start page; creation via a REST API interface is not provided.

 

Example of how to create agents in Dialogflow

 




Quick Replies / Chips

Quick Replies are a form of interaction on messaging platforms and appear as buttons below a message. Unlike cards, Quick Replies disappear after they have been pressed. Quick Replies allow users to click through the chatbot's content instead of typing their request into the input field. If, for example, the chatbot offers a button labelled "Help", the result is no different than if the user had typed in "Help" and sent it. Communication is thus accelerated and users get an overview of the topics the chatbot offers.

Depending on the output channel or messaging platform, different names are used for Quick Replies.

 

Messaging platform         Designation
Facebook Messenger         Quick Replies
Google Assistant           Suggestion Chips
Microsoft Bot Framework    Suggested Actions
Slack                      Interactive Buttons
Skype                      HeroCard
Telegram                   Keyboard Buttons
Viber                      Keyboards

 

Here is an example picture for Quick Replies in Facebook Messenger: 

Quick Replies
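
For illustration, this is roughly what a quick reply message looks like in the Facebook Messenger Send API, sketched here as a Python dict; the recipient ID and payload strings are placeholders:

    # Sketch of a Facebook Messenger Send API payload with quick replies.
    # The recipient ID and payload strings are placeholders.
    message = {
        "recipient": {"id": "<PSID>"},
        "messaging_type": "RESPONSE",
        "message": {
            "text": "What can I help you with?",
            "quick_replies": [
                {"content_type": "text", "title": "Help", "payload": "HELP"},
                {"content_type": "text", "title": "Order pizza", "payload": "ORDER_PIZZA"},
            ],
        },
    }
    # Tapping "Help" sends the title back to the bot exactly as if the user
    # had typed "Help", and the quick reply buttons disappear afterwards.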

Application in Dialogflow

If you create or edit intents in Dialogflow, you will find several options for replying as a chatbot under the heading Response. Among other things, you have the option of selecting the Quick Reply format and naming the buttons in the form of a list.

A Quick Reply response is displayed in the messaging platform as a predefined user response, which is sent back to Dialogflow when it is clicked.

 




Granularity of Intents

The granularity of intents represents the level of detail and content of a chatbot's intents. This means that the more granular a chatbot is built, the more individually it can respond to certain requests. This is illustrated in the following section with an example:

  • Fine granular intent:

Question: "Does the Bundestag offer group tours?"

Answer: "Group tours of the Bundestag can also be booked. You can find more information and bookings here: www.bundestag.de/gruppenführungen"

  • Coarse granular intent:

Question: "Is the group tour in the Bundestag barrier-free?"

Answer: "All information about group tours can be found here: www.bundestag.de/gruppenführungen"

How granular should intents be?

The granularity depends very much on the use case and its complexity. It often makes sense not to plan the content of a chatbot with too much granularity at the beginning. During development, and especially during the test phase, you should pay attention to how users interact with the chatbot in order to decide how deeply they want to go into detail. Of course, it is possible to adjust the granularity during and after the introduction of the chatbot and to create new intents to ensure the desired level of detail.

Which problems can occur with too granular intents?

The more detailed the intents, the more difficult it is to maintain the content of a chatbot. It helps, of course, to name the created intents in a meaningful way so that several developers can work on one chatbot. However, another challenge arises when the intents are set up too granularly. When requests are very similar and very close to each other, it is difficult for the NLP service to identify the correct intent behind a user's request. The NLP service compares the request with the question variants (utterances) stored for each intent and decides how sure it is that the request belongs to a particular intent. If these utterances are too similar, the confidence score drops and the machine is no longer sure of the intention behind the query.
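
One common way of dealing with this is to check the confidence score before answering. A minimal sketch, assuming the QueryResult of a Dialogflow detect_intent response and a purely illustrative threshold:

    def choose_reply(query_result, threshold: float = 0.6) -> str:
        """Fall back to a clarifying question when the match is not confident.

        query_result is the QueryResult of a Dialogflow detect_intent response;
        the 0.6 threshold is purely illustrative and should be tuned per bot.
        """
        if query_result.intent_detection_confidence < threshold:
            return "I'm not sure I understood that. Could you rephrase it?"
        return query_result.fulfillment_text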

Overall, it can be said that a middle ground should be chosen, which emerges through testing with real users.

 




Chatbot Training

Chatbot training refers to improving language comprehension and optimising the recognition of the intention behind a user input. On the one hand, it is important to enrich existing user intentions with further question variants (utterances) and thus ensure that the recognition rate (confidence score) increases and the intentions are recognised even better. On the other hand, it is important to check the questions received during operation and to find out whether requests come in for which no intent is stored. In this case, the chatbot plays the fallback message and cannot answer the question. To improve this, the chatbot should be trained by creating intents for the requests for which there is no content yet.
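
Enriching an intent with additional utterances can be done in the Dialogflow console or programmatically. A rough sketch, assuming the google-cloud-dialogflow Python client; the intent ID and the new phrase are placeholders:

    from google.cloud import dialogflow

    intents_client = dialogflow.IntentsClient()

    # Full resource name of the intent to enrich (placeholder ID).
    intent_name = "projects/my-gcp-project/agent/intents/<INTENT_ID>"
    intent = intents_client.get_intent(
        request={"name": intent_name, "intent_view": dialogflow.IntentView.INTENT_VIEW_FULL}
    )

    # Add a phrase that real users sent but that was not recognised well.
    new_phrase = dialogflow.Intent.TrainingPhrase(
        parts=[dialogflow.Intent.TrainingPhrase.Part(text="Can I still change my order?")]
    )
    intent.training_phrases.append(new_phrase)

    intents_client.update_intent(request={"intent": intent})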

When do you train the chatbot? 

Chatbot training should already take place during the development phase to ensure from the beginning that the appropriate content is played out in response to queries. Nevertheless, training is most important during chatbot operation, because this is when "real" users interact with the chatbot, and weaknesses or content gaps usually only become apparent during live operation. During training, it is very easy to see what content users expect from the chatbot, and if users expect different or additional content, it is important to intervene quickly and train the chatbot further.

How can I train the chatbot? 

In most cases, this training takes place directly in the NLP service, such as Dialogflow. There you will find user interactions in a dedicated training section, in which you can see how the NLP engine matches user requests to intents. There are three possible scenarios:

  1. The NLP service matches the user request to the correct intent.
  2. The NLP service assigns the user request to a wrong, existing intent.
  3. The NLP service cannot find an intent for the request and plays the fallback intent.

Example picture for the training section in Dialogflow:

Training section in Dialogflow

 




Edge Case

Edge cases are outcomes of a conversation that are not expected, that occur rarely and therefore represent exceptions in the conversation. These edge cases are created, among other things, within the conversational map or the conversation flows. The counterpart to edge cases is the happy path, which describes the expected outcomes of a conversation that users take most frequently.

Example of the Happy Path for the Use Case "Order Pizza"

This example shows what a "happy" conversation between user and chatbot can look like when ordering a pizza. All information that the user provides to the chatbot can be processed and no misunderstandings arise.

Example for the Happy Path

Example of an Edge Case for the Use Case "Order Pizza"

This example shows that there can also be great potential for error if the user sends answers that the chatbot cannot process. The graphic below shows that the user enters an address that is outside of Germany. In this case, no delivery can take place. Such edge cases should be considered in advance and mapped in the conversational map. You should ask yourself how to deal with the user in such situations. For example, you can inform the user that you only deliver within Germany or you can give them the option to enter the address again in case of a misunderstanding.

Example of Edge Cases in Conversations
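
A minimal sketch of how such an edge case could be intercepted in the bot logic; the country check and the wording are simplified assumptions for this example:

    def handle_delivery_address(country_code: str) -> str:
        """Very simplified edge-case handling for the pizza-ordering example."""
        if country_code != "DE":
            # Edge case: delivery only within Germany, so explain this and
            # offer to re-enter the address instead of silently failing.
            return ("Sorry, we currently only deliver within Germany. "
                    "Would you like to enter a different address?")
        # Happy path: continue with the order.
        return "Great, your pizza is on its way to that address."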




Rasa Core

Rasa Core, together with Rasa NLU, forms the Rasa Stack. It is responsible for having a conversation with the user that is as natural as possible. [1]

Tasks

Rasa Core is responsible for the conversation flow, context handling, bot responses and session management. It can be built on top of Rasa NLU or other services that handle intent recognition and entity extraction and make the results available to Rasa Core. [1]

Structure

Rasa Core keeps a tracker for each session, i.e. for each user, which contains the current state of that user's conversation. If the bot receives a message, it first runs through the interpreter, which receives the original text as input and returns the input together with the intent and the extracted entities. Together with the current state of the tracker, the policy component then decides which action (bot response) should be executed next. This decision is not made by simple rules but, just like the recognition of intents or entities, on the basis of a model trained with machine learning.

These processes can be influenced at several points. First of all, there is the configuration of the interpreter, or the Rasa NLU. This should reliably recognise the correct intents and extract all required entities. The policy component can also be configured specifically for use cases in a designated file (policy.yml).

You can choose between several policies, each of which can be adjusted even more precisely. The actions are the bot responses. These can be simple text responses, quick replies, images or action webhooks. The latter send a POST request to a previously defined endpoint, from which the responses are then sent. In this way, API calls or database access can be realised, for example. [2] [3]
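
As an illustration of such an action webhook, here is a minimal custom action sketch using the rasa_sdk Python package; the action name, slot and message are assumptions made for this example:

    # Sketch of a custom action for the Rasa action server (rasa_sdk).
    from typing import Any, Dict, List, Text

    from rasa_sdk import Action, Tracker
    from rasa_sdk.executor import CollectingDispatcher

    class ActionCheckOrderStatus(Action):
        def name(self) -> Text:
            return "action_check_order_status"

        def run(self,
                dispatcher: CollectingDispatcher,
                tracker: Tracker,
                domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
            # Here an API call or database lookup could happen, e.g. using an
            # order number that NLU extracted as an entity and stored in a slot.
            order_id = tracker.get_slot("order_id")
            dispatcher.utter_message(text=f"Order {order_id} is being prepared.")
            return []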


Sources

[1] https://rasa.com/docs/
[2] https://rasa.com/docs/core/policies/
[3] https://rasa.com/docs/core/architecture/



System Entities

Entities are used to extract user information from natural language.

A distinction is usually made between system entities and custom entities. System entities are entities that are already built into the system, for example for addresses, times and numbers. For a complete list of Dialogflow system entities, see System Entities. The following are some examples of the different types of system entities.

System Mapping

These system entities have reference values. For example, @sys.date matches common date references such as "January 1, 2015" or "First day of January 2015" and returns a reference value in ISO-8601 format: 2015-01-01T12:00:00-03:00
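
This reference value can be consumed directly, for example with Python's standard library (Python 3.7+):

    from datetime import datetime

    # The ISO-8601 reference value returned for @sys.date can be parsed directly.
    value = "2015-01-01T12:00:00-03:00"
    date = datetime.fromisoformat(value)
    print(date.date())  # 2015-01-01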

System Enum 

These entities do not have a reference value. For example, @sys.color matches the most common colours and returns the matched colour as it is, without assigning it to a reference value. For example, reds like "scarlet" or "crimson" are not mapped to "red" but return their original values "scarlet" and "crimson".

System Composite 

These entities can contain other entities with aliases and return object-type values. For example, @sys.unit-currency is used to match monetary amounts with a currency name, such as "50 euros" or "20 dollars and 5 cents". An object-type value consisting of two attribute-value pairs is returned: {"amount":50,"currency":"EUR"}




Sentiment Analysis

Sentiment analysis, also called "sentiment detection", is a subfield of text mining and refers to the automatic evaluation of texts with the aim of identifying an expressed attitude as positive or negative. [1]

How it works

The task of sentiment detection is approached with statistical methods. In addition, the grammar of the utterances under investigation can be taken into account. For the statistical analysis, a basic set of terms (or n-grams) is assumed with which positive or negative tendencies are associated. The frequencies of positive and negative terms in the analysed text are compared and determine the presumed attitude. Building on this, machine learning algorithms can be applied: on the basis of pre-processed texts for which the attitudes are known, such algorithms can also learn which tendency further terms are to be assigned to. With the help of natural language processing techniques, knowledge about natural language can be incorporated into the decision; for example, if the grammar of texts is analysed, machine-learned patterns can be applied to the structure. [1]
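
A minimal sketch of the frequency-based approach described above; the word lists are tiny stand-ins for a real sentiment lexicon:

    # Tiny lexicon-based sentiment sketch; the word lists are stand-ins
    # for a real sentiment lexicon.
    POSITIVE = {"good", "great", "helpful", "thanks"}
    NEGATIVE = {"bad", "useless", "wrong", "stupid"}

    def sentiment(text: str) -> str:
        words = [w.strip(".,!?") for w in text.lower().split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("The bot was really helpful, thanks!"))  # positive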

Providers of Sentiment Analysis

Many cloud providers offer sentiment analysis. The services are listed in the following table:

 

Provider                 Sentiment analysis service
Google Cloud Platform    Natural Language API
IBM                      Natural Language Understanding
Microsoft                Text Analytics
Amazon                   Amazon Comprehend

Many NLP services, such as Google Dialogflow, also include sentiment analysis. [2]


Sources

[1] https://de.wikipedia.org/wiki/Sentiment_Detection
[2] https://cloud.google.com/dialogflow-enterprise/docs/sentiment?hl=de