3 posts tagged with "Dialogflow"

Yuri Santana · 3 min read

Dialogflow sits between the user and our application, helping us turn natural-language conversations into useful, readable data.

Entities take care of extracting information and details from what the user says. They are present from the moment you create your first intent and start writing training phrases: Dialogflow automatically identifies and labels some words, suggesting entities for you to match with an intent.

Having entities in place will help you train your assistant and make it more efficient for your users. Entities can be created manually or imported from a JSON or CSV file.

There are multiple types of entities:

  • System entities:

These are Dialogflow's built-in entities, and they match many common types of values, such as geographic locations or dates.

@sys.date
  • Custom or developer entities:

These allow you to define your own words to trigger an intent; you can also provide synonyms.

They come in handy when building an assistant around specific words you want it to listen for and identify, so you can give your users an accurate response (see the sketch after this list).

Just remember that a custom entity name can start with a letter, number, dash, or underscore.

@computer_service
  • Custom or developer composite entities: These are built from multiple custom entities that are linked so they can be triggered together.

    @os_computer[@os_device @computer_service]
  • Session entities:

They are generated for a single, user-specific session: one conversation between the agent and the user.

These entities expire automatically after 20 minutes.

  • Regexp entities:

These use regular expressions to match more specialized entities in what the user says.

It is important to remember that the order in which you present your regular expressions to the agent matters, because the search stops as soon as a valid match is found.
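
If you prefer to create entities programmatically instead of through the console, the official Node.js client (@google-cloud/dialogflow) exposes an EntityTypesClient. Below is a minimal sketch that creates the @computer_service entity from the example above; the project ID and entity values are placeholders.

    // Minimal sketch: create a custom (developer) entity type with the official
    // Node.js client. Project ID and entries are placeholders.
    import { EntityTypesClient } from "@google-cloud/dialogflow";

    const client = new EntityTypesClient();

    async function createComputerService(projectId: string) {
      const [entityType] = await client.createEntityType({
        parent: `projects/${projectId}/agent`,
        entityType: {
          displayName: "computer_service",
          kind: "KIND_MAP", // map entities pair a reference value with synonyms
          entities: [
            { value: "repair", synonyms: ["repair", "fix", "mend"] },
            { value: "upgrade", synonyms: ["upgrade", "improve"] },
          ],
        },
      });
      console.log(`Created entity type: ${entityType.name}`);
    }

    createComputerService("my-gcp-project").catch(console.error);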

Entity vs Intent

Entities will make your development time quicker and, once identified by the agent, help you provide accurate responses to the interaction at hand. They are how you capture important data from the user. An intent, on the other hand, helps you understand what the user's request really means. It usually contains training phrases that help identify what the end-user expression wants, actions to be performed once the intent is matched, parameters that form the entity and dictate how data is extracted, and responses that will be returned to the end-user.
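
To see how the two concepts work together, you can send a query to the agent and inspect the result: the matched intent tells you what the user wants, while the parameters carry the entity values that were extracted. Here is a minimal sketch with the official Node.js client; the project ID, session ID, and utterance are placeholders.

    // Minimal sketch: detect the intent behind an utterance and read back the
    // extracted entity values (parameters). IDs are placeholders.
    import { SessionsClient } from "@google-cloud/dialogflow";

    const client = new SessionsClient();

    async function detect(projectId: string, text: string) {
      // A session groups one conversation; the ID here is arbitrary.
      const session = client.projectAgentSessionPath(projectId, "demo-session");
      const [response] = await client.detectIntent({
        session,
        queryInput: { text: { text, languageCode: "en-US" } },
      });
      const result = response.queryResult;
      console.log("Matched intent:", result?.intent?.displayName);
      // Entity values arrive as a protobuf Struct keyed by parameter name.
      console.log("Parameters:", JSON.stringify(result?.parameters?.fields));
    }

    detect("my-gcp-project", "Can you repair my laptop?").catch(console.error);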

Join the conversation

Fonoster is developed in the open. Here are some of the channels you can use to reach us:

  • Discord
  • GitHub Discussions
  • Twitter: @fonoster

Yuri Santana · 3 min read

A VUI (Voice User Interface) is a virtual assistant's ability to respond to voice commands using NLU (Natural Language Understanding), NLP (Natural Language Processing), and speech recognition technologies.

Speech is a more intuitive and natural way for humans to communicate with each other, and it also carries important information and context. This is why voice assistants have become more popular in recent years, with uses across home, health, entertainment, business, and many other sectors.

VUI technology is becoming more sophisticated and reliable; it is quick to adopt and leaves users with higher satisfaction levels than conventional chat or text assistants.

But what are the real advantages of voice and speech recognition technology?

  • Users don’t need to be trained on how to use the interface

Finding and understanding how to use new features in a system can be difficult, especially for new users. When you have many menus, dropdowns, or pieces of information to display, users can feel overwhelmed and frustrated, not knowing how to pick what they're looking for.

Voice can help users reach their goal in your product faster: they simply voice a command to the assistant and find what they're looking for immediately, which offers more flexibility than a text- or visual-only interface.

  • Makes your product more accessible to users

Accessibility is essential in this day and age. All of us experience disability at some point, whether temporary or permanent, so making your product accessible is a must.

Many people rely on voice features to navigate the internet entirely, including those who want to limit their keyboard use due to fatigue or cognitive disabilities.

Incorporating voice helps you include a large section of the population that is often overlooked, giving you and your product a competitive advantage over less accessible alternatives.

  • Boosts productivity levels

Voice can provide support for customer service or task management, giving you access to the information you need with a single voice command and taking less time than typing out a query in a text- or visual-only interface. A Stanford study found speech input to be about three times faster than typing.

Voice also saves you from having to handle hardware to achieve your goal, for example taking out your phone to get directions from Google Maps, minimizing the risk of accidents.

  • Users will connect with your brand and product

To users, voice feels more like a human interaction, providing comfort when the VUI actually understands what they are saying and responds accurately to their intent and mood.

Voice also gives your brand a personality: it can be programmed to be humorous, kind, or friendly. These human traits, which the VUI refines over time, will make users feel more connected to the brand.

Speech can be applied in any industry, so its benefits are not limited to the tech community. Voice can significantly improve the user experience and make interaction with a product more efficient. Done correctly, it combines the best of graphical and voice interfaces for the user's benefit, reducing time and fatigue.


Join the conversation

Fonoster is developed in the open. Here are some of the channels you can use to reach us:

  • Discord
  • GitHub Discussions
  • Twitter: @fonoster

Yuri Santana · 2 min read

Connecting Fonoster to Dialogflow is just a few clicks away using the Fonoster Dashboard.

Trunking information you'll need:

  • VoIP provider
  • Number
  • Username
  • Password
  • Host

Set up your provider's information

Sign in to Fonoster and go to the Fonoster Project Dashboard, then select the SIP Network tab and create a new Trunk.

Here you'll need to provide this information from your provider's account:

  • Your provider's name
  • Your username
  • Your secret / password
  • Your provider's hostname or IPv4 address
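
If you'd rather script this step than click through the Dashboard, the Fonoster Node SDK exposes a Providers service. The sketch below is based on the 0.x @fonoster/sdk docs; the field names, return shape, and credential values are assumptions, so check the SDK reference for your version.

    // Hedged sketch: create the trunk (provider) with @fonoster/sdk. The API
    // surface is assumed from the 0.x docs; all credential values are placeholders.
    import Fonoster from "@fonoster/sdk";

    async function createTrunk() {
      const providers = new Fonoster.Providers();
      const provider = await providers.createProvider({
        name: "My VoIP Provider",     // your provider's name
        username: "trunk-username",   // your username
        secret: "trunk-password",     // your secret / password
        host: "sip.provider.example", // provider's hostname or IPv4
      });
      console.log("Provider ref:", provider.ref); // keep this for the number step
    }

    createTrunk().catch(console.error);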

Google Service Account key

Next, you'll need to create a new Secret in the Secrets tab and set it to the Google Service Account JSON key.
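
Before uploading it, you may want a quick local check that the key actually reaches your Dialogflow agent. Here is a minimal sketch with the official Node.js client; the key path and project ID are placeholders.

    // Minimal sketch: authenticate with the downloaded service-account key and
    // send a test query. Key path and project ID are placeholders.
    import { SessionsClient } from "@google-cloud/dialogflow";

    const client = new SessionsClient({ keyFilename: "./service-account.json" });

    async function smokeTest(projectId: string) {
      const session = client.projectAgentSessionPath(projectId, "smoke-test");
      const [response] = await client.detectIntent({
        session,
        queryInput: { text: { text: "hello", languageCode: "en-US" } },
      });
      console.log("Agent replied:", response.queryResult?.fulfillmentText);
    }

    smokeTest("my-dialogflow-project").catch(console.error);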

Create a new Fonoster Application

Now we are ready to create a new Application. Go to the Applications tab and create a new one:

  • Pick a name
  • Select the secret you added in the previous step
  • Pick a voice
  • Type the intent ID from your Dialogflow Agent
  • Type the project ID from your Dialogflow project
  • Hit save

Add a new number to call

Lastly, we need to add a new number we can call and trigger Dialogflow.

Create a new number from the SIP Network tab:

  • Add your number from the provider
  • Add the webhook URL http://voice.fonoster:3000
  • Click save
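
This step can also be scripted. As with the trunk sketch earlier, the following is based on the 0.x @fonoster/sdk docs and the field names are assumptions; the phone number and provider reference are placeholders.

    // Hedged sketch: attach a number to the trunk and point it at the voice app
    // webhook. API surface assumed from the 0.x docs; values are placeholders.
    import Fonoster from "@fonoster/sdk";

    async function addNumber(providerRef: string) {
      const numbers = new Fonoster.Numbers();
      await numbers.createNumber({
        providerRef,                // the trunk created earlier
        e164Number: "+17853178070", // your number from the provider
        ingressInfo: { webhook: "http://voice.fonoster:3000" },
      });
    }

    addNumber("your-provider-ref").catch(console.error);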

And there you have it. You're ready to call that number and interact with the AI.

Need help?

Fonoster is developed in the open. Here are some of the channels you can use to reach us:

  • Discord
  • GitHub Discussions
  • Twitter: @fonoster

We look forward to hearing from you.