I don’t know about you, but my experience with many a business begins with a phone call. Yes, I’m old-fashioned like that. I will glance through the website, but if I don’t find what I’m after in under 30 seconds, I will call and ask my questions.
The best experience I get on the phone is a pickup on the second ring and a well-informed receptionist who can answer all my questions and (sometimes) schedule an appointment at a perfect (hopefully) time to get my beard trimmed. Okay, okay, you got me. Since COVID, these appointments have been few and far between. Yet this is why a great experience (starting with the phone and ending with the coffee) is so important to me. These days, a cut is an experience in and of itself.
The worst experience I get on the phone… I won’t talk about. I’m sure you have your own worst experience to roll your eyes at.
What is the best virtual receptionist for a small business?
Not every small business, especially today, has the manpower for a full-time receptionist. As a matter of fact, thanks to the perfect storm that was 2020, as many as 51% of U.S. small businesses report that they are understaffed.
If you are losing customers because you lack the staff to professionally attend to your phone, you have at least a few options. The most expensive one is hiring a new receptionist. The others involve virtual receptionists. Here you have two choices - you can hire an answering agency and get the help of a human online receptionist, or you can set up a human-like AI digital receptionist to answer your phones and… you guessed it, delight your customers.
Pros and cons of a human phone answering service:
Pros
- Flexible on the phone
- Sound human
- Can chat about the weather
Cons
- Uninterested in your business
- May lack access to your latest schedules/calendars
- May speak with an accent
- Any change in your service has to be communicated to management, in hopes it gets to the agents in time
- While cheaper than hiring internally, can be pricey
When considering a human answering service, bear in mind that the agents are generally overworked, underpaid, and largely uninterested in the service they are providing. There is a reason the call center industry’s attrition rate hovers around the 120% per year mark.
Pros and cons of an AI phone answering service*:
Pros
- Never has a bad day; always as interested and excited as the minute you launched it
- Actually sounds like a human and understands your customers like a human would
- Through integrations has direct access to your calendars and CRM
- Any changes to the script can be made in minutes
- Always on and always available (even at 4:20am)
- Very inexpensive
- Can transfer to a live person, if needed
Cons
- It will only respond and interact with the customer on pre-programmed pathways
- At the end of the day it is limited by its programming and interactions
- It is not a human
* This section refers to the Dasha AI phone answering app, because our tech is the only one we feel we know well enough to speak of.
How to set up your AI phone answering service using Dasha AI
So you are thinking about an AI digital receptionist to answer phone calls for you. Companies already use Dasha AI to automate their inbound customer service conversations, including AI answering services. The cool thing about our tech is that you can start testing it and using it at low volume with no contracts or payments.
If you are not a programmer, this section may look daunting at first. I understand, because I, too, am not a software developer but a marketer. Yet I found myself writing AI apps with Dasha in under two hours. The language syntax is self-explanatory, and the entire low-code studio is designed in such a way as to let any citizen developer* use it with minimal difficulty.
* A citizen developer is any person who uses low-code/no-code development environments to create simple applications that directly solve business needs, without resorting to the help of a professional software developer.
Here are the steps that we will go through:
- Install Dasha Studio and load an app
- Modify the app
- Test the app
Installing Dasha Studio and loading a pre-built AI virtual receptionist app
Head over to https://nodejs.org/en/download/ and download and install the latest version of Node.js. Node.js is one of the world’s most widely used JavaScript runtime environments. You won’t need to do anything with it directly, but it needs to be installed on your machine for Dasha Studio to run its code.
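If you want to double-check that the installation went through, you can ask Node.js and its bundled package manager, npm, for their version numbers in a terminal:

node -v
npm -v

Each command should print a version number; if it does, you’re good to go.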
Now go to https://code.visualstudio.com and download the latest version of Visual Studio Code for your machine.
Finally, go to http://community.dasha.ai and join Dasha’s developer community. A team member will reach out to you with instructions on how to install Dasha Studio and how to receive your API key. We are issuing API keys manually for now, as we are in closed beta, and we want to be sure each new developer (or citizen developer) has access to our Customer Success engineering team to answer any questions they may have.
Click through to the Dasha Code Sample repository on GitHub. You are looking for the SMB receptionist sample app: https://github.com/dasha-samples/dasha-smb-receptionist-demo
Click the green “Code” button and select “Download ZIP”.
Unzip the archive into a folder. Now, open Visual Studio Code, click File >> Open (or File >> Open Folder, depending on your operating system) and select the folder you just unzipped.
Voila, you have opened the Dasha SMB Receptionist conversational AI app. Now, make sure that you have the Dasha Studio extension installed in your Visual Studio Code. Navigate to Extensions in the left sidebar, type in Dasha Studio, click on Dasha Studio 1.2.3 and hit the Install button. (Since I already have it installed, my button says “Disable”.)
Now, open your terminal. The terminal (sometimes referred to as the command line or console) lets you run tasks and commands without using a graphical user interface (GUI). You will use your terminal to test Dasha apps. In Visual Studio Code, you can open one via Terminal >> New Terminal.
Perfect, now you’re all set and we get to the fun part.
In the terminal type in:
npm i -g "@dasha.ai/cli"
This will install the Dasha CLI (command-line interface). If you are on macOS, you may need to preface the command with sudo - this tells your machine to run the command as a top-level administrator user with full access rights. The full command will be sudo npm i -g "@dasha.ai/cli" and, once you run it, it will ask for your password. This is the password you use to log into your machine.
Perfect. The command-line interface is installed. Now type in the command npm i - this installs the sample app’s dependencies.
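To recap, here is what the terminal sequence looks like - assuming your terminal is open in the folder you unzipped (the Visual Studio Code integrated terminal opens in your project folder by default):

# install the Dasha command-line interface globally (prefix with sudo on macOS if you hit a permissions error)
npm i -g "@dasha.ai/cli"

# install the sample app's dependencies, as listed in its package.json
npm i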
You’re all set. Now let’s figure out how the whole thing works. I promise - it’s not too complex.
Understanding the DashaScript code and making changes to the AI virtual receptionist app
First - click on > app in the left-hand menu and double-click on main.dsl and intents.json. These two are the most important files you will work with. Main.dsl is literally the conversation script. Intents.json lets you train the neural network on which customer responses signify which intents.
Now, if you click on main.dsl, you will see a little icon off to the right that looks like a dialogue pathway. Click it, and you should see a visual map of the conversation.
This map is a visualized dialogue pathway of the AI receptionist app. Here we only see the high-level nodes. To get into the nitty-gritty of their behavior, we need to look at the code in main.dsl. Do refer back to the map as you study the application code in main.dsl; it will help you better understand the flow.
Let me help you to understand what is happening in the code.
Lines 7-16
start node root
{
    do
    {
        #connectSafe($phone);
        #waitForSpeech(1000);
        #sayText("Church's Barbershop and Sodapop! How can I help you?");
        wait *;
    }
}
This is the root node, the first node, where the conversation begins. Once the connection to the specified phone number is safely established, the AI receptionist waits up to 1000 milliseconds (1 second) for the customer to speak. Whether it hears speech within that second or silence at the end of it, it says "Church's Barbershop and Sodapop! How can I help you?" Then it waits for a response.
This is a very straightforward sample app and the AI is really expecting a single type of response back - the user wants to come in for a hair appointment.
Let’s navigate over to the intents.json file to understand what types of interactions our AI is prepared for.
There are two major parts to this file:
"includes": {
}
This section holds the phrases that elicit a response from the AI - in other words, phrases that should trigger an intent.
"excludes": {
}
This section holds the phrases that the AI ignores - phrases that should not trigger the intent.
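To make the layout concrete, here is a simplified sketch of how the file is organized. The structure follows the snippets we walk through below, but the exact phrases and the bye/excludes entries are illustrative, not a copy of the real file:

{
    "includes": {
        "schedule_haircut": [
            "I need a haircut",
            "schedule an appointment"
        ],
        "bye": [
            "goodbye, thank you"
        ]
    },
    "excludes": {
        "schedule_haircut": [
            "I do not need a haircut"
        ]
    }
}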
We have the following intents in the file:
"schedule_haircut"
"cancel_appt"
"bye"
"monday"
"tuesday”
"wednesday"
"thursday"
"friday"
"saturday"
"sunday"
As you can guess, the intent cancel_appt tells the phone answering software that the customer wants to cancel their appointment, bye tells the AI that the customer wants to end the conversation, the days of the week tell the AI when the user wants to come in, and schedule_haircut tells the AI phone answering app that the customer wants to schedule a haircut.
Let’s look at that last one to understand how the intent-sorting neural network is programmed. You, as the user, need to specify a few variations of the phrase that the customer might use to convey their intention to the AI. In this case, we have specified six ways of saying the same thing. Most of the time, this is enough data to train the neural network to recognize the many variations the customer might use when requesting the same thing.
"schedule_haircut": [
"I need a haircut",
"I want my hair cut",
"schedule haircut",
"can I have a haircut",
"schedule an appointment",
"haircut"
],
Let’s try and add a new intent to the file. The AI virtual receptionist introduced itself as “Church's Barbershop and Sodapop.” How about we let the customer ask what sodapop is available on tap?
To do this, we will need to copy one of the existing intents and paste it right below. Then we need to change the name of the intent to soda_on_tap and add a few example phrases for the neural network to train on.
Here is the resulting code:
"soda_on_tap": [
"what soda do you have?",
"what sodas do you have on tap?",
"can I get a soda?",
"what about soda",
"soda?"
],
Now hit Ctrl+S or Cmd+S to save the file. (Your Visual Studio Code likely has auto save enabled; still, we want to be on the safe side.)
Well done - you have created your first intent. In a bit, you will instruct the Dasha AI Platform to train a neural network to recognize it. How cool is that?
Anyway, now we need to use these intents in the course of the conversation. Our conversation structure (the script) is kept in the main.dsl file. Click it now.
You already met node root (lines 7-16). Now if you look at lines 18-31, you will see digression schedule_haircut.
As you can probably tell, this piece of code is called upon when the customer wants to schedule a haircut. Yet, you may have noticed it is called a “digression” and not a “node”. What is the difference between a digression and a node? A digression is a node that can be called up by the customer at any point in the conversation. Sort of like when you’re speaking with a friend and, at any point in the conversation, irrespective of what you’re talking about, you could say “oh, by the way, how is the weather where you are?” and your friend will understand the question and respond. By the same token, a digression prepares Dasha AI to answer a given question at any point in the conversation.
This is a very useful function for our purposes. As we saw when we looked at intents.json, the app is currently trained to handle three requests - scheduling a haircut, cancelling an appointment, or ending the conversation (plus recognizing the days of the week). Now that we have added the new intent soda_on_tap, we need to add a new digression to make use of it; the sketch below shows the general shape a digression takes.
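Here is that shape as a schematic sketch, with placeholder names (example_digression, example_intent and next_node are stand-ins, not real nodes from the sample):

digression example_digression
{
    // fires whenever the customer's phrase matches this intent, at any point in the conversation
    conditions {on #messageHasIntent("example_intent");}
    do
    {
        // the action to take: say something, then wait for the customer's reply
        #sayText("Some response text.");
        wait *;
    }
    transitions
    {
        // where to go next, depending on how the customer reacts
        next_node: goto next_node on #messageHasSentiment("positive");
    }
}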
To add ours, we will copy lines 18-31 and paste them into the file. Right below, at line 32, is fine. Great. Now, let’s go through the code line by line.
digression schedule_haircut
{
}
This line specifies that the code that follows within the curly brackets is a digression and names it - in this case, schedule_haircut. Let’s change that name to soda_on_tap, since this is what we called our intent.
conditions {on #messageHasIntent("soda_on_tap");}
do
Here we instruct the AI receptionist app to do a specific action when it recognizes a specific intent. In other words, if the AI app recognizes an intent, this digression is launched and the action is performed.
Since the intent we created earlier is called soda_on_tap, this is what we replace the intent name with. (Note: the digression name and the intent name can be different. I’m using the same name for both to make it easier on us.)
do
{
    #sayText("You want to get a haircut, did I hear you correctly?");
    wait *;
}
Here, we instruct the AI app what action to take. In this case - say a specific text and wait for the customer to respond. Let’s tell the customer about our soda options and ask if they want a haircut.
“We’ve got Dr. Pepper, Coke Zero, Raspberry Sprite and a mystery flavor. You’re welcome to unlimited soda. Would you like to book your haircut appointment now?” You can use this text, or write your own version.
transitions
{
    schedule_haircut_day: goto schedule_haircut_day on #messageHasSentiment("positive");
    this_is_barbershop: goto can_help_then on #messageHasSentiment("negative");
}
We had the AI virtual receptionist ask the customer if they want to go ahead and book their haircut appointment now. In this block of code, we instruct the AI app what to do if the customer says “yes” or “no”. Note that we did not define a “yes” or “no” intent in the intents.json file. That is because positive and negative sentiment is recognized natively by the platform. To call upon it, the function #messageHasSentiment("sentiment"); is used. In this case, if the sentiment is positive, the AI app goes to the node schedule_haircut_day to ask the customer what day they want to come in. If the sentiment is negative, the AI app goes to can_help_then, a node that asks the customer how else it can help.
We will not change anything in this code, because it does exactly what we want it to do.
Here is the code we just created.
digression soda_on_tap
{
    conditions {on #messageHasIntent("soda_on_tap");}
    do
    {
        #sayText("We've got Dr. Pepper, Coke Zero, Raspberry Sprite and a mystery flavor. As you get barbered, you're welcome to unlimited soda. Want to book your haircut now? And before you ask - we do have a restroom.");
        wait *;
    }
    transitions
    {
        schedule_haircut_day: goto schedule_haircut_day on #messageHasSentiment("positive");
        this_is_barbershop: goto can_help_then on #messageHasSentiment("negative");
    }
}
Congratulations - you have successfully created your first Dasha AI app - a virtual receptionist.
Talking to the AI phone answering software
Now, you’ll want to talk to the Dasha AI app you have just created. To speak to the AI, type into the terminal:
npm start phone_number
Where phone_number is your phone number, starting with the country code. For example, for mine it looks like this:
npm start 19733588889
As you are building and testing, you may want to just chat with the AI in text, to save time. To do that, type into the terminal:
npm start chat
Parting words
As a non-developer, I tried my best to make this tutorial as easy to understand as possible. If you want deeper knowledge of building with Dasha AI, check out our documentation at docs.dasha.ai, or keep an eye peeled for content here on the blog or on our Twitter.
If you found this tutorial helpful - let me know: arthur@dasha.ai. If you didn’t find it helpful - all the more reason to let me know.
Good luck and godspeed.