How to Build the Perfect Bot
Estimates place the number of contact center agents in the United States at roughly 1.8 million, and the industry’s turnover rate averages between 30 percent and 45 percent.
Those statistics are concerning, and they provide the basis for many industry experts to maintain that chatbots are essential for customer service today. There simply aren’t enough humans to handle the millions of interactions, they contend.
That’s why Grand View Research expects the global chatbot market, which stood at $5.1 billion at the end of 2022, to grow at a 23.3 percent compound annual growth rate through 2030.
Bots are nothing more than automation, so they can perform tasks faster than a human. But if not designed properly, they can also make mistakes, like providing incorrect answers, at a much faster rate. So to best leverage a bot, you must design it correctly—and that’s not an easy task.
The first step is to start with the final outcome in mind, says Frank Schneider, a vice president and AI evangelist at Verint. “Why are you buying a bot or building a bot? What are you trying to elicit or trying to get it to do? That would inform the approach to building it.”
“Today’s chatbots need to be able to meet the demands of modern customers,” adds Raghu Ravinutala, CEO and cofounder of Yellow.ai, a provider of conversational artificial intelligence and automation solutions. “Convenience and speed are critical, and we’re seeing customers favor brands that are able to anticipate their needs and provide personalized recommendations and support.”
When building these chatbots, companies need to keep the latest technology top of mind, especially generative and conversational AI, Ravinutala adds. Furthermore, concepts like performance, scalability, ease of integration, data privacy compliance, and adherence to global standards need to be at the core of development.
Building a bot also requires a sufficient appetite for doing so, Schneider adds. “Seven to 10 years ago, people had a bot just to have a bot.”
Building bots used to be an arduous process, Schneider recalls. Several people with some technical expertise would collaborate, entering details into spreadsheets, then creating widgets to receive various inputs and provide answers.
Simple question-and-answer bots became commonplace. As natural language processing improved and demand for bots increased, bot-building software started hitting the market. These solutions enabled people with little software expertise to design bots themselves rather than going to third-party bot developers, according to Schneider.
Once those foundational questions are answered, the next step is to determine which of the readily available bot-building kits enables the company to build the type of bot it wants, Schneider says. Some kits are low-code, with intuitive instructions; others are no-code; and still others require a large amount of coding and programming.
Companies just looking for a simple FAQ bot should keep it simple and not try to make it perform complex tasks, according to Schneider. However, he adds, companies will eventually want to move beyond simple bots to ones that can handle more complex interactions.
Building a conversational chatbot involves the following five steps to ensure its success, according to experts:
- Precisely define the chatbot's goal, clearly stating its capabilities while also specifying what it should not do to avoid scope creep.
- Create a persona for the chatbot that aligns with brand identity, including language, tone, and humor. Schneider recommends considering a persona that aligns with prospect and customer expectations and the brand identity.
- Design the conversation flow, akin to a flexible family tree with multiple branches. This approach enables users to navigate various paths toward their intended outcome, maintaining project focus.
- Create the chatbot script, which involves crafting phrases and responses for the chatbot so that it can interact with users naturally. Always aim for a conversational style and avoid long-winded text blocks, advises Ivan Ostojic, chief business officer of Infobip, a cloud communications platform provider. "Ensure each branch in the conversation flow ends logically and doesn’t leave users with unanswered questions. When intent is unclear, ask for clarification before proceeding to the next step, and always offer alternatives if the chatbot can't fulfill a user's request," he says.
- Test to confirm that the bot is performing as expected. "The testing phase is crucial and should not be rushed. It’s essential to test every possible conversation flow to verify its logic and alignment with the chatbot’s goal,” Ostojic says. “While specialized tools can automate some testing, involving real people from diverse backgrounds and age groups is invaluable. They can identify areas that require modification or additional phrases, ensuring the chatbot functions smoothly and effectively.”
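The flow-and-script steps above can be sketched as a small dialog tree. This is an illustrative sketch only, not any vendor's framework; the node names and phrases are hypothetical.

```python
# Minimal sketch of a branching conversation flow (hypothetical nodes/phrases).
# Each node carries the bot's line plus branches keyed by user intent;
# unknown intents trigger a clarification prompt instead of a dead end,
# as Ostojic advises.

FLOW = {
    "start": {
        "say": "Hi! I can help with orders or returns. Which do you need?",
        "branches": {"orders": "order_status", "returns": "return_policy"},
    },
    "order_status": {
        "say": "Sure - what's your order number?",
        "branches": {},
    },
    "return_policy": {
        "say": "Items can be returned within 30 days. Want a return label?",
        "branches": {},
    },
}

def respond(node: str, user_intent: str) -> tuple[str, str]:
    """Advance the flow; ask for clarification when the intent is unclear."""
    branches = FLOW[node]["branches"]
    if user_intent in branches:
        nxt = branches[user_intent]
        return nxt, FLOW[nxt]["say"]
    # Offer alternatives rather than leaving the user stuck.
    options = " or ".join(branches) if branches else "something else"
    return node, f"Sorry, I didn't catch that. Did you mean {options}?"
```

Here, `respond("start", "returns")` advances to the return-policy branch, while an unrecognized intent re-prompts with the available options, so no branch leaves the user with an unanswered question.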
Important components in the development process are natural language processing and speech analytics that facilitate seamless services across the customer experience journey, Ravinutala adds. With these capabilities, the chatbot can then convert vocal conversations into text, enabling precise analyses of client inquiries and sentiments.
“To support continual customization and evolution of speech analytics performance, a rich data layer should be built,” Ravinutala says. “This will enable enterprises to modify the system to fit their business requirements. As a result, the analytics solution is kept flexible and in line with shifting needs.”
Next, a continuous feedback loop system must be put into place, Ravinutala continues. This enables ongoing iteration and learning from speech analysis. It’s vital to steer clear of several frequent errors while adopting speech analytics. One such error is aiming to instantly incorporate or master every feature. The best way to implement analytics is in phases and iteratively, progressively increasing performance capacity and raising levels of productivity.
Though building a bot today doesn’t require as much technical expertise as it once did, other skills are still required, according to Schneider. “You can design and curate a customer experience with a design-time solution that’s really good.”
For proper design, a customer experience practitioner or someone with a contact center background should determine which problem(s) the chatbot should solve. A bot that attempts to handle too many issues will respond slowly, so scope is another design consideration, according to Schneider. To keep runtime performance under control, look for a bot development toolkit that supports both designing and operating the bot; some vendors specialize in runtime.
GENERATIVE AI'S IMPACT
Since OpenAI ushered in the generative AI era a year ago with its release of ChatGPT, generative AI's large language models (LLMs) have impacted all types of business, including the design of bots.
"When a bot is based on an LLM, it is equipped with a vast repository of knowledge and language understanding," says Izzy Traub, CEO of Inspira.ai, a provider of productivity optimization software for digitized workflows. "However, the success of such a bot hinges on channeling this potential into a specific function. By homing in on one primary service—be it providing nuanced customer service, personalized content curation, or targeted informational responses—the bot can leverage the LLM's capabilities to offer unparalleled expertise in its domain."
As with bot design without LLM, specialization is paramount, Traub adds. "A bot designed to excel in a singular task can navigate the complexities of language more effectively and provide more coherent and contextually relevant interactions. This focus enables the bot to harness the underlying LLM’s sophisticated algorithms to deliver concise and accurate responses tailored to the user’s immediate needs."
An engineer-down approach is crucial when working with LLMs, Traub says. "Start with the most basic iteration of the bot and then incrementally add complexity. This process ensures that each feature introduced is essential and directly enhances the bot’s core purpose. The result is a streamlined AI bot that provides a frictionless experience, free from the clutter of unnecessary functionalities."
Nagendra Kumar, cofounder and chief technology officer of Gleen, a provider of generative AI chatbots, offers the following steps for using LLMs in bot development:
- Collect your relevant knowledge base.
- Use an LLM to generate embeddings for all documents in your knowledge base.
- Capture the customer’s query and generate embeddings for those queries.
- Use the embeddings to perform a similarity search across your knowledge base to find the documents that are most relevant to the queries.
- Send both the queries and the most relevant documents to the LLM, then let the LLM generate a response.
- Analyze the bot's response for accuracy and to ensure the bot isn’t hallucinating. Though preventing hallucination is only a single step, it’s also the most difficult.
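Kumar's steps map onto a standard retrieval-augmented pipeline. A minimal sketch follows, using a toy bag-of-words embedder and cosine similarity as a stand-in for real LLM embeddings; the knowledge base and function names are hypothetical.

```python
import math
from collections import Counter

# Toy stand-in for LLM embeddings: bag-of-words term counts. A real
# pipeline would call an embedding model here instead (step 2).
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

KNOWLEDGE_BASE = [  # step 1: collect relevant documents (hypothetical)
    "Refunds are issued within 5 business days of receiving the return.",
    "Shipping is free on orders over 50 dollars.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Steps 3-4: embed the query and run a similarity search."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Step 5: combine the query with the most relevant documents.
    A real system would send this prompt to the LLM for generation."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Grounding the prompt in retrieved documents is also the usual first defense for step 6: a model constrained to answer from supplied context has less room to hallucinate, though the response should still be checked.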
LLMs cannot cite their sources accurately, as they generate text through extrapolation from the provided prompt, according to executives at Master.of.code, a developer of AI-powered conversational solutions. "This extrapolation can result in hallucinated outputs that do not align with the training data. For instance, in a research study focused on the law sector, 30 percent of trials demonstrated a non-verbatim match with unrelated intent, highlighting the model’s hallucinative tendencies," the firm's executives wrote in a blog post recently.
"AI has enormous promise, but trust concerns must also be taken seriously," adds Javed Hasan, CEO and cofounder of Lineaje, a provider of supply chain automation solutions. "While building and enhancing our own Lineaje.AI products, we learned that AI and machine learning technologies are probabilistic, and large language models can unfortunately make up things. Enterprise users simply don’t expect that. The key to building the perfect bot is making it trustworthy. Restricting bots to a subset of known correct answers is key."
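Hasan's "subset of known correct answers" idea can be sketched as a confidence-gated lookup over vetted responses. The topics, answers, and threshold below are hypothetical; the point is that the bot never generates freely, and hands off when nothing vetted matches.

```python
# Sketch of restricting a bot to vetted answers (hypothetical data):
# the bot only ever returns curated responses, and escalates to a human
# when no vetted topic matches the question well enough.

VETTED_ANSWERS = {
    "store hours": "We're open 9am-6pm, Monday through Saturday.",
    "return window": "Returns are accepted within 30 days of purchase.",
}
FALLBACK = "I'm not sure - let me connect you with a human agent."

def trusted_reply(question: str, threshold: float = 0.5) -> str:
    """Pick the vetted topic sharing the most words with the question."""
    q_words = set(question.lower().split())
    best_topic, best_score = None, 0.0
    for topic in VETTED_ANSWERS:
        t_words = set(topic.split())
        score = len(q_words & t_words) / len(t_words)  # fraction of topic hit
        if score > best_score:
            best_topic, best_score = topic, score
    # Below the confidence threshold, never guess: hand off instead.
    return VETTED_ANSWERS[best_topic] if best_score >= threshold else FALLBACK
```

The design choice here is the fallback: an answer the bot cannot ground in the vetted set is treated as untrustworthy by default, which trades some coverage for the reliability enterprise users expect.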
Even after following all of the above steps, evaluating the ongoing performance of the bot is necessary to ensure that it performs and continues to perform as expected, making improvements along the way as new customer service needs arise and as the underlying technology evolves.
"The perfect bot doesn't exist, and like most perfect ideas, probably never will, but our ability to create bots that can hold productive conversations with human beings is improving every day," says Joe Bradley, chief scientist at conversational AI technology provider LivePerson. "This means until you have a fully general AI (which we don’t), you still need to hone and continually teach bots how to improve on the specific real problems you’re trying to solve, as well as which conversational patterns lead to positive or negative outcomes."
Ask Astro Bot’s Story
The development of Ask Astro, the generative AI-based bot for data orchestration company Astronomer, demonstrates the development steps for such a bot.
Astronomer developed Ask Astro, its intelligent chatbot powered by LLMs, following the Emerging Architectures for LLM Applications framework developed by software venture capital firm Andreessen Horowitz. Ask Astro started as a combination of LangChain, a framework for developing applications powered by language models; Facebook AI Similarity Search (FAISS), a library that allows developers to search for embeddings of multimedia documents that are similar to each other; and a Slack Bot front end. To operationalize it, the first step was to separate the back-end indexing operations from the front-end querying operations.
Like most enterprise applications, LLM-powered chatbots start from a desire to leverage a wealth of knowledge in domain-specific documents, says Julian LaNeve, Astronomer's chief technology officer. While businesses everywhere are looking to leverage LLMs to accelerate growth and improve customer experiences, a few key steps must be considered when developing chatbots for enterprise use, he says. They include the following:
- build the foundation;
- select a vector store for scalability and reliability;
- optimize schema design for vector databases;
- select a model for ingestion;
- select an effective chunking strategy for documents;
- build an experimentation framework;
- schedule document ingestion.
"For the first iteration of Ask Astro, we selected Weaviate because of its open-core-plus hosted structure, and we created a Weaviate provider for Apache Airflow that simplifies management while enabling clean, readable, and maintainable data pipelines," LaNeve says.
Large documents need to be split into chunks to fit within the context window of the selected model, LaNeve continues. "This is perhaps the most important consideration for retrieval performance and, like schema design, is very use-case-dependent. The Apache Airflow Taskflow API, along with tools such as LangChain, Haystack, and LlamaIndex, make it very easy to use best-of-breed logic for splitting documents."
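A chunking strategy like the one LaNeve describes can be as simple as a sliding window over words with overlap, so passages that straddle a boundary still appear whole in at least one chunk. This is a library-free sketch; the chunk sizes are illustrative, not Astronomer's actual settings.

```python
def chunk_document(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into word-based chunks of up to chunk_size words,
    with each chunk overlapping the previous one by `overlap` words."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final window reached the end; avoid tiny trailing chunks
    return chunks
```

In practice the splitting logic is use-case-dependent, as LaNeve notes; splitting on sentence or section boundaries rather than raw word counts often retrieves better, which is what libraries like LangChain, Haystack, and LlamaIndex provide out of the box.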
LLMs, like all machine learning and AI, are "living software" that morphs over time, LaNeve explains. With LLMs, the pace of innovation is very fast right now, and retrieval augmented generation (RAG) applications need not only high-quality processes and frameworks for automated, audited, and scalable ingest but also flexibility for efficient experimentation.
"Building your LLM application with a flexible orchestration tool like Apache Airflow using the guidelines above will allow you to continually adapt your LLM to ever-changing business requirements and new technologies," LaNeve advises.
Phillip Britt is a freelance writer based in the Chicago area. He can be reached at firstname.lastname@example.org.