Conversational AI: the technology that has made our lives so much easier

Aliya Grig
10 min read · Sep 1, 2022

Coaching in general has been shown to produce significant improvements in individuals and organizations, while AI coaching has only just started to make its way into the world. Although an AI fully identical to human intelligence is hard to imagine in the near future, its impact on modern society is already significant.

According to IBM, conversational AI refers to technologies such as chatbots or virtual agents that users can talk to. These systems use large volumes of data, machine learning, and natural language processing to imitate human interactions, recognize speech and text inputs, and translate their meanings across various languages.
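To make the "recognize text inputs" part of that definition concrete, here is a minimal, hypothetical sketch of intent recognition, one small piece of a natural-language-understanding pipeline. The intents, example utterances, and model choice are illustrative assumptions, not taken from IBM or from any project discussed below:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: a few example utterances per intent.
utterances = [
    "I want to set a goal for this month",
    "help me plan my objectives",
    "I feel stressed about work",
    "I'm anxious and can't focus",
    "when is my next session",
    "schedule a coaching call for tomorrow",
]
intents = [
    "goal_setting", "goal_setting",
    "wellbeing_checkin", "wellbeing_checkin",
    "scheduling", "scheduling",
]

# TF-IDF features plus logistic regression: a bare-bones intent classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["I'd like to define a new goal"]))  # likely ['goal_setting']
```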

This essay provides an overview of several research efforts and projects on conversational AI. We will cover perspectives on chatbots, the application of conversational AI in organizational coaching, and three main projects: conversational AI for sleep coaching programs, CoachAI, and Evolwe’s Nova chatbot.

AI coaches

The most common form in which conversational AI appears is, of course, a text-based chatbot. This article will mainly focus on regular conversational chatbots that perform the function of a coach. Currently, modeling AI coaches on human coaches is the most widespread approach, although perhaps not the most obvious one. Since this area of research is still new, it is reasonable to start developing AI coaches based on real human coaches, as we have no empirical evidence to the contrary.

When creating a chatbot coach, a skills guide for human coaches can be used as a reference. According to the International Coaching Federation (ICF), there are eleven coach competencies, including meeting ethical guidelines and professional standards; establishing the coaching agreement; establishing trust and intimacy with the client; coaching presence; active listening; powerful questioning; direct communication; creating awareness; designing actions; planning and goal setting; and managing progress and accountability. Naturally, current progress on artificial coaches does not cover all of these competencies fully. However, Heather Knight has put forward several aspects of a realistically humanlike chatbot coach.

Nicola Strong and Nicky H.D. Terblanche combined these requirements and created a list of characteristics, which mainly focuses on how the coachee perceives the coach:

  • Visually familiar
  • Natural movement
  • Intelligent conversation that indicates common sense
  • Comprehension (how they sense us)
  • Personality
  • Security and safety of coachee’s data
  • Functionality and level of autonomy
  • Ethical and moral grounding
  • Empathy, intuition, and imagination
  • Presence

As you can probably tell, there is a long list of demands that require consideration when designing a successful bot.

From a visual perspective, there is a tricky problem known as the uncanny valley: while bots aim to become as humanlike as possible, an AI that is almost identical to a human, but not quite, makes people feel extremely uncomfortable. At the same time, chatbots still need to earn our deep trust, so developers have to find the right balance between visual familiarity and comfort.

Logically, chatbots are characterized by their conversational abilities. Those that hold closed, short, retrieval-based conversations (using pre-defined scripts) are called simple chatbots; those that hold open, long, generative conversations (using machine learning to generate sentences) are called complex chatbots. To faithfully imitate human coaches, bots should aim to become as complex as possible (a minimal sketch contrasting the two approaches follows the list below). To create one, the criteria suggested by Kamphorst may be useful:

  1. Social ability, or the ability to bond with the coachee.
  2. Trust and credibility: maintaining trust throughout the session period.
  3. Context awareness: personalization.
  4. Proactivity: encouraging the client to talk.
  5. Theoretical underpinning: using a reliable base of knowledge.
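To make the distinction between simple (retrieval-based) and complex (generative) chatbots concrete, here is a minimal, hypothetical sketch; the intents, scripts, and the placeholder generate_fn are illustrative and not taken from any of the projects discussed:

```python
import random

# --- Simple (retrieval-based) chatbot: replies come from pre-defined scripts. ---
SCRIPTS = {
    "greeting": ["Hello! What would you like to work on today?"],
    "goal_setting": ["What outcome would make this week a success for you?"],
    "fallback": ["Could you tell me more about that?"],
}

def retrieval_reply(intent: str) -> str:
    """Return a canned response for a recognized intent."""
    return random.choice(SCRIPTS.get(intent, SCRIPTS["fallback"]))

# --- Complex (generative) chatbot: replies are written by a language model. ---
def generative_reply(history: list, generate_fn) -> str:
    """Build a prompt from the conversation so far and let a model write the reply.

    generate_fn stands in for any text-generation model (e.g. a Transformer);
    it is a placeholder, not a call to a specific library.
    """
    prompt = "\n".join(history) + "\nCoach:"
    return generate_fn(prompt)

print(retrieval_reply("greeting"))
```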

The Designing AI Coach (DAIC) framework

In his paper “A design framework to create Artificial Intelligence Coaches”, Nicky H.D. Terblanche proposes a design specifically for organizational coaching. The DAIC framework has two facets that map effective human coaching onto chatbot design.

The DAIC framework

Starting with the first facet, effective human coaching, the framework underlines four main principles and elaborates on them further: broadly agreed-upon elements of human-coach efficacy; the use of recognized theoretical models; ethical conduct; and a narrow coaching focus. According to Bozer and Jones, the factors that contribute to successful organizational coaching are self-efficacy, coachee motivation, goal orientation, trust, interpersonal attraction, feedback intervention, and supervisory support.

In order to build a solid coach-coachee relationship, trust, empathy, and transparency need to be developed. Trustworthiness, which consists of ability, benevolence, and integrity, largely influences the coachee’s commitment to the coaching process; predictability and reliability also contribute to successful coaching. The second principle requires that the scientific approaches used by AI coaches be theoretically sound and verified. The third one is, once again, ethics: users need to be assured that chatbots are unbiased and safe. Lastly, the researchers point out that typical Weak AI systems can focus on only one task, while traditional coaching sessions may cover multiple issues at once.

A crucial aspect worth considering when building a bot is ethics, which may soon become one of the main areas of AI research. The problem includes societal biases (racism, sexism); data privacy and protection; liability and the division of responsibility; and autonomy. To design an architecture with longevity and trustworthiness, one has to prioritize ethical aspects. Let us break down the main issues:

  • Machine learning operates on large amounts of data, which is not necessarily neutral and unprejudiced. Designers need to make sure that, during the learning process, their AI does not acquire biased information.
  • Since most bots need to store, process, and analyze private information, users might find this disturbing and worrying, so it has to be stated clearly how the information is used and by whom (a minimal redaction sketch follows this list).
  • A responsible coach needs to stay alert and reliable, recognizing when a client may need a different specialist than a coach.
  • Users’ decision-making will be affected by the chatbot’s words and actions; users can even be influenced and manipulated to a certain extent.
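As a small illustration of the data-privacy point above, personal identifiers can be stripped from a message before it is logged or used for training. This is only a sketch with ad-hoc regular expressions; real PII handling needs far more than this:

```python
import re

# Ad-hoc patterns for two obvious identifiers; real systems need much broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s\-()]{7,}\d")

def redact(message: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    message = EMAIL.sub("[email]", message)
    message = PHONE.sub("[phone]", message)
    return message

print(redact("Reach me at jane.doe@example.com or +1 555 123 4567 after 6pm."))
# -> Reach me at [email] or [phone] after 6pm.
```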

The second facet brings us to the chatbot design itself. The five principles included in the framework are:

  • Level of human likeness: even though the uncanny valley is definitely not an easy problem to solve, some human-like actions can be very beneficial. For example, having a name, using informal language, and conversing in the first person. These characteristics make chatbots feel more relatable and less eerie.
  • Managing capability expectations: it is important to state the bot’s abilities and limitations clearly and explain that AI bots and human coaches are not the same.
  • Changing behavior: it should be clear to a user that AI is constantly developing and learning, hence some of the interactions may change.
  • Reliability: as was mentioned previously, bots are still learning, so making mistakes is completely normal, which can be a helpful reminder to users.
  • Disclosure: make it clear that AI coaches are not human and that their capabilities are quite different (a minimal illustration of this and of expectation management follows this list).
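One lightweight way to act on the disclosure and expectation-management principles is to bake them into a fixed session-opening message. The profile fields and wording below are purely illustrative assumptions, not part of the DAIC framework itself:

```python
from dataclasses import dataclass

@dataclass
class BotProfile:
    """Illustrative session-opening configuration for disclosure and expectations."""
    name: str
    supported_topics: tuple
    disclosure: str

demo_bot = BotProfile(
    name="Coach Bot",
    supported_topics=("goal setting", "progress check-ins"),
    disclosure=(
        "I'm an AI coach, not a human. I'm still learning, so I may make "
        "mistakes, and my abilities differ from those of a human coach."
    ),
)

def opening_message(bot: BotProfile) -> str:
    # State what the bot is and what it can (and cannot) do before coaching starts.
    topics = " and ".join(bot.supported_topics)
    return f"Hi, I'm {bot.name}. {bot.disclosure} Today we can work on {topics}."

print(opening_message(demo_bot))
```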

Nicky H.D. Terblanche then proposed the full organizational DAIC framework based on the two facets. He pointed out that even though human-to-human coaching may focus on various topics simultaneously, it is better for now for Weak AI to focus on one outcome only.

The relationship aspects mentioned previously regarding chatbot design are elaborated on in Terblanche’s table “Designing chatbots to support strong coach-coachee relationships”.

To conclude, Terblanche mentions some of the advantages of AI coaches, such as low cost (especially for large-scale studies, since no recruitment is needed), the absence of access barriers, and the convenience of AI apps and websites. He also raises several questions that remain open, for example:

  • What influences trust in AI?
  • How does the personality type of a client influence trust?
  • How humanlike should a bot be? Should a chatbot have a gender?
  • How should the initial conversation be structured to build trust?
  • To what extent should a bot acknowledge its limitations?

Sleep coaching program

Heereen Shim developed an interesting model of an AI coach that helps with insomnia and sleep patterns in general. First, the work gives a brief overview of an existing sleep treatment, CBT-I, which focuses on the relationship between how we behave, how we think, and how we sleep. This type of treatment can be delivered by a human expert or via computers. Conversations play a huge role in in-person treatment: they allow the therapist to determine whether the patient is appropriate for CBT-I. Throughout the treatment period, it is important that the therapist monitors progress, identifies the patient’s difficulties, provides personalized support, and encourages the patient to complete the treatment. Computerized treatment is a bit different, and some consider it more convenient: in some tools, patients provide their personal information and report progress by completing questions and questionnaires via the internet. However, these tools still lack an interactive conversational feature with automated support and feedback.

Shim believes that “conversational AI will enable natural interaction between users and a system” and raises three main questions to achieve it:

  • How to triage via conversation, together with the completed questionnaires?
  • How to monitor progress during the sleep coaching program?
  • How to understand a user-specific situation for a personalized coaching program?

The sub-tasks outlined for the first question are complaint assessment, follow-up questions, and triage results. For the second question, a narrative sleep diary written in free text is planned to be used. The last question will be addressed by building a model that performs aspect-based sentiment analysis (ABSA) on a user’s review comments, in order to provide personalized support and behavior-change programs.
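Shim’s third sub-task, aspect-based sentiment analysis of free-text comments, could be prototyped crudely as below. The aspect keywords, the example review, and the sentence-level shortcut are illustrative assumptions; the actual project builds a dedicated ABSA model:

```python
from transformers import pipeline

# Off-the-shelf sentence-level sentiment model, used as a stand-in for a real ABSA model.
sentiment = pipeline("sentiment-analysis")

# Hypothetical aspects a sleep coach might care about, with trigger keywords.
ASPECTS = {
    "sleep quality": ["sleep", "slept", "rest"],
    "exercise": ["exercise", "workout", "walk"],
}

def aspect_sentiments(review: str) -> dict:
    """Assign a sentiment label to every aspect mentioned in the review."""
    results = {}
    for sentence in review.split("."):
        for aspect, keywords in ASPECTS.items():
            if any(k in sentence.lower() for k in keywords):
                results[aspect] = sentiment(sentence.strip())[0]["label"]
    return results

print(aspect_sentiments(
    "I slept much better this week. The evening walks still feel like a chore."
))
# e.g. {'sleep quality': 'POSITIVE', 'exercise': 'NEGATIVE'}
```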

The proposed project is expected to contribute to natural language processing (NLP) and AI research. Besides, it is bound to provide a more accessible and affordable sleep treatment solution and an automated analytic system to lessen the workload of human experts.

CoachAI: the Health Coaching Platform

The next team, consisting of Ahmed Fadhil, Gianluca Schiavo, and Yunlong Wang, designed a platform specifically for improving people’s lifestyles. The platform, called CoachAI, is a “conversational agent-assisted health coaching system to support health intervention delivery to individuals and groups.” It is powered by a conversational agent and a supervised machine learning model that clusters users based on their physical exercise level. Like the previously discussed project, the platform helps decrease caregivers’ workload by providing them with insights about their users; the idea has always been not to replace human agents, but to complement and assist them. The research question is: “How can different individuals’ motivational factors be integrated into the knowledge-base of CoachAI to generate support tailored to their preferences?” The team created two complementary layers, the coaching dashboard and the conversational agent, but here we focus on the conversational AI.
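The paper describes grouping users by their physical-exercise level; the sketch below shows what such grouping could look like in the simplest case. The feature choice, numbers, and use of k-means are assumptions for illustration, not CoachAI’s actual model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical weekly activity features per user: [active minutes per day, sessions per week].
X = np.array([
    [10, 1], [15, 2], [12, 1],   # mostly sedentary
    [35, 3], [40, 4], [38, 3],   # moderately active
    [70, 6], [65, 5], [80, 6],   # highly active
])

# Standardize the features, then group users into three activity-level clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
print(labels)  # three groups a coaching dashboard could address differently
```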

The conversational agent is based on text conversation and simple graphical elements. The chatbot essentially collects information and feedback: it handles assigned activities, per-activity user feedback, exercises, private messages from the coach, and health-intervention questionnaires.

Several statistical analyses were performed. One revealed that participants’ preference for certain coaches differed depending on the coaching style.

The study also revealed no change in individual intention (physical activity, healthy diet, and mental wellness) throughout the three weeks of the research.

The proposed platform focused on increasing the time efficiency of the caregiver and decreasing their fatigue. The team provided an interesting channel for caregiver-patient interactions.

Evolwe AI Algorithm

Evolwe’s coach for personal growth, Nova, is complex in its technological features:

  • Deep neural network (DNN): Evolwe’s DNN is trained to recognize a human’s emotional and psychological state. DNNs can model complex non-linear relationships and are typically feedforward networks in which data flows from the input layer to the output layer without looping back.
  • Artificial empathy (AE), also known as computational empathy
  • Natural language processing (NLP), including word embeddings, named entity recognition (NER), sentiment analysis, terminology extraction, coreference resolution, discourse analysis, natural language understanding (NLU), and question answering

Other unique features include the following:

  • A new mathematical model for AI based on the human brain, cognitive psychology, and consciousness
  • An intelligent chatbot that speaks like a human: empathic, proactive, and consistent
  • A chatbot with emotional skills and styles; our chatbot is completely generative, with neural networks generating its answers
  • AI with personality: the technology was built on the basis of neurophysiology, analysis of thinking, psychology, how human consciousness and the subconscious function, and emotional intelligence
  • Out-of-the-box and customized dialogue solutions designed to automate processes in any industry; they can be integrated into messengers and websites, and the virtual assistant with embedded skill sets is easy to customize and quick to integrate
  • An API enabling quick connectivity to any form factor and smart speakers
  • The use of state-of-the-art Transformer neural networks: GPT-2, GPT-3, BERT, T5, ruGPT-3, XLNet, etc.
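Since the list above names generative Transformer models, here is a minimal sketch of how a fully generative reply can be produced with an open GPT-2 checkpoint via the Hugging Face transformers library. This is not Evolwe’s Nova pipeline; the prompt and decoding settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small open checkpoint stands in for the production models listed above.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "User: I keep postponing my morning runs.\nCoach:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; the reply is generated, not retrieved from scripts.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```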

Limitations and challenges

The four primary challenges of modern chatbots described by Britz are incorporating context (both linguistic and physical), coherent personality (consistency in answers), evaluation of models, and intention and diversity (the lack of context-specific answers). These points require further consideration and development.
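The first of these challenges, incorporating linguistic context, usually starts with something as plain as carrying recent turns of the conversation into the next prompt. The sketch below is a deliberately simple illustration of that idea; physical context (time of day, device, previous sessions) is much harder and is not covered here:

```python
from collections import deque

# Keep only the most recent turns so the prompt stays within a model's input limit.
MAX_TURNS = 6
history = deque(maxlen=MAX_TURNS)

def build_prompt(history: deque, user_message: str) -> str:
    """Append the user's message and turn the rolling history into a prompt."""
    history.append(f"User: {user_message}")
    return "\n".join(history) + "\nCoach:"

def record_reply(history: deque, reply: str) -> None:
    """Store the coach's reply so the next turn can see it."""
    history.append(f"Coach: {reply}")

print(build_prompt(history, "I missed two workouts this week."))
```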

In this article, we’ve discussed some of the latest conversational AI projects and evaluated them thoroughly. Although current conversational AI is not advanced enough to replace human coaches, it can still provide an affordable, convenient, and theoretically verified service in various fields. And since ML is based on constant learning, AI coaches will continue to improve gradually.


Aliya Grig

Visionary and Futurist. AI expert. Founder, CEO Evolwe AI — the first conscious AI. Founder of the Cosmos City