AI & Customer Service Survey Shows Higher Expectations But Lower Satisfaction

Applause, a provider of testing and digital quality solutions, recently surveyed 6,680 respondents about their experiences with artificial intelligence (AI) in conversational applications such as chatbots, interactive voice response (IVR) systems, and other voice assistants.

Conducted in February 2022, the survey found that expectations and experiences were closely aligned between participants in the U.S. and across Europe. According to the survey, consumers expect apps and websites to provide AI-driven customer service solutions, but they are not always satisfied with the user experience.

For example, 93% of respondents expect chat functionality on a website, but only two-thirds said they were somewhat satisfied or extremely satisfied with the experience. More than half of respondents in the U.S. (51%) and Europe (57%) said they preferred to wait for a human agent when calling a company for customer support.

“The fact that more than half of respondents preferred to wait for a human agent instead of using a chatbot, IVR, or voice assistant speaks to a potential lack of confidence, which perhaps is based on previous experiences. When a user has a bad digital experience, it is difficult to change that perception. This is a moment when quality can be a real differentiator, separating a brand from its competition. If customers expect these solutions to disappoint, they are predisposed to anticipate failure and quickly lose patience with any alternative that isn’t a human interaction. Therefore, there is tremendous advantage to those who are able to deliver better experiences that can exceed the service level they have been conditioned to expect,” said Luke Damian, Chief Growth Officer for Applause.

User Experience Trails Expectations

  • 93% expect chat functionality on a company’s website or app, but only 63% said they were somewhat satisfied or extremely satisfied with the experience.
  • 89% expect call centres to have IVR systems that greet them, but only 25% prefer immediate access to an automated touch-tone response system, and just 22% prefer an automated virtual service representative that responds to voice commands.
  • 44% always expect mobile apps to have voice assistants or voice search features, while 41% said it depends on the app category.

A single AI application can require tens of thousands of accurate, relevant data artifacts, or more, all of which need to be collected with the application’s specific purpose and needs in mind. Applause leverages a community of more than one million qualified testers worldwide to collect the volume and quality of real-world data needed to train and validate AI algorithms, like those used for IVR or chatbots, and then test the trained systems to ensure they are working as intended.
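
To make that validation step concrete, here is a minimal sketch of scoring a trained chatbot intent classifier against labeled utterances of the kind testers might collect. The classify_intent() function, the intents, and the sample data are illustrative assumptions, not Applause's actual tooling.

```python
# Minimal validation sketch, assuming a hypothetical trained intent classifier
# and a handful of labeled utterances collected from testers. classify_intent()
# and the sample data are illustrative stand-ins, not Applause's actual tooling.

from collections import Counter

# Each artifact pairs a collected utterance with the intent a human labeler assigned.
collected_artifacts = [
    {"utterance": "I want to check my order status", "expected_intent": "order_status"},
    {"utterance": "Where is my package?", "expected_intent": "order_status"},
    {"utterance": "Talk to a real person please", "expected_intent": "human_agent"},
    {"utterance": "Cancel my subscription", "expected_intent": "cancel_account"},
    {"utterance": "I never got my refund", "expected_intent": "refund_status"},
]

def classify_intent(utterance: str) -> str:
    """Hypothetical placeholder for the trained model under test."""
    keywords = {
        "order": "order_status",
        "package": "order_status",
        "person": "human_agent",
        "cancel": "cancel_account",
    }
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    return "fallback"

def evaluate(artifacts):
    """Return overall accuracy and a count of the intents the model misses."""
    misses = Counter()
    correct = 0
    for item in artifacts:
        if classify_intent(item["utterance"]) == item["expected_intent"]:
            correct += 1
        else:
            misses[item["expected_intent"]] += 1
    return correct / len(artifacts), misses

accuracy, misses = evaluate(collected_artifacts)
print(f"Accuracy on collected utterances: {accuracy:.0%}")
print("Most-missed intents:", misses.most_common())
```

In practice, the utterances would come from real users across devices, accents, and phrasings, which is where the intents a model misses most often become visible.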

Bias is a well-known challenge in AI. Algorithms that are trained on too little data, or on data collected from a group of people that is too homogeneous, can produce overly generalized, biased outcomes and unintended behaviors. The size and breadth of the Applause community enable a diversity of feedback and input representing a wide variety of devices, along with broad demographic and psychographic diversity, including countries of origin or residence, ages, genders, cultures, abilities, languages, socioeconomic variables, and more.
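
As one sketch of how homogeneous data can be caught before it skews a model, the example below audits a small, hypothetical training set for demographic coverage and flags underrepresented groups. The attributes, records, and the 20% threshold are assumptions for illustration, not Applause's methodology.

```python
# Coverage-audit sketch, assuming a small hypothetical training set with
# self-reported contributor attributes. The attributes, records, and the 20%
# threshold are assumptions for illustration, not Applause's methodology.

from collections import Counter

# Each record notes contributor attributes alongside the data artifact they supplied.
training_records = [
    {"artifact": "audio_0001.wav", "language": "en-US", "age_band": "18-29"},
    {"artifact": "audio_0002.wav", "language": "en-US", "age_band": "18-29"},
    {"artifact": "audio_0003.wav", "language": "en-US", "age_band": "18-29"},
    {"artifact": "audio_0004.wav", "language": "en-US", "age_band": "18-29"},
    {"artifact": "audio_0005.wav", "language": "en-GB", "age_band": "30-44"},
    {"artifact": "audio_0006.wav", "language": "de-DE", "age_band": "60+"},
]

def coverage_report(records, attribute, min_share=0.20):
    """Return each group's share of the data and flag groups below min_share."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    underrepresented = [group for group, share in shares.items() if share < min_share]
    return shares, underrepresented

for attribute in ("language", "age_band"):
    shares, flagged = coverage_report(training_records, attribute)
    print(attribute, {group: f"{share:.0%}" for group, share in shares.items()},
          "underrepresented:", flagged)
```

Groups flagged this way can then be targeted for additional data collection before the model is retrained.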