CX Power Hour

Human+Machine: The Future of Customer Service

There’s been much debate among CX leaders over what the future of customer service looks like, specifically around the reliance on chatbots and artificial intelligence. Do you expect to see bots ruling the industry or believe that traditional human-to-human interaction will prevail? In this session of CX Power Hour, leaders in the CX industry will discuss the future of customer success and how the emotional intelligence of humans paired with the machine learning techniques of bots can change the CX world and increase customer satisfaction.


Webinar Transcript

Rose Wang:
Awesome. Hello and good afternoon everyone. I am very excited for y'all to join us for our CX Power Hour. I am Rose Wang, Head of Customer Experience at Forethought and your moderator today.

Rose Wang:
For the next hour, I will ask our esteemed panelists a few pre-prepared questions. If you have a question, or a few, please put it in the chat and if we have time, I would love to get to your questions at the end. Right now I am joined by three awesome customer experience leaders who are at the forefront of, I think, the future of customer experience.

Rose Wang:
Deon Nicholas is the founder of Forethought AI and my boss. Forethought is a customer support AI used by companies like Instacart and Marriott.

Rose Wang:
Alan Pendleton is the President and co-founder of ArenaCX, a company that helps you with your outsourced workforce needs.

Rose Wang:
And Seth Earley, the CEO and founder of Earley Information Science and the author of the award-winning book, The AI-Powered Enterprise.

Rose Wang:
So happy to have you guys with me today. I would love to [crosstalk 00:03:12], yeah of course. So I would love for you guys to introduce yourselves and, as a part of your intro, could you tell everyone what you would define as AI?

Rose Wang:
For example, does a keyword search plus decision trees count as AI? Let's start, how about, in the same order I talked. So Deon, do you want to get started, and we'll go to Alan and then Seth.

Deon Nicholas:
Absolutely. Thanks for having me, great to be here, and good to see you all. Hi everyone, I'm Deon Nicholas. As Rose mentioned, I'm the CEO and co-founder of Forethought. And my definition of AI is ... I kind of like to think about how technology can think and act similar to humans. I think in the past, we've seen a lot of technology that can appear like human, but can't think like a human and so that's one of the things that I'm really excited about here at Forethought and excited about for the future of AI.

Rose Wang:
Thanks Deon. Alan?

Alan Pendleton:
All right, hi Rose. What a pleasure to be sharing a moment here with you and the other panelists. For folks tuning in, I'm Alan Pendleton, I'm the CEO and co-founder of ArenaCX. We're a platform that helps businesses find, manage, and optimize their outsourcing partnerships. So we are outsourcing made simple.

Alan Pendleton:
In today's conversation, I represent the human part of the equation and, despite AI's prominence, I still remain a fan of humans. To your question, if a computer is following my decision tree, it's probably not AI, but if the system is designing or drawing the decision tree for me, now we're talking.
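
Alan's litmus test can be sketched in code. Here is a deliberately tiny illustration (the tickets, labels, and helper names are all made up, and no real system works this simply): a hand-written rule is a decision tree a human drew, while learning even a one-question "stump" from labeled tickets is, in a small way, the system drawing the tree itself.

```python
# Toy sketch: instead of a human writing "if 'package' in ticket -> shipping",
# the system picks the best single-word split from labeled examples.

LABELED_TICKETS = [
    ("where is my package", "shipping"),
    ("my package never arrived", "shipping"),
    ("i was charged twice", "billing"),
    ("refund my last charge", "billing"),
]

def majority(rows):
    # Most common label among the given (text, label) pairs.
    labels = [label for _, label in rows]
    return max(set(labels), key=labels.count)

def learn_stump(examples):
    # Try every word as a split; keep the one that classifies best.
    best_acc, best_rule = -1.0, None
    vocab = {w for text, _ in examples for w in text.split()}
    for w in sorted(vocab):
        has = [(t, l) for t, l in examples if w in t.split()]
        lacks = [(t, l) for t, l in examples if w not in t.split()]
        if not has or not lacks:
            continue
        rule = (w, majority(has), majority(lacks))
        acc = sum(
            (rule[1] if w in t.split() else rule[2]) == l for t, l in examples
        ) / len(examples)
        if acc > best_acc:
            best_acc, best_rule = acc, rule
    return best_rule

def predict(rule, ticket):
    word, if_present, if_absent = rule
    return if_present if word in ticket.split() else if_absent

rule = learn_stump(LABELED_TICKETS)
print(rule)  # the split the system "drew" from the data
print(predict(rule, "my package is late"))
```

On this toy data the learner discovers the "package" split on its own, which is the distinction Alan is drawing: the human supplied examples, not the rule.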

Alan Pendleton:
As for my own definition, I'd echo what Deon mentioned. You hear it explained as a simulation or imitation of human intelligence. I like to think of it as computational systems that match patterns of inputs with patterns of outputs for the purpose of making real-time decisions at scale. So-

Deon Nicholas:
Real time.

Alan Pendleton:
... rather than mimicking human decision making, it helps us overcome our cognitive limitations by doing so with huge amounts of data, huge complexity, and small amounts of time.

Seth Early:
Perfect. So I'm Seth Earley, founder and CEO of Earley Information Science, and we are a professional services firm. We've been around over 25 years, and our tagline is making information more usable, findable, and valuable.

Seth Early:
And that cuts across lots of different areas from knowledge processes, to large product catalogs, eCommerce, customer service, customer support, and a number of other areas in which we help organizations optimize customer experience, or the employee experience. And I like to say, "You can't have [inaudible 00:05:54] upstream and expect a seamless customer experience externally." So we have to address all of those pieces of the puzzle.

Seth Early:
From my perspective, when I think about cognitive applications, cognitive AI, I think it's a bit of a misnomer. Because those systems don't think, they don't actually think, they don't ... They may mimic some cognition, but they don't actually work the way a human brain works.

Seth Early:
Because the human brain ... My background was biology and chemistry, and there are about 100 different neurotransmitters and billions of neurons in the brain, each connected to hundreds of thousands of others. I mean, there are astronomical combinations and possibilities, and those 100 neurotransmitters are all analog, they can vary. So there's enormous complexity in what we carry around in our heads, and I think it's going to be a while before computer technology gets anywhere near human cognition.

Seth Early:
That said, I agree with the statement that it's a tool that looks at patterns of data and makes predictions. So you can make a prediction about an answer to a question; you can look at a pattern of applications for one that may be fraudulent, or may be a high-risk applicant. You can do things like machine translation with large amounts of data, translating from one language to another, or you can actually do machine translation of questions to answers if you have enough data.

Seth Early:
But I'm a big believer in the knowledge engineering aspect of this where we have to have curated and structured knowledge as the training data for most of our customer support systems. And I also believe that it's a real ... It's something that works in conjunction with humans as opposed to replacing them. But I might be stealing someone's thunder for later, so I'll stop there.

Rose Wang:
I love the three different viewpoints and I think that's why this panel is going to be so interesting today. Because already I can hear that there's some nuances that we're talking about, right? And we're going to be talking about in the debate today. What does AI look like today, what does it look like in the future?

Rose Wang:
And then also, Seth you're talking about preparation, what does that really look like? And I think that's a huge part of it, it's not just on the AI companies but you have to begin to prepare yourself for AI and what does that look like?

Rose Wang:
And then lastly, Deon, I think we would also love to explore really how can you tell the difference between old school AI versus what is state of the art? So we will get to all of that as we get into the discussion. But before I do, I always like to start and establish a baseline. So Alan, how is AI being deployed in customer service today? What does AI do better and what do humans do better?

Alan Pendleton:
Well, Rose, in 2019 I did a little research in Crunchbase, where I searched for startups that were tagged as conversational AI for customer service and that had received venture funding that year. I found, if I recall, about 40 companies, and that was just the VC-funded ones. So add in big IT shops, corporate spin-outs, university spin-outs, PhDs in the basement; it could easily have exceeded 100 companies in 2019 alone. And based on my own email inbox, I'd hazard a guess it's a lot higher number than that.

Alan Pendleton:
So my point in saying this is I can't possibly touch on all the exciting, fascinating use cases probably being worked on in the world by all the AI shops out there, but here's what I am seeing. We have AI for operations and AI for conversations. AI for operations is using natural language processing and machine learning to automatically tag and triage incoming customer conversations, to build better demand forecasts using machine learning, to route and distribute workload optimally among teams, and to empower agents with recommended knowledge and responses.
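
To make the forecasting use case concrete, here is a deliberately simplified sketch (the numbers are invented, and this is not ArenaCX's method): exponential smoothing over daily ticket counts gives a next-day volume estimate an ops team can staff against. Real demand forecasting would also fold in seasonality, channels, and handle times.

```python
# Toy demand-forecast sketch: exponentially smooth daily ticket counts
# to estimate tomorrow's volume.

def smooth_forecast(daily_tickets, alpha=0.5):
    # alpha near 1 weights recent days heavily; alpha near 0 smooths more.
    level = daily_tickets[0]
    for count in daily_tickets[1:]:
        level = alpha * count + (1 - alpha) * level
    return level

history = [120, 130, 125, 160, 150, 155, 170]  # tickets per day (made up)
print(round(smooth_forecast(history)))  # next-day ticket estimate
```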

Alan Pendleton:
AI for conversations is the customer-facing chatbots and interactive workflows we all see online when we're logging a ticket with a brand. So, personally, I think AI for operations is more mature; it's a great use for this amazing technology, it can distribute workload smartly and set up humans for success.

Alan Pendleton:
And AI for conversations is going to get there, but it's still improving. It's working on empathy, it's working on learning nonverbal cues and cultural differences. It's getting better but for really sensitive customer issues today, I'd still trust a human.

Rose Wang:
And I think that's shared across the board because we did have the era of chat bots in 2016 that didn't go so well, and so we have been burned. And I think when you talk about AI can look at your tickets, read it, triage it, a lot of AI companies today say they can do these things, "We can read sentiment, we have an understanding conversationally what the customer wants, we can build workflows."

Rose Wang:
And Deon, this actually is a great question for you is in this ever growing crowded space, how are people supposed to really understand? What is the difference between somebody who says, "I can route tickets one way versus another?" What is the gold standard and how are you bringing Forethought in that direction?

Deon Nicholas:
[inaudible 00:11:24] that's a great question. And I agree wholeheartedly with what Alan said earlier around AI for operations versus AI for conversations. I think in the past decade, with the basic decision tree algorithms and regression models, being able to do classification and to apply models to a process has been largely solved to some degree, and I'll talk a little bit about how that's being solved.

Deon Nicholas:
One of the things that's really exciting is that jump from hey we're just using basic statistical models to now we're starting to do things that are a little bit higher level, like one layer removed a little bit more logic oriented.

Deon Nicholas:
And so this starts to happen in conversations. So how can AI actually start to generate realistic-sounding conversations and dialogue models? How can AI actually understand nuance and go beyond keywords, like, hey, when I see the word refund, go and issue a refund? And start to understand that when the customer says, "I'm upset, I want my money back," that means exactly the same thing.
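
The refund example can be sketched as a toy contrast (this is purely illustrative, not Forethought's actual models): a literal keyword rule misses the paraphrase, while an example-based matcher catches it. Here simple word-overlap (Jaccard) similarity stands in for a learned language model.

```python
# Toy contrast: keyword rule vs. example-based similarity matching.

REFUND_EXAMPLES = [
    "i want a refund",
    "please refund my order",
    "i want my money back",
    "can i get my money back",
]

def keyword_route(ticket):
    # Old-school rule: route only when the literal keyword appears.
    return "refund" if "refund" in ticket.lower() else "unknown"

def similarity_route(ticket, threshold=0.3):
    # Stand-in for a learned model: word-overlap (Jaccard) score against
    # example utterances, routing on the best match.
    words = set(ticket.lower().replace(",", "").split())
    best = max(
        len(words & set(ex.split())) / len(words | set(ex.split()))
        for ex in REFUND_EXAMPLES
    )
    return "refund" if best >= threshold else "unknown"

ticket = "I'm upset, I want my money back"
print(keyword_route(ticket))     # keyword rule misses the paraphrase
print(similarity_route(ticket))  # similarity matcher routes it
```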

Deon Nicholas:
And so what we're seeing is this wave of new technologies that are able to understand nuance in human language and then able to kind of run with it. Whether that's on the classification side to be able to truly understand what is the root cause or case reason for this case, or on the conversation side. So the ability to then go and actually respond to the questions and respond to the tickets.

Deon Nicholas:
And the last thing I would say, Rose, to your question on if I'm a customer or a customer service practitioner, how can I tell the difference? A very good question you can ask is, "Well, one what is implementation like?" Because you'll find that with true AI systems, implementation gets a lot easier because we can read your conversation history and build models on top of that.

Deon Nicholas:
But you'll find a lot of traditional systems are built around this concept of a decision tree. Well you got to go and train the model, you got to go and build out all these rules. So that's one of the things that I would ask, and I'm sure there's a lot more there, but that's one of the really quick ways you can start to suss out whether it's more traditional chat bot based, or traditional systems, or a modern AI based solution.

Rose Wang:
Yeah, Seth I see you shaking your head a lot. I wonder if you have anything to add?

Seth Early:
So I think there are a couple of things to consider here. The data-intensive approach works if you have a lot of data, right? You do need lots of conversation, you need lots of conversation history, and you also need to curate that to some degree.

Seth Early:
Because when you think about it, you're really answering knowledge questions. And you can handle things that are unambiguous and straightforward automatically, or more automatically, or purely with a bot type of approach, if there's a clear, unambiguous answer.

Seth Early:
Whenever you get into something that's advisory or judgment based, you can build things that will help the human, an agent assist, to access the knowledge they may need to apply. But they still need to apply human judgment, and they need to connect with people. So you can automate with technology, and you can engage with humans.

Seth Early:
And when you think about a lot of the stuff that needs to happen in a call center, it's really problem solving. Why do people call the call center in the first place? Something is broken, or something doesn't work, or they can't find something, or they can't get their question answered. So they don't want a 50-page document, they want the answer.

Seth Early:
And when you make that information more accessible, the system can surface it, or an agent can. The agent can vet it, review it, take a look at it, and pass it on to the customer.

Seth Early:
It also allows for some learning because the [inaudible 00:15:18] system won't always get it right. But what you are doing is you're doing some degree of text analytics. You're listening in on a call, processing that verbal communication, translating that into text, and then using that text as a mechanism, as one of the signals to retrieve some content.

Seth Early:
And so that does require a degree of componentization, it does require a degree of knowledge engineering, but it also can bring to bear things like intent analysis: understanding when people are saying things in different ways that have the same intent, and being able to disambiguate. And when people ask questions, you need to have some other signal to really understand what they're talking about.

Seth Early:
Now again, the agent is there to facilitate that, but a lot of stuff can be done using a system that is a retrieval mechanism. When I think about chat bots, or virtual assistants, they're really knowledge retrieval mechanisms, they're information retrieval; just like search is information retrieval.

Seth Early:
You take a signal, which is a phrase, an utterance, and boil it down to an intent. What are all the different ways that you can say this thing that means, "I need to change my password"? Or, "My computer isn't working, my password isn't working, my computer's mad at me, I can't log in, my username isn't working, I forgot my password." All those things mean the same thing and will kick off a certain workflow or a certain piece of content.
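
The retrieval mechanism described here can be sketched as follows. The intents, example utterances, and workflow names are all invented for illustration, and `difflib` string similarity from the Python standard library stands in for a real intent model: many surface utterances collapse to one intent, and the intent keys a workflow or piece of content.

```python
import difflib

# Illustrative intent catalog: each intent has example utterances and
# the workflow (or content) it should kick off.
INTENTS = {
    "reset_password": {
        "examples": [
            "i need to change my password",
            "my password isn't working",
            "i forgot my password",
            "i can't log in",
        ],
        "workflow": "send_password_reset_link",
    },
    "billing_question": {
        "examples": ["why was i charged twice", "question about my bill"],
        "workflow": "route_to_billing_agent",
    },
}

def best_intent(utterance):
    # Score the utterance against every example; keep the closest intent.
    def score(intent):
        return max(
            difflib.SequenceMatcher(None, utterance.lower(), ex).ratio()
            for ex in INTENTS[intent]["examples"]
        )
    return max(INTENTS, key=score)

intent = best_intent("help, I forgot my password")
print(intent, "->", INTENTS[intent]["workflow"])
```

The point of the sketch is the shape of the mechanism, not the matcher: the signal is boiled down to an intent, and the intent retrieves the workflow.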

Seth Early:
But again, what we're trying to do is understand what that user wants and then use that signal to retrieve something. And that's what a virtual assistant, a cognitive AI tool, will do: reduce the cognitive load on the human.

Seth Early:
When I hear cognitive, it's not because those systems think, it's because they're reducing the cognitive load on the human. And there are a lot of different ways to do that, but they do require things like text analytics, and content management, and knowledge management, and knowledge engineering.

Seth Early:
And as Deon was saying, you can do that with large amounts of conversational data and use that to model things, but you will need some kind of a repository that's the source of truth that your agents can put in front of your customers or use it to walk them through the process.

Rose Wang:
Absolutely. I love that answer, Seth, because so much of AI is we need a past to predict the future. If there's no past, how can we predict?

Seth Early:
[inaudible 00:17:49]

Rose Wang:
Which I think you are alluding to ... I would love for you to go even deeper into the question, which is, essentially: we understand that AI, at least in the operational sense from Alan and Deon's perspective, is close; we can solve most workflow problems.

Rose Wang:
That said, the promise of AI is seductive, yet elusive; so many companies aren't seeing the value. And so I would love for you to give a more holistic answer on your take on why expectations don't meet reality when AI is implemented in the enterprise.

Seth Early:
Sure, so there are a lot of challenges, and one is that all of these algorithms run on data. At the end of the day, it's about data, right? You need data in order to drive these things, and if you have poor data, crappy data, junk data, you're going to get a poor result.

Seth Early:
I talk a lot about this in my book, the fact that organizations have been trying to solve these problems for decades. There are smart people and lots of money, and we run into the same intractable, evergreen problems, and they're typically around data.

Seth Early:
In fact, semantic search and intent classification are really, in many cases, making up for our sins in past content management and data management, because we use different terms to describe things or tag things in different ways.

Seth Early:
But again, that's the nature of humans. Humans are creative in both the way they solve problems, and the way they present problems, or characterize problems, so we're always going to have that challenge. But at the end of the day, we do need to curate data, we do need to have good data.

Seth Early:
There's a big misconception that AI is going to fix your data. Well, it can go a long way to helping to clean it up, but you still need a reference architecture, you still need terminology that describes your services, your products, your problems, your solutions.

Seth Early:
And AI is not going to know highly specialized knowledge or information about your products, especially if you have a complex set of products and solutions, so we have to be able to understand that, curate it, manage it. At the end of the day, it's really about the data, it's really about managing it and organizing it to some degree. Again, AI can help solve the problem.

Seth Early:
I was once doing a webinar with a consultant from one of the big consultancies, and his premise was, "Oh you just point your AI to all of your data and it figures it out." And I said, "I'm going to disagree with you on that." And I said, "Look, you always contextualize no matter what you're doing, you look in some place for information; humans do that. You don't just dump all of your system information in one place."

Seth Early:
Once a CFO said to me, "Why do we need taxonomies and ontologies, why don't we just get Google?" And I said, "Do you have a chart of accounts for your finance organization?" He said, "Of course I do." I said, "Why don't you get rid of your chart of accounts and just get Google?" Because a taxonomy is a chart of accounts for knowledge.

Seth Early:
And they had been trying to solve that problem for many years, many times, without those basic foundational organizing principles, without that reference architecture. And once they had it, then they could build their knowledge infrastructure, then they could organize that information for humans and systems to access.

Seth Early:
People talk about an ontology; an ontology consists of all the different taxonomies in the organization and the relationships between them, and it becomes the knowledge scaffolding for the organization. Once you have that, then these systems can access it. And we know today that bots and conversational systems are pretty crappy. But one day, they will be really good.

Seth Early:
And I outlined a scenario in my first chapter, that's the future. And we will be interacting with these things just as we're interacting. We'll be asking them questions and it'll be conversational. They'll be setting our calendars, and arranging travel, and negotiating on our behalf; we will get there, we will get there.

Seth Early:
And the companies that get there first are the ones that are going to get a handle on their knowledge and their data and be able to produce these highly functional, highly efficient systems and tools and virtual assistants. They'll be able to offer products and services at a lower cost and a much higher quality of customer service, and they're going to clean the clocks of the organizations that don't. So I can go on, but I'll [inaudible 00:22:20].

Rose Wang:
I love how passionate you are about this, and I think it's a very interesting discussion of AI now versus what it looks like in the future, and how companies prepare now in the future.

Rose Wang:
But before I do go to Deon for that question, Alan, I do ... We hear this all the time: companies need to prepare their data, they need to have data. And so you, coming from the outsourced workforce side and working with companies that probably just want to have the data, what do you see preventing companies from being prepared?

Alan Pendleton:
I mean, that's a good question. So think about it: about 80% of companies today specify use of their own CRM or CCaaS. About 20% rely on that from the outsource providers or the BPOs of the world. That's flipped; it used to be the opposite.

Alan Pendleton:
When BPOs owned the data structure, it was sort of one BPO to many customers, and it could be somewhat more standardized across the base. Now the opposite is true: usually one customer with a few BPOs partnered with them, a few resources. And so there's probably more fragmentation in the data sources around the world because of that.

Alan Pendleton:
And the owners of the data are now customer experience leaders, customer service leaders, VPs of operations, and they may or may not enjoy, based on their function, the privilege of tapping into their in-house IT and engineering resources to the extent they wish they could.

Alan Pendleton:
So service providers can help them. They're not going to be able to easily build it on their own due to resource limitations. Third parties can come in, like Forethought, or Earley, or ArenaCX, who specialize in playing these roles that sit in between the operations and the customers, helping to mediate flow. And when you have specialty companies, you regain that one-to-many situation that promotes better data standardization. So I'd say that's one comment, really, to that question.

Rose Wang:
Mm-hmm (affirmative), yeah. I think it's always a resource-constrained and information-constrained environment that we're all operating in. So Deon, as AI startups come in and work with different enterprises, I would love to hear you address, one, where AI is today and what AI companies, and Forethought specifically, are doing to help companies over that hump. And then longer term, speaking to some of Seth's concerns, where do you see the horizon for AI five to 10 years down the road?

Deon Nicholas:
Yeah, what's interesting is that kind of almost secretly, AI has had this huge leap in the last three to four years, especially with respect to natural language processing, especially with respect to customer experience.

Deon Nicholas:
In and around, I would say, 2017, with the launch of a few different datasets ... So Stanford launched their question answering dataset; after that, BERT, which is this really interesting natural language understanding model, was launched; and more recently there's GPT-3, a language model launched by OpenAI. We're actually seeing, in so many different ways, an explosion in how good AI is actually getting.

Deon Nicholas:
Prior to that, quite frankly, AI wasn't really that good, so to speak. You could, with a lot of energy, with a lot of effort create your taxonomies or create your dataset and get AI to the point where it could solve some of these basic tasks, but I think that was a major limitation.

Deon Nicholas:
And also part of what led to this over promise/under deliver phenomenon where everyone has been excited about this idea of AI, but when you actually launch it in practice, outside of a research lab so to speak, it's been underwhelming for a lot of different folks.

Deon Nicholas:
And so I think the first thing folks should do when thinking about AI is really level set expectations. As Seth mentioned, sometimes it's a matter of, hey, making sure that your data is correct. Companies, even AI companies, are actually going to start providing software that can automatically help you analyze gaps in your knowledge base and things like that.

Deon Nicholas:
And we're actually working on some technology there that'll help you get to that baseline of a good dataset. And so there are actually a few things you can do as you're preparing to launch AI that'll help level set those expectations and get you a good finished product. And then in terms of where I think AI is going: again, we're at an inflection point right now, with a ton of research going in from companies and folks here on this call, as well as research labs.

Deon Nicholas:
But I truly think that AI is going to start to hit that next level where it can be a true helper for humans, and I think we've all kind of realized that. Ten years ago there was this whole AI-is-going-to-take-over-the-world type of thing, but we're starting to realize that every time a new technology wave comes in, it actually ends up making people better and uplifting what humans can do throughout our society.

Deon Nicholas:
A really simple, but great, example of this is spreadsheets. The launch of the spreadsheet was such an innovation that people thought accountants were going to be put out of a job; going back to that finance chart of accounts example.

Deon Nicholas:
But now we're seeing that finance within companies is becoming even more strategic and the role of the CFO is becoming even more important. And I think the same is going to be true in the call center and the contact center in customer experience that AI is going to enable every single agent to have this agent assist tool so that your new agents can start to operate like experienced agents from day one.

Deon Nicholas:
There's going to be the ability to have AI solving problems for your customers so that the simplest problems can get resolved in minutes rather than hours and so on, and so forth. So I'm starting to, and I think we're all starting, to see a lot of these superpowers where AI can start bringing these superpowers to agents, to administrators, to leaders in and around the organization and it's just a really exciting time.

Rose Wang:
Thank you so much. Deon, so as you're talking about this, I think so many [inaudible 00:29:28] ... So I work with customers and I think a big question for customers is, "Well then, where am I in the adoption curve? Am I an early adopter? Do I want to wait for other companies to test this out and wait and see before we come along when AI is further developed?" And so I would love to get Alan, you and Seth's, viewpoint on how does a company assess what risks they can and cannot take and when to actually adopt AI?

Seth Early:
You want Alan to start?

Rose Wang:
Whoever wants to. Alan go for it, because it's A.

Alan Pendleton:
I'll start off, Seth, then you can take it home if that works. In terms of whether a company's ready to embark on their first AI journey, was that the nature of the question? Did I get that right?

Rose Wang:
Yes, exactly. So as Deon's talking about where we are today, where you guys are talking about, I really want to make sure that companies feel comfortable that they understand what their risk is and where they want to come along in the adoption cycle.

Alan Pendleton:
Yeah, I mean, I think any customer experience overhaul brings some degree of risk, and I've never met a company who felt ready to take on certain risks. Outsourcing itself is a risk in the first place; people are nervous about that as well.

Alan Pendleton:
If you consider AI, conversational AI is another resource, just like a BPO is, or just like an in-house team is. And the question is: can this resource perform at a level that is competitive with the rest of the resource pool and, therefore, serve as a suitable complement to the capacity that's needed to deliver the experience they want?

Alan Pendleton:
So I think there's going to be trepidation no matter which resource you add is kind of the first statement I would make about it, and I wouldn't necessarily hold AI to a radically different standard than to adding a human team as long as the organization remains nimble and flexible to adjust.

Alan Pendleton:
If a human team is failing due to lack of training or some other capacity issue they may be facing, the flexible, resilient organization is going to have a contingency plan to route that traffic to a different resource; the same should be true for AI.

Alan Pendleton:
If you plug it in and it's failing, you should be able to have a fallback to humans or even have two AIs in competition, and the one that does better gets the traffic. So there's ways to approach risk mitigation to enter this, but that's kind of a macro statement.

Alan Pendleton:
And then in the micro view of it is you have to have the data, as Seth said, there's got to be a starting point of how will this machine interpret who you are as a business? To steal a quote from Seth ... Well I won't do it, but as the IA,-

Seth Early:
Oh please.

Alan Pendleton:
... I'll let you have that one.

Seth Early:
Please. No, no, steal my quote.

Alan Pendleton:
Can't get the AI without the IA.

Seth Early:
There you go, perfect, love it.

Alan Pendleton:
Information architecture needs to be there and so if there's no starting point to ground the AI in your business dataset, then it's going to struggle to learn. And so humans can play a role in feeding it, training it, and tuning it to de-risk and improve its ability to be successful. So I'll stop there and hand the baton.

Seth Early:
Yeah, Alan, you made some really great points there and one of the things ... And by the way, you can always say there's no AI without IA as long as you give me attribution. Ginni Rometty used it at the World Economic Forum in Davos, Switzerland, but did not cite it. Tom Davenport told me she stole my line. I coined the phrase several years ago in an article I wrote for the IEEE, so it's on the internet.

Seth Early:
Anyway, but one of the things you said is really, really super important. You said, "You actually need the same things to train an AI as you do to train [inaudible 00:33:35]." Think of the AI as a [inaudible 00:33:37], you have to give it the information it needs to do its job.

Seth Early:
And in fact, when you're making that information more consumable for employees and for call center agents, you can do it in a way that is re-purposable for the AI, for the cognitive assistant.

Seth Early:
So when you engineer these things correctly, you're serving both purposes. And in fact, what I will say to organizations is, "The first thing about AI is forget AI. Think about the information, think about the architecture, think about the process, the use case, and the scenario, and curating your knowledge so that it's easier to use for everybody, including your customers."

Seth Earley:
Think about it: why do people call the call center? Because something's broken, something's not working, they can't find information. Let's go upstream and solve that problem, and we can use AI to solve that problem.

Seth Earley:
We can curate our knowledge, we can make predictions about the next best action, the next best product, the next best piece of content, and maybe that'll keep them from calling the call center. We want to give them those self-service capabilities, and we can use virtual assistants to do that, especially with unambiguous, clear-result types of problems.

Seth Earley:
The other thing to think about is ... somebody wrote, oh, I guess I said all my points. But I think the point here is that when you're considering this, it's really about the information, it's really about the knowledge. And I think organizations do not have a choice but to start investigating this, because if you wait too long you're going to be left behind.

Seth Earley:
And as I say, the things you need to do today, you need to do anyway for your employees, for your people, for your customers, and that is getting that knowledge in shape. When you do a search, you don't want 100 documents where one of them is a 300-page document; you want an answer. So you want to build question-answering systems, and that happens through componentization of the knowledge. So when you think about these things, it's really getting down to the basics and understanding the scenarios and those use cases.

Seth Earley:
The other thing to keep in mind is that out-of-the-box AI solutions are great for handling more generalized types of interactions, and you want that. But think about it: standardization is for efficiency, differentiation gives you a competitive advantage. Keep that in mind: standardization gives you efficiency, differentiation is competitive advantage.

Seth Earley:
When you go into a store, they may sell the exact same products as another store, but you like this store. The signage is different, the feeling is different, the product assortment may vary, but the point is, you're differentiating because you're trying to meet the needs of that customer. And a big piece of this is understanding the customer's mental model, how they solve their problem, and then emulating that through your technology, through your architecture, through your bots, through your agents, through all of those things.

Seth Earley:
So the point here is to keep in mind that you're not going to get this just by following others; you're going to get it by curating your own content, differentiating your experience, and finding that competitive advantage by being different. And that means your products, your services, your solutions, your way of interacting with your customers has to be captured by these tools and these technologies.

Rose Wang:
Love that. And this will be a fun one for you then, Deon, since we are scaling Forethought, as are other AI startups. Customers usually don't have consistently structured, well-labeled data; sometimes they just have big chunks of data missing. And so as AI startups grow, and more and more companies come into this space, that is a big question. How do AI companies respond? Are we relying on customers to get their data ready, and what does that mean for the growth of the industry?

Deon Nicholas:
I think of it as one of those supply and demand curves, where one meets the other. But replacing the words supply and demand: demand is really the data preparation of the company, and the other curve is the sophistication of the AI.

Deon Nicholas:
And so as AI gets better, and I think this is a research problem that all AI vendors are aware of, we need to continually get better at doing more with less. There's a lot of research around reinforcement learning, for example, how AI can learn from, so to speak, on-the-job training: from what your customers are saying and what the agents are doing, it can actually train the system.

Deon Nicholas:
There's a lot of research, for example, on one-shot learning and few-shot learning, and ways that AI can learn a lot from a little bit of data if it's been pre-trained in a certain way. And so I think one of the things that helps AI companies stay ahead of the curve is continuing to do this research on how to do more with less.

Deon Nicholas:
And then on the flip side, as the AI companies and the technology continue on this curve, the things that businesses can be doing include starting to label your data. A lot of businesses will start with, "Hey, let's add case reason categorization." So what are the biggest reasons for these tickets? And then we often see a lot of companies add a sub-reason or something like that.

Deon Nicholas:
So is it generally billing, is it generally a shipping issue, is it troubleshooting, et cetera? And having anywhere from five to 10 different categories and then in there maybe three to five different subcategories, that can actually get you a long way through the curve.

Deon Nicholas:
And you can often do that, whether that's in-house or, as Alan mentioned, signing up a BPO to do it. Those are the kinds of things you'd probably have to do anyway, but a very sophisticated customer experience team will start to do them as they scale, and that actually helps you prepare for AI. We've seen that a lot.
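The reason/sub-reason scheme Deon describes, five to ten top-level categories with a few sub-reasons each, can be sketched in a few lines. This is a minimal illustration; all category names and field names here are assumptions, not any vendor's actual taxonomy:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative taxonomy: 5-10 top-level reasons, 3-5 sub-reasons each,
# as suggested in the discussion.
TAXONOMY = {
    "billing": ["refund", "overcharge", "invoice"],
    "shipping": ["delayed", "lost", "wrong_item"],
    "troubleshooting": ["login", "crash", "configuration"],
}

@dataclass
class Ticket:
    text: str
    reason: Optional[str] = None
    sub_reason: Optional[str] = None

def label(ticket: Ticket, reason: str, sub_reason: str) -> Ticket:
    """Attach a validated reason/sub-reason pair to a ticket."""
    if reason not in TAXONOMY:
        raise ValueError(f"unknown reason: {reason}")
    if sub_reason not in TAXONOMY[reason]:
        raise ValueError(f"unknown sub-reason: {sub_reason}")
    ticket.reason, ticket.sub_reason = reason, sub_reason
    return ticket

t = label(Ticket("My package never arrived"), "shipping", "lost")
```

Even this much structure gives an intent-detection model labeled examples to bootstrap from, which is the point Deon makes next.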

Deon Nicholas:
When we come in, even if the data's a bit messy but you have some categorization that you've been doing, that can actually help bootstrap the AI and the intent detection. So I think there's a lot that can be done, both on the vendor side and we kind of know that and the research is coming, but then also some really, really solid business practices you can do as a customer experience organization to start preparing.

Seth Earley:
I want to just take off on that by saying: that is information architecture. When you're labeling your data, that is IA. When you're building categories, that is IA. And when you're building that across multiple dimensions, you can very precisely extract entities from an utterance and use them to cut across multiple slots.

Seth Earley:
Say you want policy information for an employment agency, for liability, in Massachusetts. Well, guess what? There are about four entities in that, and probably a fifth that you can derive. We did that for Allstate many years ago. It's essentially faceted retrieval: you're able to pull back that very precise piece of content because it's labeled across multiple dimensions. That's building the ontology, that's building the information architecture.
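Faceted retrieval of the kind Seth describes can be sketched as tagging content across several dimensions and filtering on whichever facets an utterance yields. The records and facet names below are illustrative assumptions, not Allstate's actual schema:

```python
# Content tagged across multiple dimensions (type, audience, topic,
# jurisdiction); retrieval filters on every facet supplied.
documents = [
    {"title": "Liability policy, employment agencies (MA)",
     "doc_type": "policy", "audience": "employment_agency",
     "topic": "liability", "state": "MA"},
    {"title": "Auto policy overview",
     "doc_type": "policy", "audience": "consumer",
     "topic": "auto", "state": "ALL"},
]

def faceted_search(docs, **facets):
    """Return documents matching every supplied facet exactly."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in facets.items())]

hits = faceted_search(documents, topic="liability", state="MA")
```

The precision comes from the labels, not the search logic: the more dimensions the content carries, the narrower the answer you can pull back.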

Seth Earley:
And you can also componentize that knowledge much more easily. So there are AI tools and technologies that will break content up into pieces, take a big giant document and break it up into topics, and then you can use AI and text analytics to auto-categorize that across multiple dimensions.

Seth Earley:
So as Deon is saying, there are tools that will add efficiency to the process of preparing and curating that data. It doesn't have to be done by hand, but, again, you need that reference architecture, you need that scaffolding, you need that ontology. Once you have that and you start labeling data, it becomes a set of assets of increasing value.

Seth Earley:
The same with use cases. You want to build use cases, and scenarios, and tasks, and that becomes a library against which you test all of these things, and that becomes an asset with enormous value. So all of these things need to be cataloged, and organized, and maintained, and they build over time. And you can even abstract the design elements of a bot, of a virtual assistant, into an ontology and make them more usable, more scalable, more reproducible, more portable.

Seth Earley:
And actually that's some work we've done; we have some patents on that. It's taking those elements and allowing you to scale them. These are large-scale information management problems, and right now the current approaches are actually leading to greater fragmentation of the knowledge and making the problem worse, because you're splitting it up amongst all of these different systems. What you need to do is bring those into knowledge bases, bring those into sources of truth.

Seth Earley:
And it's funny, you said something like, "It should depend on the customer." Once I was talking to a bot vendor and I said, "Show me how you train this thing, show me the backend." And he pulls up this admin panel and he shows me question/answer pairs with misspellings and phrase variations on the question. I went, "You're not even using intent classification, you're not using any AI on this, this is [inaudible 00:42:52]." He says, "Oh yeah, the customer has a knowledge base." I said, "Really? The customer has a knowledge base?"

Seth Earley:
There's an old joke, I'll tell it really quickly. There's a chemical engineer, a mechanical engineer, and an economist; this is about economists making assumptions. They're on a desert island with a container of food and no implement to open it. The mechanical engineer's going to use coconuts to break it open, the chemical engineer's going to use sea water and sunlight. And the economist says, "Assume a can opener."

Seth Earley:
When you assume a knowledge base, you are assuming that can opener. You cannot assume the solution, you cannot assume a can opener, you cannot assume a knowledge base. You have to build it; that's the issue. You have to build it; that's your competitive advantage.

Rose Wang:
I think you guys have made your point very strongly and-

Seth Earley:
Sorry [crosstalk 00:43:48]-

Rose Wang:
... no don't apologize.

Seth Earley:
I was over-amped on that one.

Rose Wang:
I was going to say. [crosstalk 00:43:48] Well then, we've actually gotten a question from a couple of audience members, which is, "Okay, well I hear you. We should definitely get IA up and running." So are there any practical tips you can offer for preparing data for future AI development?

Seth Earley:
Yes.

Rose Wang:
And what are they? I will help finish that question.

Seth Earley:
No, I think it's really what we're talking about: it's looking at the assets that differentiate your business, that you need to run your business. And when you think of all those things people call the call center for, it's understanding those conversations, it's understanding the assets that are being used, and it's beginning to build out that information architecture and that structure so that you can begin to curate those assets.

Seth Earley:
And think about it this way: you need to train employees. I'm surprised at how poorly organizations are deploying knowledge bases and knowledge graphs, and it's because they've used the wrong approaches over many years; this stuff has not been done correctly.

Seth Earley:
So there are methodologies. You read my book, you'll find the answer to all of these questions, because I've been doing it for 25 years. But the methodologies work; they work. If you do it the right way, you will be successful. You need to curate that knowledge, you need to structure it, and you need to make it readily available to employees, and then use it for the differentiation of your intelligent assistants.

Seth Earley:
That's what needs to happen, and it's getting a handle on all of those different data sources. Most organizations' data is a mess. You need to solve that problem; if you don't solve it, it's going to bite you no matter what. And the organizations that are getting there first are seeing tremendous value.

Seth Earley:
There's one large organization saving hundreds of millions of dollars per year in content operations because they can publish in one place and then syndicate out to all these channels. It's taken them many years to get there, but they are prepared for this next generation of cognitive assistant. They already have that, and it's going to be easy. But for organizations that haven't started, it's going to take years, so you have to start with that. That's my opinion and I'm sticking to it.

Rose Wang:
Okay then.

Alan Pendleton:
Rose can I weigh in on that one?

Rose Wang:
Please.

Alan Pendleton:
Yeah, I just have some practical tips that we've seen in our operations where clients are utilizing BPOs and this applies not only to BPOs, but to any agent population in-sourced or not. And what I've seen used successfully to start seeding a knowledge base or to build one are a couple of things.

Alan Pendleton:
One, most CRMs can make tags a required field. That's a nice little start, because the agent can't save or update the ticket without tagging it. And then you add the accuracy of tags to the QA assessment, where they're held accountable for getting it right. And that's just a very nuts-and-bolts step that any experienced leader can bake right into their CRM or their CCaaS.
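Alan's required-tag tip boils down to validation at save time. A minimal sketch of the idea, with an invented tag list and not any particular CRM's API:

```python
# Approved tag taxonomy; illustrative values only.
ALLOWED_TAGS = {"billing", "shipping", "troubleshooting", "account"}

def save_ticket(ticket: dict) -> dict:
    """Refuse to persist a ticket unless it carries at least one tag
    from the approved taxonomy, mirroring a CRM 'required field'."""
    tags = set(ticket.get("tags", []))
    if not tags & ALLOWED_TAGS:
        raise ValueError("ticket must be tagged before saving")
    ticket["status"] = "saved"
    return ticket
```

The QA side Alan mentions then becomes a matter of sampling saved tickets and grading whether the tag matches the conversation.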

Alan Pendleton:
Another one is democratizing content creation. It's very difficult to seed and build a robust knowledge base through a small centralized team, and the ROI on that team is less easy to demonstrate and prove. But when you have a proper knowledge framework to operate within and you can train your agents and embed knowledge creation curation-

Seth Earley:
To your job, yep [crosstalk 00:47:25] measure-

Alan Pendleton:
... into their [inaudible 00:47:25] job-

Seth Earley:
Yep, absolutely, I agree.

Alan Pendleton:
... just works. So instead of writing an article in addition to working a ticket, they write or use knowledge articles as the way they work the ticket, and so you bake it into the process. Those two steps go a long way toward getting the information architecture established.

Rose Wang:
I see. And I appreciate that, because it can be so daunting when you think, "I want to completely restructure my information architecture." And so I'm going to ask you, Deon, to go even further, in discrete terms if you can. You work with lots of different companies, from SMB to mid-market to enterprise, that utilize AI. And this question does come up: "What does that actually mean, then? What is good enough for AI today?"

Rose Wang:
And then, if we can get to it, I would also love to answer a question from one of our listeners: once we know the discrete data AI needs today for you to be able to start using it, how does that translate into deflection volume? What is a deflection rate that's considered successful, and how is that correlated with the data?

Deon Nicholas:
Absolutely. So, first and foremost, what does being prepared look like, discretely, concretely? One of the things is volume. Some of the things we look at are, "Hey, is this business likely to be a good fit for AI, or for Forethought?"

Deon Nicholas:
The first thing we look at is your volume and the size of your team. Are you getting a few hundred to a few thousand tickets a week, at least? Let's say you're getting at least 1,000 tickets a week, or somewhere in that range of a few hundred to 50,000; then you know you're starting to get to the volume where AI, or something like it, can be useful to you. But if you're only getting five, 10, 20 tickets, it's a very different story.

Deon Nicholas:
And then for modern AI that uses your past conversation history, you usually want a bank of somewhere between 20,000 and 50,000 past tickets, just in general. And that's assuming you have a few different categories or whatnot, and it's assuming these tickets are generally representative of what you're seeing.

Deon Nicholas:
So say you have 100 to 1,000 tickets a week and roughly 50 weeks of data, so about a year of data; you can probably get started with less, and if you have high volume, you're usually in a good spot. And then a lot of the stuff Alan mentioned around setting up your team, things you should be doing anyway: triaging, making that a part of the ticket-closing process, and attaching that to your regular QA and performance reviews. These are all things that actually help you move to that next phase of AI.
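Deon's rules of thumb can be expressed as a quick readiness check. The thresholds are the ballpark figures given on the call, not hard requirements, and the cutoffs chosen here are assumptions:

```python
def ai_readiness(weekly_volume: int, historical_tickets: int) -> str:
    """Rough go/no-go heuristic from the ballpark figures discussed:
    a few hundred-plus tickets a week, and a 20,000-50,000 ticket
    history for models that learn from past conversations."""
    if weekly_volume < 100:
        return "too early"           # only a handful of tickets a week
    if historical_tickets < 20_000:
        return "start labeling"      # volume is there, history is thin
    return "ready to evaluate AI"
```

The middle case is the interesting one: teams with volume but thin history can close the gap with exactly the categorization work described above.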

Deon Nicholas:
And then the second part of your question was: what does good look like? So if I go and launch an AI system, let's say a ticket deflection system, what is a good deflection rate, what are some things I can expect? Before answering that, I will call out that it's very different depending on what kind of business you are, and I usually like to think about things in terms of my own internal quadrant.

Deon Nicholas:
So imagine that your X axis runs from easy to complex tickets and your Y axis is action-oriented versus knowledge-oriented tickets. You can imagine four different kinds of companies. You can have the easy, knowledge-oriented questions, which are a lot of how-tos: "Hey, my router's broken, how do I fix it?" Those sorts of things.

Deon Nicholas:
You can also have the easy but action-oriented tickets. Imagine you're an eCommerce business, and it's, "My thing didn't ship, can you send me a new shoe?" Or, "Could you issue me a refund?" And then you can have the more complex tickets around complex knowledge; imagine you're a SaaS business and you get a bunch of different questions about how to use complex software. Or complex actions: "Hey, can you go and log in to my system for me and figure it out?"

Deon Nicholas:
And so the reason I do want to draw out those distinctions is that, depending on the kind of support team you are, you're going to get different kinds of volumes, different kinds of tickets, and different expectations across the kinds of tools you can use.

Deon Nicholas:
And so if you're in that simple eCommerce category, or the simple how-to category, then over the classes of tickets you're looking at, you can expect 20 to 40% of your tickets to be deflected, or even higher, with a sophisticated enough AI system. Versus if you're in that more complex area, then maybe a five to 10% range is actually pretty good; you also tend to have a higher cost per ticket there, so it's actually just as much ROI for you.
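Deon's quadrant and the ranges he quotes can be sketched as a tiny lookup. The percentages are his ballparks from the call, and treating deflection as depending only on the complexity axis is a simplifying assumption:

```python
def expected_deflection(complexity: str, orientation: str) -> tuple:
    """Map the quadrant (easy/complex x knowledge/action) to the
    ballpark deflection ranges mentioned: 20-40% for simple tickets,
    5-10% for complex ones."""
    if complexity not in {"easy", "complex"}:
        raise ValueError("complexity must be 'easy' or 'complex'")
    if orientation not in {"knowledge", "action"}:
        raise ValueError("orientation must be 'knowledge' or 'action'")
    return (0.20, 0.40) if complexity == "easy" else (0.05, 0.10)
```

The orientation axis still matters for tool choice (knowledge tickets suit answer bots, action tickets need workflow integrations), even though the deflection ballpark tracks complexity.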

Deon Nicholas:
So those are some of the things that I think about. What kind of team am I, what kinds of questions am I getting? And that's just deflection. You might also be thinking about triage AI, operational AI as well as agent assist which, again, starts to become more useful as you get to the more complex types of tickets. So hopefully that answers some of that question.

Seth Earley:
I can [crosstalk 00:52:38].

Rose Wang:
Yeah, please go for it, Seth.

Seth Earley:
You can think of it, and I totally agree, Deon, you made a really good point there, along two axes, where the vertical axis is the domain complexity. So are you in life sciences, or are you a simple eCommerce commodity seller? That's the domain.

Seth Earley:
And then there's the task complexity, or the dialogue complexity. A low-task-complexity, low-domain-complexity question might be, "What's the status of my order?" That's easy. You don't want call center agents taking that call; that's ridiculous. That's easy to automate. Then as you go up that complexity scale, you can do things like helper bots, agent assist bots that will bring that information to the agent, who can then act as the advisor, because you need both.

Seth Earley:
But think of it from those perspectives: how complex is your world, how complex is the problem or the task? And then don't start in the upper-right quadrant, high domain complexity, high task complexity. That's what IBM tried to do with MD Anderson Cancer Center, it's all of cancer, right? A $70 million bust.

Seth Earley:
But I think the other point is, again, to start thinking about the fact that ... Alan made a really good point about embedding the knowledge creation process into the agent's work. But think of it this way: people are creating content all the time; I mean, that's what they do.

Seth Earley:
Think of the engineering organization, think of product development, think of marketing; people are creating content all of the time. But they're doing it in silos, and they're doing it in a way that is not easily repurposed for both self service and call center support.

Seth Earley:
What you want is to be able to author in such a way that the piece of content created by that engineer is tagged so it can be used for self service, can be used for the call center, can feed a bot, can be used for product documentation. That is the state you want to be in, and that is what the large technology firm I mentioned earlier did. They were able to componentize all their content so it can be repurposed across multiple channels; that will get you the biggest bang for your buck. It is a big lift, though, over many years, but it has to be done.

Rose Wang:
Awesome. Well, we have about five minutes left, and rather than start a whole new question, and I think we could probably sit here and talk for easily another hour; this has been so awesome. How about this? I feel like this is a presidential debate, but I would love for each of you to summarize, in one or two sentences, what you think companies looking at implementing AI should be doing today to get prepared for the future. How about we start with Alan?

Alan Pendleton:
Sure. First of all, thanks a lot for letting me be a part of this today, it was great fun. I love being in this group beside some pretty esteemed folks; it makes me feel good. I think one thing, just from a distance, is that we're going to have to learn to coexist and come to grips with that, and we shouldn't fear losing jobs. The accountants-and-spreadsheets example Deon gave was a good one.

Alan Pendleton:
AI is going to have its rightful place and people will have ours. So in "humans plus machine," it's the plus that I'm talking about. That's the area where we're either going to face friction because we didn't properly engineer the overlap and the handoffs, or we're going to find great synchrony and work magic for our customers.

Alan Pendleton:
So one way I would suggest to do that, and I mentioned this earlier, is to judge AI by similar standards to humans. Why wouldn't a chatbot get a CSAT score or a QA grade? It will shine on the low-complexity, easy tickets, the bottom-left quadrant, and that'll make it obvious to everyone that that's its rightful place.

Alan Pendleton:
And where it struggles to keep up, then it's not time yet, and we need a routing system that can award business to AI where it's excelling and business to humans where they're excelling, and then we become complementary.

Alan Pendleton:
That is what we want: AI helping people and people helping AI. We've got to train it, we've got to tune it, we've got to do all that stuff. And that's what's exciting to me about all of this: the ROI is best for humans, for bots, and for AI in general when we find ways to work together. So let's cross that bridge mentally, accept that we're on the other side of it, and then we can get down to the business of achieving it.

Rose Wang:
Awesome, thank you so much, Alan. Seth, you're next on my screen so-

Seth Earley:
Sure.

Rose Wang:
... love for you to go. Awesome.

Seth Earley:
So I will try to keep it to a minute so that Deon has some time. I would say begin by looking at the use cases, scenarios, and problem areas. What is the biggest bang for the buck in terms of the problems agents are having, the problems customers are having, the problems revealed on your website? You can really tune your use cases to hit those areas. You should always be able to show ROI by hitting the places that have the biggest problems.

Seth Earley:
And you may be rigging the game; that's fine, if you want to prove the value of this stuff. So build use cases. Always have baselines, make sure you're instrumented for baselines so you can measure the impact, and pick an area you know you can impact. Then target that, use it as your proof point, and start building out from there. But it has to start with the architecture, the content, the data, and the use case.

Rose Wang:
Very practical. Deon?

Deon Nicholas:
Awesome. I'll start by echoing what Alan said earlier: thank you, Rose, and thanks, Seth and Alan, for having me here; it's great to stand among giants. And secondly, my takeaway is we're all learning, we're all trying to figure it out.

Deon Nicholas:
As hopefully folks can see on this call, you have folks who are AI practitioners, you have folks who are consultants, you have folks in the BPO space. And what's great about this ecosystem is we're all learning together and kind of growing as an ecosystem which, ultimately, means better things for our customers and, literally, people across the planet, so I'm really excited to be here.

Deon Nicholas:
I think my last takeaway is that because we are all learning, feel free to come and talk to us, the folks on this call, whether you're exploring BPOs, AI, or consultancy. Even if you're not necessarily ready, or you don't know if you're ready, having that early conversation can always be helpful, and we'll work on these problems together. I think that's the exciting part.

Rose Wang:
Thank you.

Seth Earley:
We are headed toward an inflection point in history. This is going to be a reality, and it's just going to be the way things are done. And we'll look back at this and say, "I remember the day when these things were awful, and now we use them all the time." That's what the future will be.

Rose Wang:
Well, you heard it here, everyone. Thank you all so much, we have everyone's contact information, so please reach out with any questions. Any way we can be helpful, you can tell that we love the education piece and we're all learning together. So again, thank you all so much for joining me and thank you so much to our esteemed panelists.

Seth Earley:
The book offer stands, I sent it in a text.

Rose Wang:
Okay.

Seth Earley:
For several people that send me a note, I'll send you a signed copy of the book. There it is, look at that.

Deon Nicholas:
Ooh, I want one.

Rose Wang:
I know, I just have a draft.

Seth Earley:
We'll get you one.

Alan Pendleton:
All right, thanks everyone.

Rose Wang:
Thanks everyone, have a great rest of your-

Deon Nicholas:
Yes, thank you so much for having me as well, this was [inaudible 01:00:29].

Seth Earley:
Bye.

Rose Wang:
Bye-bye.

Alan Pendleton:
Thank you.