10 minute read 3 Jul 2018

How robust planning via AI solutions can minimize pitfalls

By

Gavin Seewooruttun

EY Asia-Pacific Artificial Intelligence and Analytics Advisory Leader

Elevating people through emerging technologies. Methodical and structured approach. Promotes team and individual autonomy. Enthusiastic and success-driven.



Organizations are better positioned to unlock AI’s advantages when they start by aligning strategy and technology with purpose.

Though he wrote it more than 25 years ago, business guru Peter Drucker's observation is as relevant today as ever: "Plans are only good intentions unless they immediately degenerate into hard work." And it applies particularly well to organizations seeking to harness the tremendous potential of artificial intelligence (AI) for their business growth.

The journey to AI success can be challenging — even overwhelming for companies still exploring how to deploy powerful AI-related technologies. Organizations are better positioned to unlock AI’s advantages when they start by aligning strategy and technology with purpose, and then coherently define which aspects of their business can most benefit from AI-related investments. These steps then enable the organization to move ahead confidently in delivering the technology component of AI solutions.

Three key differences set AI solutions apart from “traditional” IT solutions. Together, these differences introduce substantial risks on the route to implementing AI successfully in the business. To avoid the pitfalls that each of these distinct characteristics represents, organizations should consider the following questions and steps to help them navigate a smooth path forward.


Chapter 1

Training AI solutions: advance planning minimizes cost and time burden

A key difference from traditional IT solutions is that AI solutions tend to be trained rather than coded.

Most AI solutions combine elements of rule-based and nondeterministic logic. For example, robotic process automation (RPA) solutions have been implemented by several organizations to automate business processes that can be defined as fixed sequences or decision trees. To expand the capability of RPA to cover decisions that involve judgment, organizations are combining RPA, cognitive computing and machine learning technologies into intelligent automation (IA) — solutions that learn through training. That training yields powerful capabilities, but the process of training AI solutions is fraught with costly traps.

Chief among these is the tendency to underestimate the time and effort required to train — and periodically retrain — AI solutions. Much of the development work that has traditionally been done by IT is now shifting to people across the business; specifically, to subject-matter experts, who may have limited capacity to spare for these new responsibilities. For example, in a recent project with an EY client, almost 40% of the client organization’s time was spent by trainers asking questions and rating the answers of a cognitive system. That is important work, but it can represent a significant cost to the business.

Three questions can help organizations design an efficient training process for AI solutions.

1.  How much expertise is required?

Identifying the level of business expertise that will be required to “train” the AI solution enables the team to build the appropriate scope of effort and schedule into the plan, helping to ensure that expertise is available when required. This mapping effort can also flag potential gaps, especially in the time available from the appropriate domain experts. The organization can then assess whether to engage consultants to train the AI solution fully or partially, a tactic that can reduce the demands on staff and improve the solution’s performance by drawing on expertise not yet available inside the organization.

Another key consideration is ensuring that AI training teams can tap into the software engineering skills necessary for successful AI coding. Most AI projects will eventually encounter software performance issues, largely because the domain experts are not likely to also have the software engineering expertise to accurately code the algorithms and business rules applied within the AI solution. Ensuring that the solution training process includes qualified software engineers will reduce the need for costly fixes down the road.

2.  Are there more efficient ways to train the solution?

Some AI solutions can be trained in “batches,” using preexisting data. This requires a data set that includes the correct output given a predefined set of inputs; for example, a simple table of inputs and the corresponding decision of the expert. Another example is a “training query set,” which is a list of queries that represent common questions and answers. The AI solution can use this query set as a starting point for its learning. Another route to limiting overall training effort is to purchase a pre-trained solution. Increasingly, AI software vendors offer products that have been pre-trained using industry and domain knowledge.
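A “batch” training set of the kind described, inputs paired with the expert’s recorded decision, can be sketched in a few lines. The claims-triage fields and decisions below are illustrative assumptions, not data from any of the article’s examples:

```python
# Hypothetical sketch: "batch" training from a preexisting table of inputs
# and the corresponding expert decision, rather than live Q&A with experts.
from collections import Counter

def train_from_batch(records):
    """Learn the most common expert decision for each input pattern."""
    votes = {}
    for inputs, decision in records:
        votes.setdefault(inputs, Counter())[decision] += 1
    return {inputs: c.most_common(1)[0][0] for inputs, c in votes.items()}

# Toy historical table: (claim_type, amount_band) -> expert decision
history = [
    (("auto", "low"), "approve"),
    (("auto", "low"), "approve"),
    (("auto", "high"), "review"),
    (("property", "high"), "review"),
]

model = train_from_batch(history)
print(model[("auto", "low")])  # the decision most experts made for this pattern
```

Real solutions would generalize beyond patterns seen in the table, but the preparation step is the same: assemble correct outputs for predefined inputs before training begins.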

For example, a risk management and regulatory compliance firm used pre-trained solutions to identify and address common regulatory citations. In another case, a large software firm used this approach to build domains for its AI suite and associated applications. In addition to the reduction in time and cost that pre-training solutions can deliver, some even include the ability to access additional industry data that can enhance organizational data for richer insights.

3.  What is the optimal training effort to generate the best impact?

How much is too much? Organizations will want to identify the optimal amount of training that equips the AI solution to generate the best results. After all, overtraining not only wastes effort but also risks “overfitting” — that is, developing a solution that is too finely tuned to the cases in the training data set, losing valuable generality when applied to broader data. A good approach is to first define a benchmark to measure the performance of the solution.
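The benchmark-first approach can be illustrated with a toy early-stopping rule: train in cycles, score each cycle against a hold-out benchmark, and keep the cycle where the benchmark peaks. The score values below are invented purely to show the overfitting pattern:

```python
# Hypothetical sketch: choose the training cycle where hold-out benchmark
# performance peaks, instead of training for as long as possible.
def best_cycle(history):
    """history: list of (train_score, benchmark_score) per training cycle."""
    return max(range(len(history)), key=lambda i: history[i][1])

# Training accuracy keeps rising, but the hold-out benchmark peaks at
# cycle 2 and then declines -- the signature of overfitting.
scores = [(0.70, 0.68), (0.82, 0.74), (0.90, 0.77), (0.96, 0.73), (0.99, 0.69)]
print(best_cycle(scores))  # 2
```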

For example, a client was able to compare the customer satisfaction ratings for interactions with a chatbot against those with call center staff after each training cycle, which yielded important insights that helped to shape the chatbot training. Another client used a customer control group to measure the sales uplift attributable to a next-best-offer solution.

One important yet often overlooked consideration is the time required to periodically retrain AI solutions. As the environment in which the solution operates evolves, its performance may suffer. Monitoring and periodic retraining can ensure continued relevance. For example, the algorithms that predict customer behavior may need to be retrained as customers become conditioned by repeated messages. Similarly, training query sets should be regularly updated to reflect new information.
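Periodic retraining can be driven by a simple monitor along these lines. The tolerance, window size and score values are illustrative assumptions:

```python
# Hypothetical sketch: flag an AI solution for retraining when its recent
# live performance drifts below the benchmark set at deployment.
def needs_retraining(live_scores, baseline, tolerance=0.05, window=3):
    """True if the mean of the last `window` scores falls more than
    `tolerance` below the deployment baseline."""
    recent = live_scores[-window:]
    return sum(recent) / len(recent) < baseline - tolerance

# Performance has slipped from 0.81 toward 0.71 -- time to retrain.
print(needs_retraining([0.81, 0.78, 0.74, 0.71], baseline=0.80))  # True
```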


Chapter 2

Better input, better output: managing data “ground truths”

Defining the ground truth also relies on the input of the domain experts in the organization.

The old adage “garbage in, garbage out” certainly applies to AI. Avoiding “garbage in” requires establishing and managing a ground truth for the data. This “ground truth” is the gold standard: the AI training described above “teaches” the solution what that standard is and how to achieve it. Not surprisingly, the solution’s performance will be commensurate with the quality of the information on which it draws.

The first step in managing the ground truth is to understand where the most current and reliable information for each domain exists. Ideally the organization has an information management strategy (IMS) that identifies the stages and controls that apply to its information assets. In this case, the IMS shows where to source the latest and most trusted version of the information to feed into the AI solution. Organizations without an IMS may wish to develop one, at least for the domains that will be covered by AI solutions.

Finding domain experts is often easier said than done, particularly when the experts have different opinions on the same information. For example, a project used AI to assign a probability of fraud to insurance claims by learning from claims agents. The project team assumed that if different agents had different views, then the solution would provide the average opinion. In fact, the solution was trained by a small group of agents who did not have strong abilities in detecting fraud. As a result, the AI solution was not sufficiently equipped to assign fraud probability.

A better approach is to objectively measure performance and then select the most successful people to train the solution. Using the earlier example of the call center chatbot, customer satisfaction ratings could be the basis on which to select the agents best suited to train the chatbot for optimal performance.
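Selecting trainers on an objective measure rather than availability can be sketched minimally; the agent names and satisfaction ratings below are invented:

```python
# Hypothetical sketch: choose the agents who will train the AI solution
# by an objective KPI (here, average customer satisfaction rating),
# rather than by whoever happens to have spare time.
def select_trainers(agents, top_n=2):
    """agents: dict of agent name -> average satisfaction rating."""
    return sorted(agents, key=agents.get, reverse=True)[:top_n]

ratings = {"agent_a": 4.6, "agent_b": 3.9, "agent_c": 4.8, "agent_d": 4.1}
print(select_trainers(ratings))  # ['agent_c', 'agent_a']
```

The same pattern applies to the fraud example above: measuring agents’ actual detection accuracy first would have kept weaker assessors from defining the ground truth.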

Be aware, though, of the potential unintended consequences of defining KPIs to measure the performance of an AI solution. For example, a company trained a next-best-offer solution as a way to uplift sales. Yet, in some cases, improving sales actually reduced marginal profitability. That’s because customers were buying the products on offer in isolation — or together with low-margin items — which did not offset the cost of the offer.

Another risk that has captured the attention of governments is the potential for bias in algorithms, in particular bias that could result in adverse social consequences, such as denying insurance to a segment of the population or robotic financial advisors creating systemic market risk. Part of the answer here may involve adding a “compliance engine” to each solution, which could monitor adherence to decision parameters that are governed by the organization’s management.
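One way to picture such a “compliance engine” is as a rule gate that every AI decision must pass before it takes effect. The rule name and decision fields below are hypothetical, not a real product API:

```python
# Hypothetical sketch of a "compliance engine": each AI decision is checked
# against management-governed rules; violations are escalated for review.
def compliance_check(decision, rules):
    """rules: dict of rule name -> predicate that must hold for the decision."""
    violations = [name for name, ok in rules.items() if not ok(decision)]
    return ("escalate", violations) if violations else ("allow", [])

rules = {
    # Governance rule: never deny cover on the basis of postcode segment alone.
    "no_segment_denial": lambda d: not (d["action"] == "deny"
                                        and d["basis"] == "postcode"),
}

print(compliance_check({"action": "deny", "basis": "postcode"}, rules))
```

Because the rules live outside the trained model, management can tighten or audit them without retraining the solution itself.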


Chapter 3

Technology choices amid fast-evolving innovations

Organizations are increasingly embracing digital strategies that involve short cycles to test and implement technology.

With persistent pressure to keep innovating, AI software vendors are shifting to shorter release cycles for their products. Also, organizations are opting for digital strategies involving short cycles to test and implement technology. In these conditions, it’s no surprise that different vendors prioritize different capabilities, resulting in some meaningful differences in the performance of AI software across a range of AI workloads. Amid the rapid pace of AI evolution, organizations will need to take care in selecting their technology stacks.

Most vendors focus on cloud-hosted services that can be accessed via application programming interfaces (APIs). While APIs can be an easier route to developing AI solutions than attempting to do so “from scratch,” they may be limited in the degree to which they can be customized to fit the organization’s specific use cases.

As an example, we found a gap of almost 50% in the performance of voice-to-text conversion services as the level of background noise increased. For companies aiming to implement a chatbot to converse with customers in busy environments — such as an airport, shopping mall or manufacturing plant — this performance gap could have a meaningful impact on the customer experience.

Another important consideration is the service host location. As regulators introduce tighter data privacy legislation, such as Europe’s General Data Protection Regulation, international transfers of personal data may need to be restricted to avoid stiff fines. AI solutions may increase compliance risk because personal information embedded in unstructured data files may go unnoticed, and the vendor’s cloud services may transmit this data across sovereign boundaries. Another implication of using cloud services hosted far away is that the speed and stability of the services may impact the user experience.
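A minimal guard against inadvertent cross-border transfers is to check a service’s hosting region before any personal data is sent. The region names and the allowed set below are assumptions for illustration:

```python
# Hypothetical sketch: block calls to cloud AI services hosted outside
# the jurisdictions permitted for this organization's personal data.
ALLOWED_REGIONS = {"eu-west", "eu-central"}  # assumption: EU-only personal data

def can_send_personal_data(service_region, allowed=ALLOWED_REGIONS):
    """True only if the service is hosted in a permitted jurisdiction."""
    return service_region in allowed

print(can_send_personal_data("eu-west"))  # True
print(can_send_personal_data("us-east"))  # False
```

In practice this check belongs in the vendor-selection and architecture review stages, since personal information hidden in unstructured files is exactly what such a gate is meant to catch before it crosses a sovereign boundary.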

Lastly, one more factor to map in advance is the extent to which the AI solution must integrate with other systems. The choice of technology will affect the amount of bespoke development required to successfully integrate it with these other systems. At present the major software vendors provide limited integration between their own AI products and virtually none with the products of other vendors. In response, some consulting firms are developing reusable integrations, which can help accelerate AI programs within organizations.

Careful, robust planning can enable companies to address these challenges and avoid the potential pitfalls of AI initiatives. Of course, good AI development practice is no substitute for strong leadership; indeed, many of the execution issues raised here are at heart people and organizational issues. Organizations that adopt rigorous AI planning and work to bridge the gap between early and late adopters of AI within the company will be better equipped to launch a successful AI journey.

Summary

Companies still exploring how to deploy powerful AI-related technologies must take into account three key differences that set artificial intelligence apart from “traditional” IT solutions.
