How artificial intelligence is disrupting the finance function
In this episode of the Better Finance podcast, EY Global Trusted AI Advisory Leader Cathy Cobey discusses the technologies covered by AI and how they are being used for finance.
Trust is the foundation on which organizations can build stakeholder confidence and active participation with their AI systems. However, in the finance world, one mistake, or the perception of a mistake, can harm an organization’s trust in the technology. How do finance functions build trust into their AI?
Podcast host Myles Corson welcomes Cathy Cobey, EY’s Global Trusted AI Advisory Leader, to discuss how she helps organizations understand some of the new risks that AI brings and how, most importantly, they can start to create the right governance and control mechanisms.
They explore the definition of AI, and its use in the COVID-19 landscape. With AI introducing new risks and impacts that have historically been the purview of human decision-making, organizations need a new framework for identifying, measuring and responding to the risks of AI to make it operational.
The discussion also covers how AI is being used in the finance function, from the early days of AI predominantly being used for anomaly detection, to some of its current uses, which include valuing securities, financial forecasting, and data quality and assessments.
Key takeaways:
AI is really a broad spectrum of technologies starting to be applied in all industries.
In response to COVID-19, AI is being used to create a global map to track cases, and in labs to try to identify potential treatments.
There are serious operational risks of using AI without a robust governance and control mechanism around it.
To build organizational trust, AI needs to be trained properly, with a lot of boundaries around it at first. Then one has to monitor its performance after it is put into production to understand its flaws.
For your convenience, a full text transcript of this podcast is also available.
Myles Corson
Hello and welcome to The Better Finance Podcast, a series that explores the changing dynamics of the business world and what it means to the finance leaders of today and tomorrow. I’m Myles Corson from Ernst & Young and I’m your host. I’m delighted today to be welcomed by Cathy Cobey. With 25 years of experience as a Technology Risk Advisor, Cathy is the EY Global Trusted AI Advisory Leader and oversees a global team that works on the ethical and control implications of artificial intelligence, AI, and autonomous systems. Cathy serves on a number of technical advisory committees to develop industry and regulatory standards for emerging technology. Welcome, Cathy.
Cathy Cobey
Thanks, Myles, glad to be here.
Corson
Cathy, maybe you can just start and tell us a little about your role and how you got into AI as a subject?
Cobey
I became EY’s Global Trusted AI Advisory Leader about two years ago. What might surprise some of your listeners is that I’m actually a CPA by background. Even though I’ve been working in technology risk for over 25 years, I don’t code. I’m self-taught, and I’ve immersed myself over the last couple of years in understanding the new risks that AI brings and, most importantly, how we can start to create the right governance and control mechanisms. You know, a lot of the advisory committees that I work on are actually with different levels of global and country leadership that are looking to develop standards and policies around how to better guide organizations, and users of these AI systems, so they can better trust the systems. So it’s an evolving field and one that I’ve found quite exciting to be involved in.
Corson
Cathy, we hear a lot about the potential of AI to disrupt existing ways of working. Can you maybe take a couple of minutes just to level set about what we mean when we say “AI” and perhaps share some examples of the most successful use cases that you’ve been seeing?
Cobey
You know, it’s surprising that that question comes up a lot, but it’s actually really difficult to answer because there probably isn’t a common understanding or definition right now for artificial intelligence. Depending on who you talk to they’re going to include different types of technologies, but I think the main thing for your listeners to understand is that AI is really a spectrum of technologies from quite simple to very complex and they can perform different functions.
So you’ve got artificial intelligence systems that are designed to read numbers and to do a lot of detailed mathematical calculations on them, make predictions. You’ve got others that are watching video, distilling information from that, others that are having verbal conversations with humans. It’s quite a broad spectrum of AI.
And we really are seeing AI start to show up in all industries. It’s in drones, in autonomous vehicles, in robotics in a manufacturing plant. You’re seeing it helping with credit profiling or predicting the value of an investment in financial services. Consumer products companies use it a lot to try to predict what their customers are going to want. And we’re starting to see it in a lot of mainstream areas already that five or ten years ago we would never have thought possible, because these are deeper, higher-level cognitive functions that computers could just never replicate, and now they’re starting to.
If you just take our current situation with COVID-19, there are all sorts of ways in which AI is being leveraged as we work together globally to combat it: from the simple algorithm being used by Johns Hopkins University to create and maintain a global map of COVID-19 cases, to AI being used in labs to try to identify potential treatments that might work for the symptoms of COVID-19.
And so I think there are also going to be some potential use cases of AI as we all move to working more remotely and having more digitized records. It will probably open our eyes to some of the use cases and the potential of a more digital economy and lifestyle moving forward. It’ll be interesting to see, as we move back after all of our self-isolation is over, what the new normal looks like. And I think AI is going to be a continuing player in all of the transformation we’re going to see.
It really is quite broad, and yet it’s still in its infancy in regards to the complexity of the technology, as well as in the use cases we’re seeing. So there’s still a lot of room for it to grow and for companies to invest in it.
Corson
You mentioned it being in the infancy and, clearly, there are some relevant use cases out there. But I think one of the things that many of us would have heard about in terms of the concerns that are being raised is how do we trust in something that is so new where, perhaps for many of us, we don’t really fully understand how the results that are being generated were achieved? Is AI different from other technologies in terms of how that trust is established? How should business leaders think differently as a result of that?
Cobey
There are certainly some key attributes of AI that make it different and make that trust harder. The first is that it is a probabilistic system. It is usually used in areas where you have incomplete information, so, just like a human, it needs to make some kind of judgement based on the information it has. And really what we’re finding right now is that the performance of an AI, how accurate it is, depends on a lot of different factors: how complete its data is, the quality of that data, how it was trained, and how representative that training is of the environment it’s operating in now that it’s in production.
So I like to use the example of how I taught my two daughters to ride a bike when they were five. I had to recognize that they needed a lot of instruction at first. There was a lot of heavy training needed, and eventually they got comfortable in a certain set of environmental conditions, and I built more trust that they could handle themselves in that setting. But as I started to think about them moving outside our little crescent onto bigger roadways, with lights and more cars, you know, I certainly wasn’t going to trust them in that at the start.
So I think what’s really important is for organizations to appreciate that AI is also going to be trained and developed through that same kind of learning mechanism. What you want to do is put a lot of boundaries around it at first, because it’s really difficult to train it completely in a lab. You’re going to need to put it into production, monitor its performance, and really understand where it’s going wrong. The other thing that is important for AI is to recognize its use cases. Replacing some of the functions we currently rely on humans for puts it into a different realm of areas where it can make mistakes.
One of the most common areas that is talked about a lot is bias and fairness. We aren’t usually that concerned with legacy technology in those areas, because we know that the developers have built a set of rules that we feel are legal, follow regulations, and follow whatever ethical or social corporate values apply. But AI is learning; it’s training itself off data sets, and it could train itself to operate in a way that is actually discriminatory or biased against a group of people.
And how can you as an organization understand that, measure that, and ensure that you’ve got the right remediation in place? That’s what I think is probably the biggest challenge right now in trusting AI: no one has a lot of experience with it, so they’re not sure when it’s going to work well and when it’s going to fail. And if you don’t have that kind of understanding, you’ll always be conservative and assume it’s going to fail more often than it probably will.
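[Editor’s note: to make the measurement question concrete, here is a minimal illustrative sketch, not from the episode, of one commonly used fairness metric, the disparate impact ratio. All group labels and decisions below are hypothetical.]

```python
# Illustrative sketch: one simple way to measure bias in a model's
# approval decisions is the "disparate impact" ratio, i.e. the
# approval rate of an unprivileged group divided by that of the
# privileged group. A common rule of thumb flags ratios below 0.8.

def disparate_impact(decisions, groups, privileged):
    """Ratio of approval rates: lowest unprivileged rate / privileged rate."""
    approved = {g: 0 for g in set(groups)}
    total = {g: 0 for g in set(groups)}
    for d, g in zip(decisions, groups):
        total[g] += 1
        approved[g] += d
    rates = {g: approved[g] / total[g] for g in total}
    unprivileged = [g for g in rates if g != privileged]
    return min(rates[g] for g in unprivileged) / rates[privileged]

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(decisions, groups, privileged="A")
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.25, well below 0.8
```

A real fairness assessment would look at several metrics (equal opportunity, calibration, and so on), since they can conflict; this sketch only shows how a single measurement turns a vague concern about bias into a monitorable number.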
Corson
I like the way you describe that as setting boundaries initially and understanding the limitations as the AI learns. As we think about building sustained trust, companies obviously need to consider a wide range of stakeholders: customers, suppliers, communities, regulators, and beyond. As AI continues to change, both in how organizations use it and in how it evolves and learns as it operates, are there any recommendations or advice you can give our listeners as they consider the governance, accountability and transparency issues needed to really make AI operational within their organizations?
Cobey
The first thing I would say is don’t underestimate how important those governance and control mechanisms are. We’re used to a world where, by the time an organization actually brings a technology in, it’s been tested a lot, and there’s an inherent expectation that it comes with all the right control mechanisms. AI is different, because it is so early in its development and it is so accessible. You can just go onto an open source library, or to one of the technology companies, and someone in your business organization can pull an algorithm down and start using it against a data set, sometimes within hours. It’s also a technology that doesn’t lend itself to the same procurement or strategic-sourcing gates that you might have for some of your other technologies.
It’s important to recognize that anyone who plans to start using this technology is getting the functionality, but not always all the control functions, because it could be something pulled off a library without a lot of thought given to the control functions that were required. It’s kind of a case of user beware.
And there are some AI technologies right now that are so on the cutting edge that the control functions haven’t even been designed; academic researchers are still trying to figure out what they should be and what they would look like, and they haven’t yet been implemented in any organization. Organizations need to recognize that, as they go to use these systems, different use cases are going to require different control and governance practices. It’s not going to be one size fits all. So what you really need to do is build in all of those requirements right upfront.

But I would also say: don’t underestimate how much of your existing governance and control functions you can utilize. Think about what you can retool for AI, where you have gaps, and how quickly you can fill those gaps. Until those gaps are filled, you may decide that you’re only going to use AI in certain lower-risk opportunities, but I would still say you should go ahead and start using it, because there’s nothing like firsthand experience to really understand a technology. And that firsthand experience, kind of like riding the bike, is important for building those capabilities in your organization.
Corson
So there are some existing practices and control experience that will be applicable?
Cobey
For sure. And, you know, there is a lot of consensus, at least at a theoretical level, across technology companies, governments and organizations about what they want to have in place. The key thing right now is just to develop those practices and get them implemented into organizations. So whether that’s around explainability levels, the visibility you have into the decision framework that AI systems are using, or what fairness and bias mean in the context of an AI system, there’s quite a lot of consensus on what the principles should be. It’s now a matter of thinking about what they actually look like in practice. There’s a lot to build from so far.
Corson
You also mentioned some of the resources available for companies to put their own data sets against in terms of the algorithms that exist. As companies think about their AI investment, what should they be doing to balance what they build themselves versus leveraging those existing services and providers?
Cobey
I think in the last couple of years there was a predominance of in-house-built AI systems among organizations using AI. That requires a lot of deep technical data science resources, and those are in very short supply around the world. More entrants are now coming into AI, and even the early entrants are finding how costly it is to maintain those kinds of resources. So I think we’re starting to see a shift away from purely in-house-built AI systems and towards leveraging more of the existing technologies that are available. But what I would say is that even the organizations that predominantly build in-house most likely aren’t building it all from scratch. It’s really componentized: for the different functions the AI needs to perform, you can actually pull some of those from open source or from something already prebuilt by a technology company.
I do think it’s going to be a bit of a hybrid, where you pull different predesigned functions that you require into your algorithm and then tack on, or supplement that with, the enhancements you need to make for your own use case. What that does is complicate the design, because it’s not all in-house. It could be that you’re now working with multiple vendors, and you need to think about how much transparency you have into the code you’re leveraging from these third parties.
So it is going to complicate the governance structure, even though it’ll probably accelerate the development cycle. It’ll be interesting to see, but I do think we’re going to find a lot more organizations relying on third parties.
Corson
It sounds like what you’re describing is that what will be proprietary is how an individual organization puts the components together to solve the specific business issues it faces.
Cobey
Yeah, and I wouldn’t underestimate the quality, quantity and sourcing of data either. That’s a really key component, probably much more important for AI than for previous legacy technologies, and it’s also a key determinant of the quality of the outcomes you’re going to get.
Corson
We’ve talked generally around applications of AI and it’s clear that it has significant potential to drive disruption across sectors and across functions. Can you perhaps share some specific examples about how AI is being used in the finance function?
Cobey
Sure. I think that in the early days, where finance was predominantly using AI systems was in anomaly detection. We’ve seen it used to identify fraud and to look for unusual transactions. So it could be an organization with a lot of subsidiaries looking to see if there are any unusual variances, blips or reductions in revenue or costs. You know, one of the reasons it was used so early on is that there isn’t as high a need for controls and governance over that type of application, because ultimately the AI system is just telling you something you maybe didn’t know before; it’s not necessarily replacing a lot of the analysis and investigation your human finance directors would previously have been doing.
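[Editor’s note: as a minimal illustration of the kind of anomaly detection described here, not from the episode, the sketch below flags an unusual revenue swing across subsidiaries. The subsidiary names and figures are hypothetical, and real systems use far richer models.]

```python
# Illustrative sketch: flag unusual month-over-month revenue changes
# across subsidiaries using a robust z-score based on the median
# absolute deviation (MAD), which, unlike the plain standard deviation,
# isn't inflated by the very outlier we're trying to catch.
import statistics

monthly_change = {          # percent change in revenue vs. prior month
    "Subsidiary A": 2.1,
    "Subsidiary B": 1.8,
    "Subsidiary C": -14.5,  # an unusual drop worth investigating
    "Subsidiary D": 2.4,
    "Subsidiary E": 1.9,
}

values = list(monthly_change.values())
med = statistics.median(values)
mad = statistics.median(abs(v - med) for v in values)

# 0.6745 scales MAD to be comparable to a standard deviation;
# a robust z-score above 3.5 is a common cutoff for review.
anomalies = sorted(name for name, v in monthly_change.items()
                   if 0.6745 * abs(v - med) / mad > 3.5)
print(anomalies)  # ['Subsidiary C']
```

As the episode notes, the output is only a prompt for human investigation: the flagged subsidiary still needs a finance director to decide whether the drop is an error, fraud or a legitimate business event.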
As we start to see AI mature and particularly in supervised machine learning, we’re starting to see finance organizations use AI in a more predictive fashion which actually might start to replace some of the previous estimation and judgement procedures that they perform.
One area would be in valuing securities, so the price of a security. Another we’ve seen is financial forecasting, whether of revenues or of costs into the future. Another area where we’ve started to see AI being used is straight data quality and assessment, as well as repairing data: coming up with derived values for missing data, using AI in what is considered a much more granular way to cluster different records and come up with more accurate results. I think finance is really just scratching the surface. We’re going to see a lot more applications open up to finance as the years progress.
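[Editor’s note: the following is a minimal illustrative sketch, not from the episode, of deriving a value for missing data from similar records rather than a global average. The records are hypothetical, and grouping by a single category stands in for the richer clustering described above.]

```python
# Illustrative sketch: fill a missing amount with the mean of
# "similar" records (here, records sharing a category) instead of
# the overall mean. A real system would cluster on many features.
records = [
    {"category": "travel",   "amount": 120.0},
    {"category": "travel",   "amount": 140.0},
    {"category": "software", "amount": 900.0},
    {"category": "software", "amount": 1100.0},
    {"category": "travel",   "amount": None},   # missing value to impute
]

def impute(records):
    # Accumulate per-category sums and counts from complete records.
    sums, counts = {}, {}
    for r in records:
        if r["amount"] is not None:
            sums[r["category"]] = sums.get(r["category"], 0.0) + r["amount"]
            counts[r["category"]] = counts.get(r["category"], 0) + 1
    # Replace each missing amount with its category's mean.
    filled = []
    for r in records:
        amount = r["amount"]
        if amount is None:
            amount = sums[r["category"]] / counts[r["category"]]
        filled.append({**r, "amount": amount})
    return filled

result = impute(records)
print(result[-1]["amount"])  # 130.0, the mean of the other travel records
```

The design point is granularity: imputing from the travel cluster gives 130.0, whereas a global mean across all four complete records would have given 565.0, badly distorting the record.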
Corson
That’s very helpful, and it makes sense to start with areas that many of our listeners are probably familiar with, in terms of machine learning and automation and how AI can turbocharge those examples. As organizations think about the digital transformation of their finance function, are there any other technologies that may also be opportunities to bring AI in? I’m wondering about things like blockchain, for example.
Cobey
Well, certainly I think you’re going to see a lot of collaboration amongst all these emerging technologies. The Internet of Things and sensors can create a lot more data to feed into AI algorithms, and that data could be sourced from or stored on a blockchain. AI is being used with natural language processing and robotics to navigate full processes seamlessly, in an automated fashion. A couple of years ago, robotics could maybe only automate 60 percent of a process, because a certain level of cognitive functioning was required, maybe even something as simple as reading a PDF document, and natural language processing can now fill that gap. I think you’re going to start to see a lot of these systems get quite complex as these different emerging technologies work together, as well as alongside the legacy technology.
Corson
You talked earlier about the importance of data in successful use of AI and obviously finance functions are often seen as the custodian of that data. Associated with that there’s obviously a number of repetitive and routine tasks that often go into preparation of the financial information and data. So those kind of create opportunities for uses of AI going forward. What questions should CFOs and finance leaders be asking about AI and the role it can play both in terms of transforming their own function, but also the broader investment case across their organizations?
Cobey
Probably the very first question I’d have them ask themselves is: does it have to be AI? AI can be really powerful, but it can also be really expensive to both design and maintain. They need to recognize that AI is not going to be the technology to solve all their problems; we expect it will be the right answer in probably less than ten percent of cases. So what they really need is a good process at the beginning to work out what the right technology is. In a lot of cases, very repetitive, routine tasks can be more cheaply automated through robotic process automation or maybe blockchain, but then there’s going to be that component that you do want AI for.
The other thing is that CFOs and finance organizations are probably the holders of the control knowledge in their organization. What they need to be thinking about is how to utilize their backgrounds, as I have with my CPA, to help the organization navigate and understand what the appropriate level of governance and controls is, and how they can guide their organization through those decisions in the build-out of the governance and control framework. And ultimately, they should challenge themselves to think about, as they go to automate some of these functions, how they can use the talent and knowledge of their existing finance team members to do further analysis.
I think a lot of people look at AI thinking it’s going to replace a lot of jobs, but to be honest, I think there’s a lot of work that simply hasn’t been done because there weren’t enough resources, there wasn’t enough time, or there wasn’t enough information, which AI can now provide. I do think most of my clients find they don’t actually replace a lot of FTEs with AI. What they do is enable them to perform much higher-order analytics and analysis, and who better than a finance organization to leverage that skillset? I’d really have them think that through, as well as challenge themselves on how they partner their AI projects with a concrete strategy for better utilizing their human operators moving forward.
Corson
Cathy, we’ve covered a lot of ground. As we wrap up, what’s the one thing that you think, based on your purview across technology, has the biggest potential to create a better finance organization?
Cobey
I’m really excited about all the different use cases I’m seeing for AI. And as much as my day-to-day job is to warn everyone about where it can fail and how it can potentially go wrong, I’m really excited to see just how engaged everyone is in recognizing that, for this technology in particular, control and governance are important. I think the finance organization can really be front and center in that conversation. I also think AI can help us make better decisions, much quicker decisions, and really pinpoint areas where things could be going off the rails.
I talk to a lot of my clients about trying to move beyond using controls to detect problems that have already occurred, and instead think about designing controls that can detect a problem that’s starting to emerge and catch it before it even happens. That’s where I think the power of AI will come in: the ability, hopefully, to predict control failures, predict financial losses, and actually stop them from happening in the first place.
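[Editor’s note: a minimal illustrative sketch, not from the episode, of a control that detects an emerging problem rather than a completed one. The daily error rates and thresholds below are made up; this uses a classic one-sided CUSUM statistic, a simple stand-in for the predictive controls described above.]

```python
# Illustrative sketch: a one-sided CUSUM control statistic accumulates
# small upward drifts in a daily error rate, raising an alert while
# each individual day still looks only mildly elevated.
target = 0.02      # expected (in-control) error rate
slack = 0.005      # drift we're willing to ignore day to day
threshold = 0.015  # alert level for the cumulative statistic

daily_error_rate = [0.020, 0.021, 0.019, 0.026, 0.028, 0.031, 0.033]

cusum, alert_day = 0.0, None
for day, rate in enumerate(daily_error_rate, start=1):
    # Accumulate only excess above target + slack; reset floor at zero.
    cusum = max(0.0, cusum + (rate - target - slack))
    if cusum > threshold and alert_day is None:
        alert_day = day
print(alert_day)  # 7: the drift trips the control before any single
                  # day's rate is dramatically out of line
```

The design choice is the point of the episode’s closing remark: a per-day limit would wait for one clearly bad day, while the cumulative statistic surfaces a slow deterioration early enough to intervene.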
Corson
Fantastic. Well, Cathy, again, many thanks for joining us today and sharing those insights and perspectives with us.
Cobey
Well, thank you, Myles, for the invitation. Always love to have this conversation with you.
Corson
And for our listeners, as always, thank you for listening. If you enjoyed this episode, please remember to subscribe to the series or leave a rating or review. If you’d like to find out more about the topics discussed, related links are posted on EY.com/betterfinance. I look forward to speaking with you on the next episode of The Better Finance Podcast, a series that explores the changing dynamics of the business world and what it means to the finance leaders of today and tomorrow.