Podcast transcript: How organizations can shape the world of Responsible AI

45 mins 15 secs | 1 November 2023

Susannah Streeter  

Hello and welcome to the EY and Microsoft Tech Directions podcast. I'm your host, Susannah Streeter, and in this episode, we are going to be taking a deep dive into the technology of the moment - which offers huge potential but is causing big concerns about the risks it poses to society. We're talking about Generative AI. It's a catch-all term for a powerful technology using algorithms to create new content which can be highly realistic and engaging.

Recent breakthroughs mean it has the potential to drastically change the way content creation is approached, whether in text, image, audio or video form. But the advances have brought alongside them a wave of worry – that this type of machine learning poses significant challenges and risks, such as misinformation, privacy violations, bias, and plenty of ethical dilemmas.

In this episode, we are going to focus on the potential for good that generative AI brings – and discuss what's needed for a responsible use of the technology while addressing some of these fears. We'll also find out about the very latest generative AI approaches, methodologies, and tools being developed to enhance services and products, with plenty of examples along the way. Also, we'll examine current regulations and how governance may change in the future – and where we could be with this technology decades from now.

They are big topics, and I am delighted to say I have two experts in the field – who are supremely qualified and experienced to provide you with valuable insights. But before I introduce them, please remember: conversations during this podcast should NOT be relied on as accounting, legal, investment, or other professional advice. Listeners must, of course, consult their own advisors.

I'm very pleased to introduce Nate Harris, General Manager and Global Leader for Data & AI at Microsoft, and Beatriz Sanz Saiz, EY's Global Consulting Data and AI Leader. A big welcome to both of you. Nate, where are you today?

Nate Harris  

Thank you, Susannah. I'm here in Atlanta, Georgia. Great to be with you, and looking forward to our session.

Streeter  

We have a lot to discuss. And also, please welcome Beatriz Sanz Saiz, EY's Global Consulting Data and AI Leader. Welcome Beatriz. Where are you today?

Beatriz Sanz Saiz  

Thank you. I am delighted to be here. I'm in Madrid in the middle of a heatwave. 

Streeter  

Oh, God, very, very warm. Well, I'm sure we've got lots of hot topics to debate. So let's get stuck in, and we can't start this chat without a chat about ChatGPT, of course, which has really captured the world's attention and imagination. Nate, did you expect that the release of ChatGPT would cause quite such a stir?

Harris  

Well, for sure, it has certainly been an interesting moment, and it's really a great question. A lot of people ask: what really caused the acceleration in adoption? What I mean by that is that ChatGPT became one of the fastest consumer products ever to reach 100 million users, and that happened in about two months. So why did that happen? What has really driven that growth and acceleration? I think there are two or three takeaways. The first learning is that the growth was driven by the user experience. Just think about the intuitive, natural language and the question-and-answer format - that's what enabled people to interact with this application in a whole different way. Now, we'll talk about generative AI much more later in the session, but I think the growth also stems from the fact that ChatGPT is the interface to a lot of this generative AI and to the models that allow people to interact in a different way. That's what's really driving growth.

Streeter  

Can you understand, though, the concerns given the tsunami of information about its potential use that's been unleashed?

Harris  

For sure. To summarize it a little: AI is creating what we might call unparalleled opportunities for business. At the same time, there are certainly legitimate concerns about the power of the technology - the potential for it to be used to cause harm rather than benefit. That's not surprising if you think about it. In this context, governments around the world are looking at how existing laws and regulations can be applied to AI, and they're considering what legal frameworks we might need. How do we ensure the right guardrails for responsible use of AI without limiting the technology that companies and governments want to use? There are a couple of things that are important to say here. From a Microsoft perspective, we've recently announced three AI-specific customer commitments to help our customers along the responsible AI journey. The first is that we'll share our learnings about developing and deploying AI responsibly. The second is that we are creating an AI assurance program. And the third is supporting you as a customer as you implement your own AI systems responsibly. Those are our commitments to customers to help them on this responsible AI journey.

Streeter  

Beatriz, let me bring you in. What is your take on this and the opportunities versus the challenges the world is facing, and also the responsibility of companies themselves to set their own guardrails, as Nate's been outlining?

Sanz Saiz  

Yes, I'm in no doubt that this is a discontinuity - a new form of intelligence that has been created and will push the market. It is not just about using this technology to solve problems in a different way; it's about solving different problems and addressing new opportunities. When it comes to opportunities, there's obviously the productivity angle, which at the moment every company, every enterprise is trying to address. But it is not only a productivity story; it's about the opportunity to create new business models. The market is in an experimentation phase. If we look at the innovation that has happened just in the last three months - the number of patents being created - all those patents will be commercialized in the next 12 to 18 months. We expect that discontinuity to become very, very visible in the next two to three years. That also gives the market a bit of time to address some of the challenges. I think there are three threads. The first is everybody talking about what the use cases are and how to create value. The second is responsible AI. You were talking about challenges, and a big portion of the market is now focusing on that topic, which is the good news. There are two worlds: the world that we all want to build, a world of integrity and values; and, unfortunately, the world of the internet, which also has a lot of misinformation - and some of these models are learning from that. The good news is, again, everybody's conscious of that, and companies like Microsoft are working towards embedding human rights into the design principles of these models. And the third thread is the sustainability impact. So, we will have a window of a couple of years where we will see that disruption really come into play.

Streeter  

So clearly, we are entering this disruptive phase right now. There is a huge amount of excitement, as you say, about the opportunities that could be presented. Nate, you talked earlier about the three main commitments that Microsoft has pledged. But how would you define responsible generative AI overall? And what is Microsoft doing to uphold that vision?

Harris  

It's a great question. It certainly starts with our commitment to making sure AI systems are developed responsibly and in ways that warrant people's trust. That's the piece Beatriz was talking about just earlier: it's important that we maintain and enable people's trust in these systems and in this technology. So what does that look like? What is Microsoft doing about it? It starts with the fact that we're committed to the advancement of AI overall - generative AI being one piece of it - guided by six ethical principles. I'll talk about them briefly, because I think they're important to Microsoft's approach to responsible AI. The first principle is fairness: AI systems should treat all people fairly. The second is reliability and safety. The third is privacy and security: AI systems should be secure, and they should respect privacy. As you hear these, you start to hear how they build trust. Inclusiveness is the fourth: AI systems should empower everyone and engage people in an inclusive way. Transparency certainly matters too, so the fifth is that AI systems should be understandable - how did a model, or the AI, come to a particular point of view? The final one is accountability: people should be accountable for AI systems. And I would just add one last item. While we could go much deeper on this, it refers back to the three customer commitments I mentioned earlier: we're also committed to sharing our progress on the responsible AI journey. We've released a Responsible AI Standard, which is publicly available and shares our progress on our own journey. It also provides a framework for you as a customer or an organization to operationalize the six ethical AI principles - how do you build them into your own organization and your own AI systems? How do you enable and operationalize them?

Streeter  

So, you've got the principles there, and you're really aiming for that. But of course, when you look at some of those initial ones - reliability and safety, for example - it goes back to what Beatriz was saying: there's a lot of misinformation out there. How do you square that circle, given that there is a lot of misinformation, but the principle is reliability and safety?

Harris  

That's right. In our responsible AI journey, one of the key principles for creating reliability - and we'll come back to it a little later when we talk about examples - is having a human in the loop. As these systems progress along the journey, it's important to have safeguards set up so that AI systems have the right human intervention, point of view, and perspective. That's also a key design piece of what I was talking about earlier - the Responsible AI Standard and principles - and of how you operationalize all this.

Streeter  

So, Beatriz, we've heard clearly about the principles from Microsoft. How do you think you can ensure that generative AI is used responsibly and accountably across companies as a whole?

Sanz Saiz  

Well, we obviously are playing a big role - and we are very keen to play a big role - because of our firm's connections between governments, regulators, enterprises, and citizens overall: these four constituencies. For us, it is key to play a role in education, closing the gap between those four constituencies. At the moment, we are seeing governments and regulators put in the effort. The first big one is the EU AI Act, which will set some standards for how all this applies to the enterprise. We are working towards establishing maturity assessments and new risk frameworks. We were challenging the principles of the previous generation of AI, because generative AI introduces new challenges around explainability: it's hard to explain, and therefore to be accountable for, one specific outcome, because the results may change as the algorithm is trained. So, it's very, very important to challenge those design principles and take them to the next level. We are working towards establishing a confidence index based on this maturity assessment, quite aligned with regulation - with the applications that regulators are identifying as high-risk, where the confidence index needs to be quite tight. So we are very active at the moment, as you can imagine, and working hand in hand with Microsoft on this matter.

Streeter  

But as you highlight there, Beatriz - it's a state of constant flux, really. So, how can an organization guard against abuse and unintended harm that may result?

Sanz Saiz  

That's exactly why, again, the regulation is going to be quite tight in those cases where human safety, physical or psychological, might be affected. And for that, again, we are working towards establishing those risk frameworks and assessments to help enterprises be prepared, and at least identify the areas of focus - because ultimately the entire enterprise, the entire world, will somehow be ruled by AI. I think the first way to help companies manage those risks is to help them identify the areas of focus, and that's where the maturity assessment, the confidence principles and the overarching governance are important. Ultimately, what we are advising companies is that as they define the roadmap of transformation, they also define the AI governance, so that the entire program can roll out with that governance in place.

Streeter  

And Nate, one of the key principles you explained earlier is to make sure systems are secure and respect privacy. So, what are some of the policies Microsoft adheres to in order to ensure customer data is protected?

Harris  

It's a great question. I'd like to start with the fact that Microsoft is a cloud you can trust - it's AI you can trust. We start with three fundamentals. First: your data is your data. What that means is Microsoft does not use your data. This is a consistent commitment across our Azure platform: across all of our Azure services and Microsoft services, your data is your data. We don't own your data. We don't have access to your data. Your data is protected, and only you have the right to your data. All of the system safeguards are built around that commitment and that principle. Second: your data is not used to train any of the OpenAI foundation models without your permission. Going back to what we were talking about earlier, we're really trying to build organizations and platforms that build trust - AI that you can trust. You don't want your data being used to train models unless you want that to happen as an intentional outcome. So that's the second principle: your data is not used to train the OpenAI foundation models. Third: your data is protected by the most comprehensive enterprise compliance and security controls. I'd like to use an example: our Azure OpenAI Service is specifically an Azure service. What do I mean by that? All of the enterprise compliance, security, auditing, and hardened controls that exist for our enterprise-grade Azure services also apply to Azure OpenAI. That's the piece around protection. From an industry perspective, we believe we have the most comprehensive enterprise compliance and security controls and privacy commitments to our customers. Those three pieces really ground us: that's how you have a Microsoft Cloud, and AI, that you can trust.

Streeter  

What are some of the most exciting responsible use cases you've come across?

Harris  

Well, there are a number. When you think about generative AI - and we should really start with AI in principle - the use cases that AI, and more specifically generative AI, can serve are far- and wide-ranging. It's actually a challenging question to pick the most exciting ones. We've seen everything from using AI to make farming more sustainable to using AI to help with other kinds of causes. But I'll use a particular business example. Think about something many of us can relate to: the process of shopping for a used car. It can feel overwhelming as you dig through countless specifications and reviews. There's a used car company in the US by the name of CarMax - the largest used car retailer in the US. One of the things they've done, which I think is fairly innovative, is make it easier for customers to find the most useful information as part of the shopping process. Think about using powerful AI language models to give a potential buyer a summarized view of all the customer reviews for every make, model and year of a vehicle. There are, I don't know, up to 5,000 different combinations of vehicles that CarMax sells, and an inventory of probably plus or minus 45,000 cars on hand. What this generative AI enables is the ability for the consumer to get a summary - easy-to-read takeaways of real customer reviews for a particular family car, tailored specifically to what the user is looking for: how comfortable the ride is, whether there is enough space to pack for weekend adventures. Think about the time it takes to do that on your own, sorting through all the reviews - and imagine if that were available to you by just asking a simple question. It goes back to the value and engagement we talked about with ChatGPT at the beginning. That's one example that points to making a user experience drastically different in terms of value and finding what people are looking for. And I'll say one last thing: the real key takeaway for AI, and for generative AI, is that it's driving a different level of efficiency and accuracy - helping people get things done, get them done quicker, and get them done more accurately.
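To make that summarization pattern concrete, here is a minimal sketch of how a review-summarization feature like the one Nate describes could be wired up, assuming the openai Python SDK (v1+) against an Azure OpenAI chat deployment. The endpoint, key, deployment name and the summarize_reviews helper are illustrative placeholders, not CarMax's actual implementation.

```python
# Minimal sketch: summarizing customer reviews for one vehicle make/model/year.
# Assumes the `openai` Python SDK (v1+) and an Azure OpenAI chat deployment;
# endpoint, key, and deployment name are placeholders, not CarMax's real setup.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

def summarize_reviews(reviews: list[str], vehicle: str) -> str:
    """Condense raw customer reviews into a short, shopper-friendly summary."""
    joined = "\n\n".join(reviews)
    response = client.chat.completions.create(
        model="gpt-4o",  # the Azure *deployment* name; adjust to your own
        messages=[
            {"role": "system",
             "content": "You summarize customer car reviews into a few "
                        "easy-to-read takeaways: comfort, space, reliability."},
            {"role": "user",
             "content": f"Vehicle: {vehicle}\n\nReviews:\n{joined}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_reviews(
    ["Great family car, tons of trunk space.",
     "Ride is smooth but the infotainment feels dated."],
    "2019 Honda CR-V",
))
```

The design point is the one Nate makes: the model does the tedious reading, and the shopper just asks a simple question.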

Streeter  

Seems like the possibilities are endless. It would be very useful for me - in fact, I've been trawling through the internet trying to find reviews for a particular camper van, so I do like the idea of shortcuts. But Beatriz, tell me: what do you think are the highest-value areas where organizations can really start leveraging generative AI?

Sanz Saiz  

The good news is that this is pervasive. It applies to all sectors and all areas - back office, middle office and front office. From a sector perspective, I just want to say I'm very, very excited about the disruption it is bringing to the world of education. Actually, we believe AI is an opportunity to reduce inequality, because what we are seeing is that the lowest-skilled workers are the ones benefiting the most in terms of productivity improvements. One of the areas where we are seeing the most rapid adoption is customer support - customer service and all kinds of front office work. I think contact centers will be completely reimagined in a very short period of time. It's interesting how rapidly these large language models have been embedded across the ERPs of these kinds of organizations, so corporate functions like finance, supply chain, and operations will benefit too. To me, again, it's productivity improvement - but this conversation is not just about solving problems in a different way; it's about addressing new opportunities. It is also an opportunity for new revenue generation: launching new channels, thinking differently about commercial models, thinking differently about customer service. It's a cost-cutting opportunity, but not just that - it's also a revenue growth opportunity.

Streeter  

I suppose that's really important to grasp, isn't it? Because, Nate, you talked about the car reviewer perhaps losing his job - at the moment, he's the one who trawls through all the information and the reviews to give a synopsis. He might not be needed. But other opportunities may well arise, perhaps positions you're already trying to create.

Harris  

One of the things I would say is that there's a really important principle here. You'll hear people ask whether this technology will replace people, and scenarios like that. The most important principle goes back to a couple of items. At the beginning of the session, I talked about how the real intention of Microsoft, and the real opportunity of the technology, is to help empower humans - not in any way to cause harm. Think about the ChatGPT example: Microsoft has partnered with a particular generative AI company called OpenAI, and in that partnership there are a couple of key commitments. The reason I say this is super important is that the commitments in the partnership are all about how we build AI technologies together, how we build an AI computing platform, and how we make AI more accessible, inclusive and helpful for the good of humanity. I think that's a really important and differentiating principle for Microsoft, and certainly for our partnership with OpenAI - the company that has created generative AI models such as GPT-3 and GPT-4. But I just want to come back to that principle because it's really important: we all have an opportunity, and we all have a responsibility, to safeguard the use of AI towards good and not towards harm - towards making humanity better.

Streeter  

Beatriz, what would you say are the least risky areas of the business where AI could be put to work?

Sanz Saiz  

The least risky areas? I would say the back office - everything that is not directly interacting with an end consumer, an end user, a human - because that's where the regulation will put its focus, right? AI will orchestrate across the entire enterprise, but that's where to put the focus from a risk perspective. There's a lot of talk about AI replacing jobs, etc. The reality is that, so far, AI has only created more jobs. It's a new economy. So again, it's not just about the risks, but also the opportunities.

Streeter  

How would you say that your clients are embracing the responsible use of AI? Are they really pushing forward on this?

Sanz Saiz  

Absolutely. This is now the number one conversation. This technology is here to stay, and I think clients - it has taken them a few months - are now conscious of that. They are rethinking the enterprise transformation. As we help them with that thinking, we are making sure that AI governance and responsible AI are considered as early-stage inputs to inform the entire transformation. What can be done and what cannot? What are the guardrails? What are the risk frameworks associated with the strategic transformation program? So there's a lot of interest at the moment.

Streeter  

Certainly sounds like it. And Nate, what would you say the best way of mitigating the risk of misuse is for a company heading out on this journey?

Harris  

If you think about the power of AI - I'll use the GPT generative models as an example - they're pre-trained on a vast amount of internet text. I think Beatriz was talking about this earlier. For sure, that comes with a risk of generating harmful or unintended results. So, what is Microsoft doing to help? We've made significant investments to help guard against abuse and unintended harm. That includes requiring applicants, for example, to show well-defined use cases and to incorporate Microsoft's principles around responsible AI use. I used the example of CarMax - there are many different ways, and in fact I'd love to talk about more kinds of use cases - but one important element I talked about with CarMax was putting a human in the loop. So how is Microsoft advancing the integration of responsible AI into its systems? We have now integrated into the Azure OpenAI Service a responsible AI system that helps filter out content that is sexual, violent, or hateful, or related to self-harm, for example - going back to those commitments and principles around responsible AI. You'll see us continue to add filters and customization features as we work with customers along this journey, through preview periods and in our generally available services. You'll continue to see this filtering system identify more and more patterns, and we'll work directly with customers to investigate, respond to, or block everything from abusive scenarios onwards. We have incident response teams available to quickly update filters as language continues to evolve - I think that's a super important item. We're also operationalizing this: we've added robust guidance on how you design a user interface, with guidelines and patterns that enable transparency - describing the limits, the intended use cases, and the characteristics of the service. We're thinking very carefully about the tools we create and how we help users, organizations, and customers around the world use them to develop responsible AI applications in their various scenarios, systems and use cases.
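As a rough illustration of how an application might combine that built-in content filtering with the human-in-the-loop principle Nate describes, here is a hedged sketch. It relies on two behaviors of the Azure OpenAI Service - filtered prompts are rejected with an error, and filtered completions are flagged via finish_reason - while the escalate_to_reviewer hook is a hypothetical stand-in for an organization's own review workflow, not a Microsoft API.

```python
# Sketch: honoring Azure OpenAI's content filtering with a human in the loop.
# Client setup mirrors the earlier example; `escalate_to_reviewer` is a
# hypothetical hook into your own incident/review workflow, not a Microsoft API.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

def escalate_to_reviewer(prompt: str, reason: str) -> str:
    # Placeholder: queue the case for a human moderator and return a safe reply.
    print(f"Escalated for human review ({reason}): {prompt!r}")
    return "This request needs a human review before we can respond."

def answer_safely(prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4o",  # your deployment name
            messages=[{"role": "user", "content": prompt}],
        )
    except BadRequestError:
        # Azure OpenAI rejects prompts its content filter flags as harmful.
        return escalate_to_reviewer(prompt, "prompt filtered")

    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The model's *output* was filtered mid-generation.
        return escalate_to_reviewer(prompt, "completion filtered")
    return choice.message.content
```

The point of the sketch is that the filter is a backstop, not the whole answer: flagged cases route to a person rather than silently failing.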

Streeter  

And obviously, regulators - and what happens in the regulatory sphere - will be crucial right across the board. Regulators, to some extent, are really behind the curve, aren't they, given the rapid advances, and you mentioned earlier the new regulations coming in in Europe. But could new rules pose difficulties for companies who have adopted, or are adopting, these new technologies? How do you think they can ensure future projects are not disrupted by a change in legislation?

Sanz Saiz  

This is a phenomenal question. This technology is in its infancy. Talking about assuring things or really committing to things today - I would not be able to do that, and I don't think we should, because this technology of huge power is still in its infancy, and the regulation will evolve. There is a gap between software developers, regulators, institutions and citizens. I think it's about bringing these parties together at the earliest stages, so that everyone provides input into the future regulation early on. So it's more about the journey than about committing to how we can avoid something. It's a journey.

Streeter  

And Nate, how do you see governance developing in this space? How can companies future-proof themselves?

Harris  

I always wrestle with this definition of future-proof. The reason I struggle a little - and I think it's fair to say we're all thinking about this - is the rapid evolution of the technology and of its adoption; governance and all these pieces will really depend on how the future unfolds. But I think there are three things for an organization or customer asking, "How do I best prepare myself for, let's call it, the era of AI?" The first is how you work with Microsoft. Our strategy and approach is to infuse generative AI into every Microsoft service and product across the Microsoft Cloud. We want to bring that generative AI value to customers, and you'll hear Microsoft talk about copilots - for example, in Microsoft 365. In an office scenario where you want to create a PowerPoint, how do you create a first version rapidly, with the parameters you want, so that you're more efficient and your time-to-value is faster? So the first approach is embracing copilots as part of Microsoft's products and services. The second is deep engagement, and certainly co-innovation, in building generative AI solutions on the Azure OpenAI Service. What I mean is: if you're an organization with a particular line-of-business application, and you want to bring generative AI into it - something very specific to your industry or domain - the second strategy is asking how you bring generative AI into your mission-critical and line-of-business scenarios in the best way, to get breakthrough outcomes. The third - and it's one of the areas I'm most excited about, where I think there's so much opportunity across the industry - is the partner ecosystem, or the technology ecosystem in general. We're using our AI platform to help the partner ecosystem infuse the Azure OpenAI Service into the offerings they provide for customers. Take examples from healthcare to finance, where domain experts have created domain-specific applications that are highly tuned to the regulations and requirements of particular industries. They also want the power of generative AI to help them. So if customers step back and take advantage of these three pillars, these three approaches, I think that's a great way to put yourself in a position of strength going forward.

Streeter  

And Beatriz, what about you? What uses do you see as having the most potential?

Sanz Saiz  

Before I answer that question, there are probably three things that need to happen - or that I would encourage - in order to capture the opportunity in the enterprise. The first is education, education and education. For executives and communities, this is a technology that requires people to be educated - an open mindset and education. Second is governance. I mentioned before that if these risk frameworks are considered early in the journey, then the guardrails are established from the beginning. And third, I mentioned that this is about solving new problems. It is creativity - creativity to the limit. With those three ingredients, we are moving towards more and more disintermediated economies, and so there's an opportunity to reinvent the interaction with clients. I think the potential is very much driven by sector. The potential in health has no limit; in law, education, manufacturing, sustainable energy - I think the opportunities are limitless.

Streeter  

Certainly, it does seem as though there are limitless opportunities right now. So Nate, what uses still in their nascent stage are you most excited about?

Harris  

It's such an exciting time, just as you were referencing. If I reflect on what Bea is talking about, and on what we see in the market among customers and organizations, there are really four capabilities that we see generative AI - and specifically Azure OpenAI - powering. Beatriz talked about a couple of them. You see use cases from content generation to summarization. There are also code generation scenarios - think about converting natural language into code - and then semantic search. But the ones I'm most excited about are what we call multiple-model use cases. Beatriz touched on this earlier when she talked about change and transformation in the industry. Think about end-to-end call center analytics: how do we classify sentiment and summarize calls to create a better and faster experience for consumers? Or hyper-personalization - using timely data and summarization to personalize your interactions with people. A final one is business process automation. Think about how tiresome and tedious it can be to search through tons of structured or unstructured documentation, or to generate the code for what we call data queries. Automation has the ability to get those answers, to reason over that data faster, and then make a decision faster. Those are the multiple-model use cases. The trend we're seeing most right now is that once people get through their first use case, they quickly move into these multiple-model use cases - and that is really changing what I would call the value chain of interaction and engagement with customers, in getting to outcomes.
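To make the multiple-model pattern concrete, here is a minimal sketch of the call-center flow Nate outlines - one model call classifying sentiment, another summarizing the transcript - again assuming the openai Python SDK against an Azure OpenAI deployment. The prompts, deployment name and analyze_call helper are illustrative assumptions, not a documented pipeline.

```python
# Sketch of a "multiple-model" pipeline: sentiment classification followed by
# summarization of a call transcript. Prompts and deployment name are
# illustrative; a production system would add evaluation and human review.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

def ask(system: str, user: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content

def analyze_call(transcript: str) -> dict:
    # First model call: classify the customer's sentiment.
    sentiment = ask(
        "Classify the customer's sentiment as positive, neutral, or negative. "
        "Answer with one word.", transcript)
    # Second model call: summarize the call for agent handover.
    summary = ask(
        "Summarize this support call in two sentences for an agent handover.",
        transcript)
    return {"sentiment": sentiment.strip().lower(), "summary": summary}

print(analyze_call(
    "Customer: My order arrived late twice. Agent: I'm sorry - I've issued "
    "a credit and flagged your address for priority shipping."
))
```

Chaining the calls is what makes it "multiple-model": each step stays simple, and the combined output feeds the downstream workflow.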

Streeter  

Clearly, lots of opportunity ahead. Now I'd like to give you the pretty hard task of peering into your crystal balls and looking at the horizon we're facing. How do you see responsible generative AI developing over the next few decades? Bea, do you think we will be able to balance the benefits and harms of AI for individuals and communities?

Sanz Saiz  

Yes - I'm very positive by nature. This technology has only been highly visible in the market since February or March, and even the number of debates and discussions around responsible AI is a very positive sign. So I'm very confident. We will probably have a couple of years where we see that discontinuity really start to happen quite visibly across industries, and that gives the regulators, the big tech companies and governments the space to regulate and to put these guardrails in place. I'm very positive, and I think we will definitely see the deployment of responsible AI alongside the big transformation and the big disruption that is coming in the next few years.

Streeter  

Well, good to hear, Bea. And how do you see the horizon ahead, Nate?

Harris  

In this particular case, I'll refer to a blog post. There's a gentleman at Microsoft named Brad Smith - he's our Vice Chair and President - and he was recently talking about how we advance AI governance, for example, in Europe and internationally. What does that look like? I echo Bea's positivity, and share her hope and belief in the technology. We're in what we at Microsoft - and I personally - consider the era of AI. When I think about what really matters, there are a couple of key steps, and then I'll say a little more about what I see on the technology side going forward. This particular blog post was from June 29, and in it he lays out a five-point blueprint for governing AI. It talks, from a government, framework and policy perspective, about how we build the right responsible direction for this technology. Some examples from the five-point plan: we need to implement and build new government-led AI safety frameworks - that's a really important policy item, and we need government to help lead it. We need to require safety brakes for AI systems that control critical infrastructure - I think that's an imperative, and it goes back to having safety brakes in place to earn trust. We need to develop a broader legal and regulatory framework. We need to promote more transparency and ensure academic and public access to AI - access to the technology is a really important part of that inclusive principle. And maybe the last piece: we need to look at new public-private partnerships, and at how we can use AI as an effective tool to address a lot of the societal changes we know are coming with this new technology. So, to your crystal ball question about responsible AI, that's the framework I see for leading the way. Technology-wise, we'll continue to see these AI technologies develop and advance. We saw it in the recent model progressions from GPT-2 and GPT-3 to GPT-4, with accuracy rates accelerating at a different level in a number of areas. We'll see the models continue to get better and move into special-purpose scenarios. That will require an AI computing platform that can power and enable this technology in the most comprehensive way. And the last item is advancing AI research and making AI more accessible - the democratization of AI, so that everyone can benefit. It goes back to that direction of humanity. So, while these aren't necessarily end statements, they are, to your question, what I think we need to do and the trends we'll see.

Streeter  

Well, thank you very much for ending on a call to action. Clearly, a lot of work still needs to be done to really harness the opportunities available. Thank you both for a really fascinating discussion - super useful insights on the responsible use of generative AI and what the future might hold. Thank you so much for your time, Bea and Nate.

Harris  

It's been a pleasure. Thank you, Susannah. Thank you, Bea.

Sanz Saiz  

Thank you, Nate.

Streeter  

And a quick note from the legal team. The views of third parties set out in this podcast are not necessarily the views of the global EY organization nor its member firms. Moreover, they should be seen in the context of the time in which they were made. I'm Susannah Streeter. I hope you'll join me again for the next edition of the EY and Microsoft Tech Directions podcast. Together, EY and Microsoft empower organizations to create exceptional experiences that help the world work better and achieve more.