EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
How does the growing influence of artificial intelligence (AI) on the law intersect with ethical obligations such as ensuring transparency in decision-making and protecting data privacy? Join host Jeff Saviano in a thought-provoking conversation with Dazza Greenwood, an expert on AI ethics and founder of CIVICS.com, as they explore the complex relationship between AI and the law in this episode of Better Innovation.
Drawing from Dazza's expertise as a researcher at MIT Media Lab and Lecturer at MIT Connection Science, the discussion explores the mission of the MIT Task Force on Responsible AI, chaired by Dazza, and the ensuing guiding principles (which can be found at law.MIT.edu). These principles provide a crucial framework for navigating the ethical landscape of AI and the law.
Tune in as Jeff and Dazza discuss emerging trends in AI ethics within the legal industry and the importance of individual and organizational actions in promoting responsible AI applications, and offer invaluable insights for leaders navigating the ethical complexities of AI.
Key takeaways:
Generative AI (GenAI) has tremendous transformative potential in providing greater access to justice and empowering ordinary citizens to better navigate important legal matters.
The legal industry must remain vigilant in addressing ethical issues arising from AI applications, emphasizing the importance of principles in ethical decision-making and associated actions.
Prompt engineering, the practice of composing effective GenAI queries, plays a critical role in integrating GenAI effectively in legal practices.
Competence in GenAI is becoming increasingly essential for lawyers to effectively serve their clients.
Lawyers have a duty to maintain human oversight over the development and deployment of AI applications in the law. This includes a responsibility for the quality and appropriateness of AI systems used in legal work.
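The prompt-engineering takeaway above can be made concrete with a small sketch. The role/context/task/constraints structure below is a common pattern for composing effective GenAI legal queries; the function name, field names, and wording are illustrative assumptions for this article, not a template from the task force or the episode.

```python
# Illustrative sketch of a structured legal prompt builder.
# The role/context/task/constraints layout is a common prompt-engineering
# pattern; the specific fields and wording here are hypothetical.

def build_legal_prompt(role: str, context: str, task: str,
                       constraints: list[str]) -> str:
    """Compose a structured GenAI query for a legal question."""
    lines = [
        f"You are acting as {role}.",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    # The closing instruction reflects the human-oversight takeaway:
    # lawyers must verify AI output before relying on it.
    lines.append("Flag any statement that should be verified against "
                 "primary legal sources before use.")
    return "\n".join(lines)

prompt = build_legal_prompt(
    role="a paralegal summarizing a contract clause",
    context="A residential lease with a disputed notice period.",
    task="Summarize the tenant's notice obligations in plain English.",
    constraints=["Cite the clause number for each point.",
                 "Do not give a legal conclusion."],
)
print(prompt)
```

Structuring queries this way tends to produce more focused, checkable answers than a single unstructured question, which is the point of treating prompt engineering as a "user manual" for the technology.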
For your convenience, a full-text transcript of this podcast is also available.
Intro
Meet the people behind today’s leading innovations – from the boardroom to the halls of government. Join Jeff Saviano, a global innovation leader at EY, to hear from the trailblazers reshaping our world. You’re listening to Better Innovation.
Jeff Saviano
Hey Better Innovation, it's Jeff. As generative AI proliferates all around the world, it is shining a bright light on some profound ethical questions. Leaders and organizations are grappling with these complexities. How do you navigate the intricate web of ethical implications that comes with deploying this powerful technology? Today we are going to explore these dilemmas and how responsible AI principles can help guide us along the way. Joining us in the studio for what was such an insightful conversation is my friend Dazza Greenwood. Dazza is a true thought leader in the field. He is the founder of CIVICS.com, his consulting firm specializing in legal tech and tech strategy. Dazza is also a researcher at the MIT Media Lab and a lecturer at MIT Connection Science. He is at the forefront of advancing computational law and thinking about generative AI for legal applications. If you want to learn more, you can go to law.mit.edu to read all about the amazing work Dazza is doing at MIT. Throughout our conversation, you will hear us dig into how generative AI affects the law, and into a task force that Dazza chaired and that I was so proud to participate in. Here we go, I couldn't wait for this conversation today, so let's dig right in. You are going to get valuable insights from my friend Dazza Greenwood.
Dazza, welcome to the show.
Dazza Greenwood
Thanks so much for having me, Jeff. You know, I'm a long-time listener, first time caller.
Saviano
I have to say, that resonates with me. I am a long-time radio talk show listener. I can't sleep without a radio under my pillow at night. So, I love that you are a listener. Thank you. We appreciate that.
Greenwood
I always look forward to it. Better Innovation is one of the podcasts that I not only mark as up next, but also download just in case I'm on an airplane or something, because, you know, I've got to hear it. It's so good. I'm so grateful to finally be on.
Saviano
We have a great team and it's such a dream team effort, but we're going to cut that and use it in all of our promotional materials. So, thank you. Appreciate that, and this has been long overdue, getting my friend Dazza on the show; so much for us to talk about. You and I have had dozens and dozens, probably hundreds of conversations at this point about legal tech and about AI and responsibility and ethics. And this will be a wonderful conversation. So, I appreciate it. Before we dive in, provide our listeners with a brief overview of who Dazza is.
Greenwood
Well, thank you for that gracious introduction. And I agree, we're well-situated to have a deep dive into this topic. So, who I am is, fundamentally, a technologist with a deep passion for law. I started out my career basically doing technology in the political context, for campaigns and then later as a legislative aide. I would do the work in the arena of campaigns and political action committees, and that legislative aide work, that sort of thing. But the way that I would do it, I fundamentally was an applied technologist. So, everything I could script, I would script; if I could use a database, I would use a database. I was an early adopter of, you know, kind of page layout tools for when it was time to leaflet, and all this was pretty new in the late eighties and early nineties. So, actually having some capability with technology was a game changer. It was a real advantage in terms of being able to respond to things in a very agile way. I was always interested in it, but I also recognized how important it was. And later, one thing I noticed from using technology in political and quasi-legal and business contexts was there were always legal issues that came up about what was permitted and what was not permitted, and how a law was interpreted for what we were trying to do, campaign finance, on and on and on. And I found that I needed to get lawyers' advice, because only lawyers are permitted to know the answer to what is the law and how does it apply here. And that I found uncomfortable, in particular when more than one lawyer would give me a different answer. At a certain point, I just wanted to put my foot down and say, I want to know myself what the law is. So I went to Suffolk University Law School, and they told me exactly what the law was, and they explained to me how to learn what the law was.
I felt it was a very good, practical education in law. It scratched the itch, and I'm so happy now that I can understand and apply it. And I did practice for a time. But fundamentally, after several years of practice as in-house counsel, I went back to technology, which is where I've remained to this day, from around 1999, where I run a consulting company in the technology and innovation space called Civics.com. We gravitate toward areas where there's some heavily regulated kind of transaction, or somehow there's a legal wrinkle in the rollout of the technology in order to achieve a business goal. So that's the sweet spot. And I'm really fortunate to also be affiliated, like you, with MIT, where I'm a researcher in the MIT Media Lab and in Connection Science. And my area of focus there is kind of the same as my focus in regular life: it's law and technology. We call it computational law. The big difference is that we can think a thought all the way through on its merits, and we don't necessarily have to conform ourselves to the deliverables in a contract or the constraints of, you know, quarterly profits and things like that, which are critical for work. For the merits of an idea, it's great to have another outlet in academia.
Saviano
And to go back, I appreciate you telling our audience about yourself, Dazza. Going back to one of the first things you said about your early days in legal tech, I feel like you were in legal tech before there was legal tech, before they called it legal tech. And that's what's so cool about your background: how you have combined that passion for technology with a passion for the law, and the benefits that come from applying technology to the law, for example, providing greater access to justice. And we share that connection, frankly, the devotion of the Connection Science family, right? Sandy Pentland's Connection Science team; that's how we met, when you were in Cambridge. Then you abandoned me and you moved across the country, but you come back quite a bit and are still very active within the MIT Connection Science community, and that's meant a lot to us.
I know it means a lot to you too, doesn't it?
Greenwood
It really does. And you know, it's true that I've physically moved my residence from Cambridge back to Oakland, where I was born, in the Bay Area. But, you know, I don't know about abandoned.
Saviano
That may be a little strong.
Greenwood
You know, another way to look at it is, we still relate to each other, but in a different way, really. In some sense, it's sort of extended the facets of our collaboration.
Saviano
True.
Greenwood
Also, I've created an outpost here in the Bay Area, where things are off the hook with generative AI, and where I hope you'll come and visit more often so that we can have some other kinds of adventures than are possible in the Northeast.
Saviano
I will. Yeah, I will. I appreciate that. I do. And I also appreciate, full disclosure, that you've been so helpful to us here at EY. You are, and you will never say this, the best prompt engineer that I've met, with your ability to create unique legal prompts. And when we started our generative AI training at EY, we decided the first place we wanted to train thousands of employees was to be better prompt engineers. And I loved the phrasing that you gave us, Dazza: it's a user manual for how to use this technology. So, thank you. You came in and taught a few thousand people how to ask better questions. And isn't that really what it's about, deriving utility from systems like this? So, we appreciate that.
Greenwood
Thank you so much for recognizing that. You know, it means a lot to me. We did spend some months together in the EY organization, with me in the role of assisting with basically propagating the fundamental skills and the kind of extended capabilities of prompt engineering across the workforce. And I learned so much about what the capabilities of this technology are, and I could see how it played out across the enterprise. EY, you're into so many interesting things. The legal part I sort of felt like I mostly understood; tax, fascinating, and obviously at its core that's where you were at the time; audit, a whole other world; consulting, you know, really interesting strategy and transactions stuff. You're into so many different things, and it was fascinating to see how some things are fundamental, like utility prompts that summarize and extract, and then certain things where what it makes possible in the context of lines of business is limited almost only by your imagination and the context that you exist within. And that's what I really learned from working with you all.
Saviano
And we've mentioned this team at MIT that you lead called computational law. Many in our audience will know that MIT does not have a law school, and with a computational law team within the Media Lab, some may be scratching their heads a bit: well, what's the connection to a school like MIT that doesn't have a law school? Explain the mission of computational law, Dazza.
Greenwood
Thank you. It's true. MIT does not have an ABA-accredited law school, or any law school, and that is very deliberate. You know, from time to time it's come up at MIT over the decades. I know Joe Swanson, who's a kind of informal historian of MIT, and he can regale you with stories of times over the decades when a law school has maybe become available for acquisition, like one in New Hampshire, and other times when Boston-area law schools have undergone some kind of material change and there's been an opportunity to add a law school to MIT relatively easily. And the answer has always been no. And that's because MIT is not that kind of place. MIT is an engineering school, proud and loud. It's science, it's technology, it's engineering, and it has some humanities. But that's not the point of MIT; it's not the center of gravity. And so, very much in line with that, the stuff that I do at MIT under this banner of law.MIT.edu, which is where the computational law research that we've talked about lives, isn't the sort of thing that would happen at a law school. It's the sort of thing that would happen at an engineering-first school of science and engineering.
Saviano
That's a big difference, engineering first. That's a really good point. If you parachuted that into a law school, it would look dramatically different. I think that's such an interesting way to put it, Dazza.
Greenwood
Thank you. I appreciate that. And, you know, we used to call it legal engineering, in terms of what our approach was. When I was practicing law, I wanted to see how far I could get in doing things similar to the technologists and the engineers in the enterprise, whose workflows were awesome. You know, it was just so thoughtful: they could measure everything, they could reuse components, it was just terrific, and they had proper version control across huge teams in GitHub. Whereas, you know, meanwhile, I'm stuck in what felt like the Middle Ages, on conference calls where the first 20 minutes across five lawyers would be, you know, what version of Microsoft Word are we in, and then unfolding all the way down through it. We couldn't do the information management and the workflow and the engineered processes as well as others could. And that's what I always tried to do in my own practice. And that's partly what drove me out of the law, honestly. In the nineties there was a recalcitrance; I can't say law is at the leading edge of technology adoption, though sometimes I think that's changing now. And I went to technology. I learned a lot of great things. I really immersed myself in what I would consider best practices and learned proper systems engineering and everything like that. And when I started doing the research at MIT, it was still ahead of the curve. It was still a little bit of a curiosity in the law; people in engineering loved it and they got it. Now, finally, I feel like the sun is rising and it's a new day. It's a new possibility for the proper adoption and use of technology as part of the process and the practice of law, and generative AI has blown the ceiling off what was already a supple environment.
Saviano
I love the phrasing, Dazza. Keep going with it. Paint a picture of the current AI landscape for our listeners today.
As we were getting ready for today's show, you said something so interesting you were touching on the profound shifts that are happening, not just in our economy but across society at large, all connected to the emergence of generative AI. Elaborate on that, please, Dazza, why is this such an important moment in time in terms of AI and technology adoption?
Greenwood
So, you know, there are certain kinds of things that are relatively easy to apply technology to in law, and then there were certain kinds of things, up until this moment, that have been just difficult or more like impossible. Technology just didn't do that. So, when I would work in contracts, which was a big part of what I've done in my life, some things were great: if you had sort of a standard contract with a click-through, you could absolutely automate certain kinds of processes and treat it like any other kind of data-driven system. But if you were looking to do some sort of legal analysis, or something that involved a more fuzzy application of law to facts, that required some sort of creativity, being able to look at things that you haven't scripted in a definitive, deterministic, almost pure-logic way through some code base, it just didn't do that. It didn't know how to deal with a situation it hadn't already been programmed to address. Well, that's a heck of a lot of law. That's a lot of what happens in the brains of lawyers when we're practicing, of course, at the top of our license, where we're counseling clients and we're trying to see what defenses might apply or how to structure a transaction. This sort of creative work that requires expertise, the automation, you know, kind of deterministic classical programs, just didn't know how to do that. So, it was a different class of legal work that we could apply technology to. Now, what's different, to your question, is the current class of generative AI, think ChatGPT, Claude, Bard, just by way of examples, and there are many open-source examples as well. These transformer-based models that are instruction-trained and everything else have demonstrably surpassed some invisible barrier that we could never get past before.
And now we can see the use of humanlike language, humanlike reasoning, and the capability of generating options in new ways, where you actually can begin to apply law to facts. You know, we did some of that in the prompt engineering. And there's a whole industry in legal tech rising up now, with all sorts of apps and products and services that are exploring this. So, now at last we've opened up the top tier of use cases: what law practice and legal processes are is now capable of being addressed by technology for the first time. And to me, it's like a candy shop. These are things I've dreamed about for decades, that I could sort of entertain in science fiction and speculative fiction. Not anymore. I wasn't expecting them at the end of an API.
Saviano
And it's exciting to think about. You and I have had so many conversations about the dozens and dozens, hundreds of applications to the law. How can you better enable lawyers? How can you make lawyers better at what they do? The law, of course, is complicated. And we have tomes, we have tomes of the law, across legislation and case law and other administrative materials, and many, many applications. But frankly, it's the access to the law, and what generative AI can mean for everyday citizens who can't afford lawyers. I'm always stunned by the statistics of how few people can actually access lawyers when they're dealing with issues that are incredibly important to them and their families; equal access to justice in this country is still a problem. Are you as optimistic as I am that these great tools can be helpful to clarify and bring meaning of the law to ordinary citizens in the world, Dazza?
Greenwood
So, I take it from the form of your question that you are optimistic.
Saviano
I am optimistic. I am very much so.
Greenwood
Are you? Yeah. Are you very optimistic? Are you cautiously optimistic? Or, like, where are you at?
Saviano
I'm, that's okay. I'm very optimistic, because I feel that there are enough legal technologists who are starting to address it. I think there have been some interesting pilots in the world. And because of the ubiquity of generative AI tools, just through some very basic, not Dazza-like, but very basic prompt engineering, I'm already hearing stories of ordinary citizens who are able to generate answers to some really troubling issues that they have, just through some ordinary prompts. So, I think it's already happening. So yeah, I'm very optimistic.
Greenwood
Okay, great. Thanks for the clarification. It's also good to know where you're at. We hadn't talked about that exactly before. So, the question, am I as optimistic as you about the prospects for achieving access to justice?
Saviano
Yes.
Greenwood
Based on the advent of this technology. And I have to say, I wish I could say yes, but I'd have to weigh in at something more like somewhat optimistic, or on a bad day maybe cautiously optimistic, is where I would put my, like, Moody's optimism rating on the technology. And that's because, on the one hand, I agree with everything you've said, and it's just manifestly obvious: if you just watch a pro se person try to figure out how to go through, you know, a divorce proceeding or something else, this technology unleashes things that were hitherto impossible. And so, if the benchmark is what's happening to people that are really disenfranchised now, how they're getting into the court system without legal counsel, doing their best with the language and understanding that they have as they try to get into the system, it's way better. However, when I think about the future, I don't only measure it based on the difference between today and the future. I try to look at it from a further-out, zoomed-out view and say, okay, how are things coming together, and what will be the new dynamics and balance of the system that will exist in the future? Not only will people be better than they are now at being able to articulate things, just as we're better at lots of things than we could do in the 1800s. But meanwhile, we don't measure our ability to turn the lights on just against that. We measure it based on a whole bunch of factors in terms of, you know, energy and the economy and what we're doing and labor. And so, what I see a need for, and the reason I'm cautious, is that the judicial system and the counterparties and everything else are also moving targets. And there are going to be new sorts of rules that are needed, which could make things better or worse in the judiciary when it comes to access to justice.
Some of the first reactions, and I use the word reaction very deliberately here, are to, it almost feels like, ban the technology, or to require a kind of lawyer-like disclosure and fact-checking of everything, even by pro se clients. I've been seeing on LinkedIn that people have been citing different judicial guidance and rules. And I sort of wonder, if we keep going in that direction, and you didn't have access to a lawyer to start with, and you can't necessarily evaluate the output or go and find the case that it's citing or anything like that, then, you know, are people really going to be better off? And so, I think what we have to keep an eye on is how the rest of the system is adapting to the technology, not just the people that are currently disenfranchised, and make sure that we do it in a way where we can spread the benefits of the technology more evenly than benefits are spread right now.
Saviano
It's true. And you raise a really good point: when you think about the application within the judicial system, there are a lot of checks and balances. And of course, we had the early case of hallucinations that has everybody on edge, because sometimes this technology just makes stuff up. That happened to a lawyer, and that story has been famously told now many times. So, I appreciate the caution within the judicial system. Although, even to take a step back to the most basic applications: just think how many times an average citizen will think of the question, do I have a valid claim? Something happened in my life; is there a case here if I were to bring a legal action? Or I need a document; somebody in my family needs a will and we can't afford an attorney. And so, getting something that is generated from a tool is better than nothing. Just the very basics, the access to justice through access to the knowledge, that's what's exciting to me, and it's affordable, because there are free versions that are amazing. And it's not just low code, it's no code. I've probably said this on the show. I love the trivia question: what was the hottest programming language in 2023? English. English! Because you don't need to understand anything more advanced than that. So, you raise good points. Okay, let's pivot a bit to responsible AI.
Greenwood
May I say one other thing about that.
Saviano
Yes, please.
Greenwood
Number one, you know, I would never trade this capability for anything else. Although there are going to be challenges in the future for access to justice, this is overall a game changer. And it's long overdue. It's critically needed. It's very beneficial overall, in my view. The other thing is, just looking at access to justice for people that don't have enough access now, and looking at it through that lens: the root cause of a lot of the problems here, which really plays out with generative AI in particular but has been plaguing people for centuries now, in the United States in particular, is that we don't have access to law. And access to law precedes access to justice.
Saviano
You mean knowledge, knowledge of the law.
Greenwood
Sure. Knowledge is sort of big, but I just mean the literal data itself. Then we can have knowledge and wisdom and so on; those are different words. But what I mean is just very rudimentary and literal.
Saviano
The fundamentals.
Greenwood
The fundamentals. In order to search cases and to search statutes and to see when things have changed, fundamentally we need to go to a Westlaw or to a Lexis or to a vLex. It's not really possible to do that otherwise, because the law itself, the written law, has fundamentally been privatized in terms of it being usable. And yet we're held to know the law; ignorance of the law is no excuse. If the law were accessible as data, as it should be, then we could feed it into models that could do a better job working with the law, without worrying about the copyrights or anything else. People could quickly fact-check things that came out, in ways that you can't right now if you're not a lawyer, etc. So, I want to ascribe a serious, troubling challenge, in terms of how access to justice and a bunch of the rest of law and the economy are going to play out, to a root problem, which is that we don't have access to law. That's the quintessential public record, and that has to be fixed.
Saviano
In a user-friendly way, in an interface where, again, back to the word access, it is accessible, and you don't have to spend an inordinate amount of money on the privatized versions of it. And I think part of that is the obligation of government to make those treatises and the law available, but also that this technology will serve as the medium for ordinary citizens to access it in a way that's understandable. Because it's not as though, if you provide access to a ten-page tax statute, an ordinary citizen is going to understand it. But these tools help in that understanding. Did I say that right? Would you agree with that?
Greenwood
I agree, yeah, I agree completely. And the two things to keep in mind are, number one, for regular people to be able to get the correct tax code into the prompt as part of the context, you need access to it. Number two, when you're getting something back, if you're showing up at, you know, probate court or whatever it is, and it's telling you that you have a valid claim and here are the reasons why, you have to be able to cite-check that, you know, and to confirm it somehow. So, these are examples of things that aren't possible now, that would be possible if, and I really hope when, the law is set free, when we have access to the law.
Saviano
Well said! I love it: when the law is set free. But these tools are powerful, and we need guardrails and responsible use of these tools. Let's pivot to responsible use of AI. Let's talk about ethical, responsible AI principles. I was very pleased to join a task force that you formed, and chaired, at MIT on this topic of responsible AI for professionals. We're both lawyers, and there was a heavy legal emphasis. Talk about the mission of the task force, Dazza.
Greenwood
Thank you. And it was so wonderful having you on that task force. You really helped make it the success that it became. The MIT Task Force on Responsible Use of Generative AI for Law and Legal Processes, that's the official name of it. The mission was to help create a framework that legal professionals could rely on to integrate generative AI into their practice responsibly. So, one of the assumptions there is that this is good technology, it's useful, it's a good idea, and we recommend lawyers becoming good at it and using it as part of law practice. That's sort of a prior assumption, which was controversial when we started the task force. You know, there were quite a few naysayers saying, like, this is totally inappropriate for law. And for every...
Saviano
There was. Yeah, absolutely.
Greenwood
But so, I think it doesn't go without saying that we were beyond that at that time. And at the very same moment, it has limits, it has risks, it has flaws. And so, we have to understand those and the implications for our continuing ethical and other canons of responsible use as professionals, the things that we're held to. And so, we wanted to surface those, to understand them, so that we could keep them in mind and be able to better utilize the technology and get the benefits of it without risking unintended or negative, potentially catastrophic outcomes. Real fear comes from ignorance, and what we really wanted to do was shed light on what the sources of risk were, what the harms could be, and then to bring it inside a framework for responsible use, so that we could fundamentally engineer how best to leverage this new technology.
Saviano
And why did you decide to form it? We've talked a bit about this off air: there was a gap in the world that you were trying to address. And this was some time ago, but not that long ago, and it's advancing very, very quickly. But what was the gap that you saw that led you to say, it's time, you have to step up? We all appreciate your leadership, appreciate you taking that first step; the first step is so important. And I was really quite pleased and humbled to participate. What did you see in the world that led you to say, I want to form this task force?
Greenwood
Thanks for asking it just that way. And so, you sort of gave the answer away, I think, a little bit earlier.
Saviano
I'm sorry I did that. That's a leading question for the lawyer in you.
Greenwood
Lead away. You know, we danced our way through.
Saviano
I'm sorry, counsel.
Greenwood
Yeah. So, that's what I was paying attention to: a great nothingness. A gap, to use your word. When I was looking to see what the implications were, I sort of remembered, because I used to practice law, that there's confidentiality and some fiduciary duties and other things, which maybe we'll get into; it seemed like those were totally going to come up. And I could think of situations I was experiencing as I was playing with the capability, testing the technology, red-teaming it a little bit in an ad hoc way. Then I would just Google search to see, what's the ABA saying about this? They're sound asleep. Well, what's the state bar association doing? I don't know, they're off on a cruise somewhere, not looking at this. Nobody had anything to say about it. And so part of this was based on a strong desire to use the technology, to embrace it within our research and within our practice, and to propagate it by telling everybody the good news about what this new technology is, what it means to have intelligence as a service, in a sense, and how we could use that to supercharge our own capabilities and have better outcomes, but then to not do so in a way that was almost willfully ignorant about what the risks were and how to mitigate them in a practical way. And when I would search for ethics, I would see interesting stuff. There's still a whole quasi-industry around ethical use and responsible use of AI, and it's all very high up in the sky, certainly laudable, like transparency and good, broad values. But it seemed rather abstract to me, especially back when we started the task force. What I wanted was something that was very, very rigorously practical.
And I think the best thing that we did was start with the rules that currently apply to the practice of law; we enumerated them and then we crosswalked them. So, that was the gap that we wanted to fill, and that's what gave rise to convening the task force.
Saviano
There was a big gap in the world, I felt, and that's why you got the reception you did from those of us who came together. We'll quickly talk about who else was on the task force. But I have to comment on your ethics point, and I couldn't agree with you more. It's a soapbox issue for me, and you and I have talked about this: there's a problem with just stating responsible AI principles. Look back to 2020, when there were only about 80 organizations that had stated responsible AI principles; now there are literally thousands, everybody is doing it. The problem is that they're stopping at principles. As I mentioned on the show, we started some work with Harvard at the Safra Center for Ethics studying AI ethics, and we publish a blog every Friday; you can search for it under business ethics, Harvard, Safra Center for Ethics. We talk about these issues, that everybody in the world is stopping at principles, but we believe in the notion of applied ethics, practical ethics. You've got to make it real. What are the actions you can take to protect society and the world from this powerful technology? And what I loved about what you did with the task force is that we went right to the canons of professional responsibility that we are bound by as lawyers. If it's okay, Dazza, I'd like to hone in; we won't have time for all of them. There were seven key principles that we scoured as a task force, and in preparation for the show I actually went back to some of my notes from our discussions. Three of those drew most of our debate, and we had some spirited debate on some of these questions. Is it okay if we run through those three and talk a bit about each one? How is that?
Greenwood
Great. Okay. All right.
Saviano
Let's start with the first one: the duty of client notice and consent. Explain this principle and what the recommendation was from the task force. I want to pause and talk a bit about the debate that ensued on this particular canon and this particular duty.
Greenwood
Sure. So, first of all, let me fill everybody in on some of what we're talking about. If you're trying to follow along at home, you can go to law.MIT.edu/AI, scroll down not far, and you'll see a card, a big link to the task force we're talking about. Go ahead and click on that and you'll see the seven principles Jeff is referring to. And principle number one is, by the way, we kept them as principles, but almost all of them are based on duties, which are very objective.
Saviano
Yes.
Greenwood
Existing obligations of attorneys that we somewhat know how to measure, because people have violated them, or shown that they haven't violated them. So, we have some data here. Number one is the duty of confidentiality, and we added "to the client," in all usage of AI applications, though we're really focused on generative AI applications. And one great thing we did here, and if I remember correctly it was your suggestion on the task force, is that in the guidance we show examples that are consistent with the rules, of how to do it in a way that you're not violating the rule. Examples.
Saviano
The good and the bad.
Greenwood
Yeah, and that was a real masterstroke, Jeff. It really helped us organize our own thinking and make the work product more useful. So, the example that's inconsistent is: you share confidential information of your client with a service provider through prompts in a way that violates your duty because, for example, the terms and conditions of the service provider permit them to share the information with third parties or to use the prompts as part of training their models. That would be bad; it could violate your duty of confidentiality. And an example that's consistent is: ensure that you don't share confidential information in the first place, such as by adequately anonymizing the information in your prompts, or ensure that contractual and other safeguards are in place with the service provider. Something that we didn't say there, but I know we talked about, and that I would add now or if we revisit this in a future version: you could just run an open-source model yourself, in your own stack. With Mistral and some of the other really powerful open models, like the 70-billion-parameter models, especially when you fine-tune them and add retrieval-augmented generation, they can be perfectly good for a lot of client work, with no risk of breaching confidentiality when it's all happening on premises.
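The anonymization safeguard Dazza describes can be sketched in a few lines of Python. This is a minimal illustration, not part of the task force guidance; the function name, the placeholder scheme, and the example client matter are all hypothetical, and a real workflow would need a far more robust de-identification pass than simple term substitution.

```python
import re

def anonymize_prompt(prompt: str, sensitive_terms: dict) -> str:
    """Replace each sensitive term with a neutral placeholder before the
    prompt leaves your machine. Matching is case-insensitive and
    whole-word so unrelated text is not mangled."""
    result = prompt
    for term, placeholder in sensitive_terms.items():
        result = re.sub(rf"\b{re.escape(term)}\b", placeholder,
                        result, flags=re.IGNORECASE)
    return result

# Hypothetical matter: swap identifying details for placeholders.
mapping = {"Acme Corp": "[CLIENT]", "Jane Doe": "[CEO]"}
raw = "Draft a demand letter from Acme Corp regarding Jane Doe's departure."
print(anonymize_prompt(raw, mapping))
# Draft a demand letter from [CLIENT] regarding [CEO]'s departure.
```

The same mapping can be kept on hand to re-substitute the real names into the model's output after it comes back, so nothing identifying ever reaches the provider.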
Saviano
And one of the issues that I felt was probably the most hotly debated as a task force was this duty that may exist to tell your client, just putting it in simple language: if you're using a generative AI tool to enable you as a lawyer in serving your client, do you have an obligation to tell your client that part of the work was enabled by a generative AI tool? I felt like we had a spirited discussion as a task force on that particular question. Am I remembering that right, our discussion about whether, in every instance, the world is ready for a carte blanche duty for lawyers: if you're using a generative AI tool, you should tell your clients that you're using it?
Greenwood
You are right that this was, and frankly I think it remains, a somewhat debated question about how this duty plays out. We actually put an asterisk next to it with some further disclaimers about that. But first of all, let me just say, our lawyer listeners and other practitioners will probably know this, but for anyone else who's maybe not super up to speed: lawyers are fiduciaries to their clients. And one of the core duties you have as a fiduciary agent to a principal, the client in this case, is a duty of candor and disclosure.
You have to tell the principal everything that's relevant to them, unless there's some specific exception or reason not to, like they said, don't give me daily updates on this kind of stuff, only tell me that. But for the most part, if you know something as the agent, then the principal is held to know it too. You're supposed to tell them, and they're supposed to ask you. So that's the starting point, and it sort of assumes perfect information parity. That's a lot; in practice, everybody knows that almost never happens. Principals don't want every detail; that's the reason they hired you, so you go and do the work and tell them the material things, or the things they ask for, and so forth. People have to find ways to balance that. When does something rise to the point where you have to disclose it to a client? A typical example in this space is when someone makes an offer to settle the case: you have to bring that to the client, it's not a judgment call. But if you're flipping through case law and you discover something, or you're trying out a new app while figuring out how to handle some huge discovery request, an administrative task, do you have to wake the client up and say, "hey, I just noticed this configuration menu deep down in this discovery platform, and I just want to make sure you knew," as part of their case? There's a sort of continuum here. So where does using generative AI fall? People aren't sure whether this is a big thing or a small thing. Is it expected that you're doing it? Is it surprising? So, just to pull it together: on that continuum, people have different opinions about the place of generative AI and what a client would reasonably expect.
Saviano
Yes.
Greenwood
Sorry to interrupt.
Saviano
You didn't interrupt, and I appreciate that explanation. I suppose it comes down to this, Dazza: is the application of generative AI to a client's legal issue fraught enough with peril that, as a default, you probably should tell your client that you were using it to enable their work? Now, what makes it complicated is that I saw a statistic the other day estimating that by late 2024, something like 75% of all software code will be generative-AI-influenced. But that's not what we're talking about; I don't mean underlying code that may have been influenced by generative AI in the hands of a software developer. We're talking about a lawyer using a generative AI tool in order to serve that client. As a task force, I think where we came out is that if it was ordinary and usual in your relationship with that client to disclose such things about technology, then you probably should disclose it. I was probably a bit more on the conservative side, frankly, just between us and all of our listeners. I feel like in these early years, still these early days of the technology, more often than not it would probably behoove a lawyer to disclose it to their clients. That was my position, and it's still my position until we get further along in the development. Do you think that's reasonable?
Greenwood
Completely. And with your point of view, I think we ultimately found a consensus on how to address it, and the guidance reflects everybody's point. Everybody had a really insightful aspect of the question, and we were able to blend them together. For those of you who don't feel like clicking through to the page and reading all the guidance, the word that we used to encapsulate a lot of this, and I have it right here: you would breach the duty of client notice and consent about the use of generative AI if the client would be surprised by, surprised is the word, and object to the way in which the attorney used generative AI for their case. That would absolutely be a pretty clear example of: you should have told them.
Saviano
Yeah, that's helpful language. We've heard that from people, too; the feedback on the task force has been great. People appreciate that there was at least a standard to apply. That's one standard, right? A minimum standard, I would say.
Greenwood
A minimum standard. And it's an example; we couldn't cover every single data point, and we can't see the future, but we could start to give some contours to how to look at this. And there's more language on what's consistent and inconsistent. But one of the things I liked about that way of, you know, squaring the circle is that "standard" is a good word for it. So, in January 2023, clients might be very surprised to hear about what's happening here. In January 2024, maybe. And in January 2025, they might be surprised if you're not using it.
Saviano
That's a really good way to put it. Yeah, I agree with you, Dazza; that is well said. It's happening that quickly, right? I think what you just forecasted was phase two of the task force, but we'll come back to that. Okay, I want to give you one more that we had spirited debate on: the duty of competence, the duty of competence in the usage and understanding of AI. This one may have been somewhat contentious. The question was: in order to be, quote, "a competent lawyer," are we at the point where you must consider how a generative AI tool could be used to better serve your client? Not to definitively use it, but to consider it. And you should also know, I'm not sure if I told you this, but I pulled the work of the task force into the class I teach at Boston University School of Law, and we spent a lot of time as a class talking about it. These are young lawyers; they'll be newly minted lawyers in a matter of months. On this question of competence, I can tell you where they came out. But what's your recollection on this question? How do you feel about whether you need to at least consider these tools to be a competent lawyer?
Greenwood
If you're a lawyer, hear me now, you need to be competent in the use of generative AI in order to be minimally competent at the practice of law.
Saviano
That's clear language. Okay.
Greenwood
That's pretty bold. And I believe it. That's the fact, Jack.
Saviano
I don't think you were that bold six months ago. I agree with the way you articulate it, by the way; I am with you. I think we've already shown how powerful this technology can be. Keep going. Why do you feel that strongly?
Greenwood
So time has shown, now that this technology has been widely available for more than a year, and it has advanced considerably in that time. What was widely available at the end of November 2022 was GPT-3.5, which was radically surpassed by GPT-4, and now, as of a couple of days ago, we have Gemini Ultra. I'm not supposed to name products, but these are models; I'm describing technology models that happen to be connected with certain companies, so forgive me that. The point is, the capabilities of these models, based on objective benchmarks and evaluations, include, for example, acing the bar exam. They are performing all kinds of fundamental reasoning tasks, or what would be reasoning if a human were doing them, with all kinds of remarkable capabilities. And this is really critical: some of those capabilities are the capabilities that constitute the practice of law. There was a paper that just came out recently called "Better Call GPT," sort of a funny quip, right, Better Call Saul. It didn't have the most airtight methodology; we could want a bigger sample size, and we could quibble about it. But fundamentally, what it showed was that this technology was comparable to or better than actual lawyers on a lot of core legal tasks; that's what I got out of it. So you can see where I'm going, if I may. What this means is that we have this powerful tool for the practice of law, and it's in all the legal tech, Microsoft has it in Office, it's all out there, it's happening now. It's a hand-in-glove fit for law practice. It will make your job faster, cheaper, and better when you use it right. That is why I'm saying it, and everyone else is using it.
And so, if you're not using it, you're now officially behind, and you are not competently practicing law if you don't know what it is and how to use it.
Saviano
It's that comparison. I love the language, and it's the comparison to the lawyer who perhaps is already using it: you need it in order to be as competent as that lawyer. I can tell you, I mentioned that we incorporated this into my class, and I feel like my class of 24 students universally got it. This is kind of funny: I had a few students update their resumes to say they are AI-enabled lawyers. They are the future, and I think they see themselves as the next generation of lawyers coming right around the corner. Okay, I want to talk about another of the seven that I found particularly interesting: the professional responsibility duty of supervision. There is, as part of the canons of professional responsibility, a duty of supervision that was crafted in the context of human supervision. You think of supervision, you think of more experienced lawyers supervising junior attorneys. How does the notion of, quote, "supervision" apply to an AI system?
Greenwood
Yeah, that is such a provocative question. I think there are different points of view on that very question, and it starts with how we regard what the generative AI system is. The more you regard it as a tool, the less applicable the duty of supervision is, because we have other ethical duties covering tool use, in terms of competence and so on. Is the tool breaching confidentiality? It's a human duty to look at your tool to make sure it's not breaching confidentiality; have you updated the security patch, et cetera. The more we look at it as, the word I would use here is agentic, the more we regard it as a type of agent, and the more we use and configure it and rely on it as something that goes a certain distance without point-by-point human review and approval. When we start putting it into more automated processes and decision chains, things like that, the more attentiveness is needed, and the better the fit, potentially, of either the duty of supervision or some ancillary duty. Maybe we use a slightly different word when it's applied to pure technology; I'm still slightly uncomfortable anthropomorphizing it with this sort of supervision, but a duty like that might be more appropriate. I should say, since I moved to California, I've been really psyched, really stoked, to hang out with these California lawyers. The State Bar of California has a legal ethics group called COPRAC, that's the acronym, and they took our MIT guidelines and used them as the starting point for guidance to California lawyers. I think that was one of the successes of our group, and I was honored to serve as an advisory member of the working group that drafted those.
They really worked it through and tried it different ways in different drafts. You'll notice in the final guidance, which we link to from the MIT task force site, that they didn't apply the duty of supervision to the use of generative AI, because they didn't regard it, at least at this point in history, as agentic. They regard it more as a tool, and therefore the same things you would have to do under the duty of supervision you still have to do, but they sound more in the other duties we have to our client with respect to the use of our tools and our processes in our work product.
Saviano
Yeah, and I think you're right. It is great to see, and we've talked about the adoption by the California bar; we consider that such a victory. That's why we participate in these task forces: we don't want the work to sit on a shelf, or solely in a GitHub folder. We want people to take it, use it, and improve lives as a result. And I appreciate your explanation of the duty of supervision; I came away with an even more basic application that I think benefited some lawyers who have been interfacing with our work: the duty they have to know that, if they're using technology, they're responsible for how that technology is used, that they've done some, quote, "diligence" on whether there's quality output, and that it rests on their shoulders. Just as the duty of supervision applies to a junior attorney working on their team, with the more senior attorney bearing that responsibility. What I'm hearing from lawyers, Dazza, who interface with our work is that they appreciated the reminder that if they're using any technology system, not just an AI system, they're on the hook for it. Is that too basic a read of this particular duty? Because I've heard that people got some benefit just through that reasoning.
Greenwood
I'm glad to hear it. And that was, after all, why we ended up including it. I think about the line: we added duty number seven, the duty of accountability and supervision. The way we phrased it, by the way, really gets to what you said, and I was behind it. I do not regret having had this as part of our guidance at the time we put out that version, I guess around April of last year. And while I appreciate the way the California bar looked at it, and I think that's also valid, our purposes are somewhat different from a state bar association's. We're trying to surface all the issues; we want to be helpful and constructive, and give people more resources to make their own decisions. And we did it earlier than the California State Bar. The way we phrased it was different; the California State Bar was also somewhat constrained, because they were applying a very specific, promulgated set of regulations. The way we were looking at it was somewhat broader, and it included accountability as well as supervision. I know this runs very deep at EY and other firms, where the practice for certain types of matters, or where the associate is below a certain number of years, is that a more senior attorney has to sign off with their name on that person's work. Literal accountability is what it means; it's not just mentoring and the other dimensions of supervision that sometimes come up. So, what we said was "duty of accountability" as the first words, and "supervision" too. And then we really doubled down in the rest of the sentence: to maintain human oversight over all usage of AI applications.
Saviano
I think: don't forget about the people.
Greenwood
Yeah, yeah.
Saviano
Don't forget the people; the human is necessary. Dazza, as we near the close of this great conversation today, I want to make sure we get to some real concrete actions that our audience can think about. And I'd like to go back to, I mentioned this is where we started at EY, and it was very concrete for us, I want to spend just a few minutes on prompt engineering, which we described as a user manual for asking better questions and using these systems more appropriately. What is prompt engineering? Explain what that means. And do you think that's a good concrete action people can take away: how do I get better at asking those questions?
Greenwood
That is such essential, practical advice, I think, for lawyers and for everyone listening to this podcast: the number one action item you ought to take up is to get educated now about how to use the technology. And the way that we use it as users is fundamentally through what we put in the prompt before we hit submit. We do that in different places: in ChatGPT and in Bard, or Gemini, rather, there's a little prompt box. Sometimes it's more infused into the application, like when we're using it as part of a set of office apps and other things like that. But what's common is that there's a prompt, and the prompt is user-generated, at least some part of it; there may be system prompts and other things introduced into it as well, but that's the part that we control. So, understanding how to compose an effective prompt, how to evaluate the output based on your prompt, and how to iterate that prompt in order to get the highest-quality, most relevant, most effective outputs is kind of like the critical skill of our age. I don't think that's putting too fine a point on it. And it's a learnable skill, as we've demonstrated together and as I've seen out in the economy, working double time, doing workshops and talks, getting embedded in teams, and doing all sorts of things; I've seen that people can learn how to do this. One of the best things is what you said early on, which is that the way we do it is through talking. So, in some ways, lawyers have an unfair advantage. Lawyers are talkers; we're trained to use language.
Saviano
It's true. And now I appreciate that.
Greenwood
You know, but what you find is that there's a dialect of language we have to become competent at, to understand and be skillful at its use. That includes ways to instruct, to affirmatively instruct the model on what it's supposed to do, the sorts of things you can instruct it on that it's good at, and the sorts of things where you have to give it additional context for it to process correctly, and so forth. You know, I've been doing some work with the Department of Defense recently, which I'm proud of. And one of the things I'm starting to notice, and my dad was 20 years in the Navy, so I sort of knew this in the back of my mind, but I'm now clear on it, is that there's a whole art and science to how to give an order. When you're giving an official order to somebody, like "take these torpedoes and move them to bay number three" or whatever, you're giving them an instruction that is super clear and authoritative, and there's a whole language around it. We sometimes have similar things in law, like when we're doing an injunction; there are certain times when we have the magic incantation that makes the thing happen. Think about prompting almost like that: put that hat on and you'll get much better results.
Saviano
It's such a good vantage point; it's so helpful. My dad was a Navy guy, too. I never knew that about your father; that's cool. I also want to say that if our audience is interested in this, you posted some videos to law.mit.edu with some sample legal prompts. From when you helped us, I still take away your example: if your CEO is leaving the organization and you're thinking of bringing a suit, the magic of persona-based prompting. Imagine you have the model assume the persona of an aggressive, I'll say New York City, litigator; people will write me about that, and I mean it with all due respect. Assuming that persona, it will assume that strategy. But what if you want to preserve the relationship and seek mediation instead? How would you go about that dispute? Ask generative AI to compare and contrast those two approaches. We learned a lot from you about the magic of persona-based prompting. If you need to explain something complicated, assume the persona of a high school teacher, who may be particularly good at explaining it. There's so much value there. The other example that you gave: I feel like so many people are still using these tools for single questions, but that's not how we work as professionals; in life we work in projects. It
could be a three-month project where you have, like, 100 questions, right? So, keeping one session of the tool going and asking it multiple questions, it gets better and better and better. That's something that we really took away from that training, too. So, if you're interested as an audience, am I right, am I okay to say that there are some cool videos at law.mit.edu where they can see those examples?
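The persona-based, multi-turn pattern Jeff describes maps directly onto the chat-message format most generative AI APIs use: a persona goes in a system message, and follow-up questions are appended to the same history rather than started fresh. The sketch below is purely illustrative; the helper names and persona wording are hypothetical, and it only builds the message histories rather than calling any particular model.

```python
# Two persona framings of the same hypothetical dispute, each kept as a
# running chat history so later questions build on earlier answers.

def start_conversation(persona: str) -> list:
    """Begin a chat history with a persona-setting system message."""
    return [{"role": "system", "content": persona}]

def ask(history: list, question: str) -> list:
    """Append a user turn; in a real call, the full history would be
    sent to the model so each answer builds on everything before it."""
    history.append({"role": "user", "content": question})
    return history

litigator = start_conversation(
    "You are an aggressive big-city litigator advising on a suit "
    "over a departing CEO."
)
mediator = start_conversation(
    "You are a mediator focused on preserving the business relationship "
    "in the same dispute."
)
for q in ["What are our strongest arguments?", "What are the main risks?"]:
    ask(litigator, q)
    ask(mediator, q)

print(len(litigator))  # 3: one system turn plus two user turns
```

Comparing the two histories' answers side by side is the compare-and-contrast exercise Jeff mentions; the only difference between them is the opening system message.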
Greenwood
The check's in the mail. Yes, please shout it far and wide. The reason we put that material at law.mit.edu under Creative Commons, for free, is specifically for charitable purposes, so that we can share information far and wide, free and open to all. So I encourage everybody to look at it. Most of what I do in the private sector is confidential; I can't tell anybody about it, and there are all kinds of great solutions and really interesting things happening out there. All sorts of companies, governments, and other entities are picking this stuff up and making breakthroughs. In the fullness of time, I think the information will spread more widely. But for now, when I see something I'm not prohibited from sharing, I go right back to law.mit.edu, and if the editorial team and everyone else doesn't disagree, we try to get as much out there as possible. I encourage everybody to share your nonproprietary prompts and learnings so we can float everybody's boat, all the better.
Saviano
I feel like that's a Connection Science principle as well, and we appreciate that you do it; you look across Connection Science and you see that. Now, we're really getting close to the end. You had said something about ethics, and I just have to ask: do you feel that the legal industry is now paying enough attention to the ethical issues that emanate from AI? I hope that through our task force they've learned something and that it helped to educate some. But do you think it's still a bit of a work in progress, or are you encouraged by the respect for these ethical principles among lawyers, particular to AI?
Greenwood
You know, it's a work in progress still, and that's understandable. As we're taping this in early February 2024, it takes time, appropriately, for institutions to adapt to new technologies. This is still relatively new; the uptake among lawyers is still somewhat novel. My assessment is that only a tiny fraction of state bars in the United States have provided guidance. California is number one, and I happen to believe it's really great, extremely usable guidance in the context of California's codes of conduct and responsibility. Florida has something, but it's more partial; it doesn't go across all the rules and apply them to generative AI. Most states are still somewhere in the process of coming up with guidance, and some haven't really started in earnest yet. So if you're in a bar association, or you're active in a state that doesn't have adequate guidance yet, come on down to law.MIT.edu/AI.
Saviano
Love it.
Greenwood
And check out our task force page. We have links to the California guidance. Argentina actually just sent me something in a Telegram channel where they're sharing similar principles publicly, to some extent. So we link to examples that you can look at to get you started. But if you're in a place that doesn't have this yet, go ahead and get started now. It will really help the attorneys in your jurisdiction to responsibly adopt and use the technology.
Saviano
Dazza, what a great conversation today; I just can't tell you how much I appreciate you coming on the show. As a long-time listener of the show, you know we have a way we like to close out each of our interviews: three quick questions, quick answers. What do you say, are you up for it?
Greenwood
Let's do it. I want to do it.
Saviano
All right. Here we go. Let's do it. First question: what's a book that has greatly impacted you?
Greenwood
Snow Crash by Neal Stephenson. Really great speculative science fiction. And I know that people listening to the audio podcast can't see it, but look what I've got: the Apple Vision Pro.
Saviano
You do.
Greenwood
The first day I got it, I was up at five in the morning and clicked my way to happiness. So now the mixed reality capabilities are finally upon us. We actually have generated avatars, and we've got a lot of interesting stuff happening with spatial computing, and that's going to keep getting better and better. Snow Crash is a terrific exploration of what that world might look like, and of who we are in the face of it. In the nineties, when I first started teaching at MIT, Snow Crash and Neuromancer were required reading for my grad students, and I continue to think it's a terrific book.
Saviano
Wasn't that where they first coined the phrase "the metaverse"? That was first from that book, or am I getting that wrong?
Greenwood
It was "cyberspace" in Neuromancer. Or maybe it was... I'd have to go back and check the word.
Saviano
Yeah, I'm going to check that out tonight, too. All right, Dazza, here we go. Question two. What piece of advice would you give to a younger version of Dazza?
Greenwood
I would tell my younger self to always remain adaptable, to really prioritize continuous learning and to maintain a focus on ethical considerations. When I was much younger, I was fascinated by ethics, especially in politics. I took a course on ethics at Harvard Divinity School just to really get clear on it, because there are a lot of gray areas in politics, and I felt ethics were sorely tested there. Then, when I was in law school, I wrote a lot of papers about legal ethics. It's always been somewhat fascinating to me. I assumed that once I understood it, I could check the box and move forward, but it's actually a lifelong journey of learning and exploration: how one applies ethics to oneself and one's own conduct. I'm not talking about judgment of other people; I'm talking about applying ethics to ourselves. So I would tell myself: stick with it and keep at it.
Saviano
It's such good advice. I've learned so much this past year from working with the Harvard ethics team, and I've learned that ethics is about asking questions and resolving tensions. If it's a 90/10 tension, most people will get that right. But it's the conflicting priorities, the 55/45 tough calls, that I feel people need some help with. Dazza, last question: what areas or industries do you think are ripe for innovation, let's say, in the next three to five years or so?
Greenwood
Three to five, okay. That's a longer horizon, but maybe even sooner than that: I think public services, at the state level in particular, where they're somewhat more nimble. I'm hearing some really good things from state CIOs, and I'm starting to see AI czars and AI officers, people taking a second look at these old, clunky bureaucratic processes, and opening them up, to the extent we can, to citizens, businesses and residents so that they can understand and work with government at the state level. That includes the judiciary, it includes all the administrative agencies, the DMV; there's so much there at that more local, state level of government. I think that's a place where we could see transformation more quickly than at the federal level, and the local level may not have the resources. So I've got my eye on state governments, I've got my fingers crossed, and I'm hopeful.
Saviano
Laboratories of democracy. They are often the testing ground in this country for policy, and I hope they rise to that challenge and take on that traditional role again.
Greenwood
And I think the early indicators are that they may, and they are. The other thing I'd say is that I would change the nature of the question a little bit. You talk about industries, and every industry is being transformed in various ways; I just saw another report yesterday talking about the deep transformation across all sectors, and the trillions of dollars of savings and new value that would be generated in the next ten years just from this technology. But the one I think is more profound, that people aren't thinking and talking about as much, is the individual. We are individual actors in the economy, in society and in our lives, and now we have the advent of personal AI that can make use of our personal data and maybe reside locally on our desktops, our laptops and our smartphones. I've already spun up an open-source, somewhat powerful AI model on my phone, in a Jupyter-style notebook. So I think we're beginning to see the devolution, the pushing down of this technology to the endpoints, to individuals. And that adoption, I think, is going to be among the most profound in terms of unleashing the potential of people at last.
Saviano
It's so well said. I actually think that even within an organization, the value could lie in the aggregate of how all employees and people are leveraging these tools, more so than in enterprise-wide applications. The way you said it is perfect. There's a whole other show in how these tools enable this hyper-personalization. Dazza, my friend, this was very long overdue. I can't tell you how much I enjoyed this; I appreciate it. I miss you in Cambridge. I'm coming to the West Coast soon, and it'll be good to see you. Maybe we can play some tennis again; it's been a long time. That would be fun. Thank you so much for coming on the show and sharing all of your wisdom, all of your Dazza wisdom, with our listeners.
Thank you so much.
Greenwood
And thank you so much for having me. I can finally check being on your podcast off my bucket list, and I'm just so thrilled to have had the chance to talk with you again in longer form. And you are always welcome here in the Bay Area, where I think you'll be right at home. I'll take you to all the cool meetups.