EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
How to make finance and accounting AI-ready using semantics and ontologies
In this episode of the EY Microsoft Tech Directions podcast, explore why AI needs semantic context, curated metrics, and trusted data foundations to unlock enterprise use cases.
In this episode of the EY Microsoft Tech Directions podcast, Amir Netz, Technical Fellow at Microsoft, explains why artificial intelligence (AI) creates real impact only when grounded in business semantics — where clear context, shared definitions, and governed metrics turn raw data into trusted insights.
He is joined by Michel Porter, a Partner at Ernst & Young LLP. The conversation underscores why trust is paramount for finance and analytics leaders, with explainable reasoning and controlled calculations at the core of confident AI adoption. Together, they explore how the future of data and AI blends human-designed reports with intelligent AI agents that rapidly synthesize insights across complex, fragmented data environments.
Speakers:
Michel Porter, Partner, Ernst & Young LLP
Amir Netz, Technical Fellow at Microsoft
Key takeaways:
AI delivers value only when grounded in business semantics — context, definitions and curated metrics turn raw data into trustworthy insights.
Trust is everything: explainable reasoning and governed calculations are critical for AI adoption in finance and accounting.
Michel Porter
Well, Amir, thanks for joining me tonight. Something that's very key in the market right now is that a lot of companies are thinking about the intersection of AI over structured data as compared to unstructured documents. And I think a lot of companies are seeing that deploying AI over their structured data and their data estate is very difficult compared to having LLMs and generative AI read more natural-language-type documents. Can you hit on Microsoft's perspective on that a little bit, and what you're thinking there?
Amir Netz
We learned the hard way. When the whole ChatGPT wave started three years ago, everybody thought, oh my God, I can just put the AI on top of my SQL database, ask a question, it will generate the queries, and everything will be solved. It just works, right? It's not working like that. What we learned is that putting AI in front of a database is like hiring a bright person from Harvard Business School: top of the class, IQ 165, ready to go. You put them in front of the SAP database and say, the boss is asking this question, give me the answer. And you know what happens: they'll stare at 10,000 tables and tens of thousands of columns. They won't understand what the tables mean, what the columns mean. They won't even understand the boss's question, because that's not how we do it. We take that very smart graduate and give them orientation. We teach them: what is the company, what do we do, what is the database, what do the tables mean, what do the columns mean. You have to teach them. And if that person doesn't have the context, the semantics of what's going on and the meaning of things, they will get confused. So your smart business-school graduate will raise their hand and say, I need help, I don't understand what is being asked, I don't understand what the database means. The AI doesn't raise a hand. It just hallucinates and gives you a wrong answer. What we learned is that you need a layer that explains things, just like you would explain them to a new employee. You have to explain it to the AI, and that layer we call Fabric IQ. That is where you put in the mapping between the tables and columns and the business entities and business concepts that you have.
You explain how you measure things. You explain what the business means and what the goals of the business are, just like when you hire a new employee and say, what are we doing here, what are we trying to achieve? You have to explain it to the AI so it can make sense of things, so it can reason and give you answers that actually meet your expectations, the same expectations you would have of a human if you asked the same question. So all this understanding, that you need the context, the semantics, and the meanings for the AI to be able to reason, is just a precursor. It is essential before the AI can even say, okay, now that I know all that, how do I write the proper SQL query against the database? All of that is what we learned and built over the last three years, and I think we are really getting places now.
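The mapping Netz describes, from physical tables and columns to business entities and metrics, can be pictured as a small lookup layer that an AI consults before writing any SQL. The sketch below is purely illustrative; it is not Fabric IQ's actual API, and the SAP-style table names are used only as familiar examples:

```python
# Illustrative sketch of a semantic layer: mapping raw schema names to
# business concepts so an AI (or a new analyst) can interpret them.
# The structure and field names here are hypothetical, not a real product API.

SEMANTIC_LAYER = {
    "tables": {
        "VBAK": {"entity": "Sales Order Header", "description": "One row per customer order"},
        "KNA1": {"entity": "Customer Master", "description": "Golden record for each customer"},
    },
    "columns": {
        "VBAK.NETWR": {"concept": "Net Order Value"},
        "KNA1.KUNNR": {"concept": "Customer Number"},
    },
}

def describe(identifier: str) -> str:
    """Translate a raw table or column name into its business meaning."""
    if identifier in SEMANTIC_LAYER["tables"]:
        t = SEMANTIC_LAYER["tables"][identifier]
        return f"{identifier} = {t['entity']}: {t['description']}"
    if identifier in SEMANTIC_LAYER["columns"]:
        c = SEMANTIC_LAYER["columns"][identifier]
        return f"{identifier} = {c['concept']}"
    # The key behavior: an unknown name should be flagged, not guessed at.
    return f"{identifier}: unknown -- the AI should ask for help, not guess"

print(describe("VBAK.NETWR"))
print(describe("XYZ123"))
```

The point of the sketch is the last branch: without the mapping, the model is in exactly the position of the new hire staring at 10,000 unexplained tables.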
Porter
It's funny the way you phrased it. I feel like a lot of C-suites, and specifically people within finance and accounting, are learning the exact same lesson you just mentioned. Some experiences that have been shared with me: take a concept that's very key to accounting, like a basic balance sheet flux explanation: hey, why did these accounts go up, why did these accounts go down? They'll throw that into a static Excel document that's not linked to anything, and then ask Copilot, why did cash go up, why did AR go up, et cetera. And then they're very frustrated when it doesn't know the answer. When I've talked to them, what I've said is, well, in your head, when you're answering that question, you have knowledge of what is in Excel, yes, but you also have knowledge of all the other data layers connected to it. That's all sitting in your head, the semantic knowledge of that company, and that's what allows you to answer the question. So when you put it into plain English like that, hey, you need to translate the knowledge that you actually need into the AI having that knowledge, it becomes substantially easier to think about, right?
Netz
The moment you start thinking of AI as an artificial human, you ask, what does a real human need to be successful? I need to give the same tools to the AI. And it's not just the tools of the technology; it's the understanding of the context of your business, the context of your data, your goals, your processes. You need to explain that, because just as a human will not be successful without that context, the AI will not be successful without it.
Porter
It's so funny that you phrase it that way, because I was giving a presentation in my local market last week, and one of the concepts we talked about is that you need to train AI the same way you train an intern.
Netz
That's exactly right. The moment you get into that mindset, everything becomes easier. The more you understand that you have to onboard AI the same way you onboard an intern or a new employee, the more you start borrowing from the world of HR into the world of AI. Microsoft announced Agent 360 today, which is really about having a 360-degree view of all the agents: what they do and how they work. You could say it's the HR department of AI, right? It's how you know all the employees, what they do, what authority they have, what they can approve on their own, and what they have to ask other people for. All of these things that used to be human processes now become agent processes.
Porter
I think what's also key from a structured-data standpoint, and how we talk about it from an accounting side, is that it's very difficult to understand what the AI did to answer a question. For example, when you're asking a prompt against, say, a Delta Lake or a specific table structure, the observability and transparency of the result that comes back requires a certain skill set to interpret. And I think that has been hard for people to think about from an accounting, finance, and tax perspective. As an example, if you asked it to read a lease, pull out the key phrases, and provide citations, it's very easy for someone to review that: click on the citation, it takes you to the document, easy to read. When what they're having to understand instead is a SQL query or Python coming back, that's a much bigger challenge. How do you think about that?
Netz
Exactly. It becomes a trust game in some ways. The question is, are you ready to trust? It's just like if you had a data analyst and you asked them a question, and they come back and say, I ran the analysis and here's the answer. How much do you trust that answer? How often do you ask, are you sure? What did you do to come up with that answer? If you've worked with that data analyst for a long time, you know them; you've asked many times, you know how they work, you know they do the right job, and you don't doubt it very much. Compare that with somebody you've never worked with before who comes with an answer, where you say, hmm, I need some proof, some verification, some validation that what you did is correct and you actually knew what you were doing. That same process is going to come with AI, because AI is new, and you ask complex questions and say, how do I know I can trust your answer? I don't know that you actually took the right steps, wrote the right Python code, ran the right SQL queries, and that your reasoning chain was correct. So for the AI, it's not sufficient to just give the answer. It has to explain what path it took, what method it used, and why, to convince the recipient that it's actually a good answer.
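One concrete way to deliver the "explain your path" behavior Netz describes is to return the answer bundled with its reasoning steps and the exact queries executed. This is a minimal sketch under assumed names (the class, fields, and the sample SQL are all illustrative, not a real product's output format):

```python
# Sketch: returning not just an answer but the evidence trail behind it,
# so a reviewer can verify the reasoning. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    question: str
    answer: str
    steps: list = field(default_factory=list)    # reasoning chain, in order
    queries: list = field(default_factory=list)  # exact SQL that was executed

    def audit_report(self) -> str:
        """Render a human-reviewable trail: question, answer, steps, queries."""
        lines = [f"Q: {self.question}", f"A: {self.answer}", "How I got there:"]
        lines += [f"  {i}. {s}" for i, s in enumerate(self.steps, 1)]
        lines += ["Queries run:"] + [f"  {q}" for q in self.queries]
        return "\n".join(lines)

result = ExplainedAnswer(
    question="Why did cash go up this quarter?",
    answer="Cash rose mainly because AR collections accelerated.",
    steps=["Pulled the balance-sheet movement for cash",
           "Compared AR aging quarter over quarter"],
    queries=["SELECT ... FROM gl_balances WHERE account = 'CASH'"],
)
print(result.audit_report())
```

The audit report plays the same role as the citation link in a document-review workflow: a reviewer can click through from conclusion to evidence instead of taking the number on faith.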
Porter
Yeah, that makes total sense. To take a deeper dive just for a second, what, how do you, how do you think about kind of the efficiency of queries? So, when you're asking AI a question versus, say, if you were going to pick Power BI to answer the exact same question, I mean, how do you think about kind of the difference between this?
Netz
I like the analogy of how we used to do research of any kind. If I wanted to know anything about something three years ago, the starting point was, hey, I'll go to Google, put a search term in, get the ten blue links, and start going through them, reading the websites, assimilating the knowledge, and synthesizing something I understand. That's how I used to work. I don't do that anymore, and I don't think you do either. You go to ChatGPT, you go to Gemini, you put in the question, and under the covers the AI does the search, clicks, in its mind, on the ten blue links, reads the content, synthesizes everything, and comes back and says, here's your answer. So you save tons of time: instead of maybe spending an hour or two on this, you get the answer in maybe a minute. The world of BI is going to go through a very similar transformation. Today, when there's something you want to learn about, many people say, okay, I have a bunch of reports; let me open the reports, page through them, go through the visuals, read the content, and assemble an understanding of the quantitative situation I have. With AI, it'll do the same thing. Instead of you going through the reports, the pages, the visuals, and trying to synthesize, you ask the AI the question. It goes through the reports, the pages, the visuals, synthesizes, and says, here is what I found. And it gives you a very detailed explanation: here is what we found in these reports, here are the visuals relevant to this, and if you look at all these visuals together, this is what you can conclude.
So you'll find that it's actually doing what you would have done manually, one by one. It just saves you tons of time. But the kinds of inputs it uses, the queries and visuals, will be pretty much the same ones you would have used yourself.
Porter
Yeah. What's fascinating about that is, if you think about designing agents to do work for you, agents are only going to be as good as the underlying knowledge base they're trained on, right? So it creates this circular-logic concept. Do you feel like what you just described changes the nature of how you would design business intelligence so that it can be better interpreted by AI, as compared to how you would design it for a human to interact with?
Netz
Yeah, absolutely. What we'll see is that you still need the reports, you still need the dashboards, because they explain things to the AI. You cannot imagine the LLMs working well if there were no websites in the world. Imagine somebody saying, hey, nobody needs websites, AI knows everything. But if you take away the websites, the AI knows nothing. Same here: the reports are still needed, but there will be fewer users visiting the reports and more AI visiting the reports. What the reports provide is a very clear signal to the AI of how you want to look at the data and what you regard as a good visualization that explains things. All of this shows human intent about how we understand the data. So when the AI visits those reports and gives you the answer back, it all looks familiar, it all looks intuitive, because it comes from the reports that, if the AI weren't there, you would have had to visit yourself. So yes, reports will still be built, but I think the assumption will be that many of the visitors of the reports will be AI agents, rather than humans going to the reports one by one and opening them. Now, this is not a blanket statement, because there will be many reports that you use on a regular basis, where you say, it's actually better for me to see the report and have it burned into my retina than to constantly ask the AI questions; it's more efficient. And by seeing the same report in the same layout day after day, I start noticing the small deviations and say, oh, something changed here. It becomes your knowledge source.
We train our brains to notice those small visual differences, and they trigger thoughts: huh, I can see it's slowing down, I can see it's moving up, I can see something different this time. So for a report you use daily, you'll still go to the report. But for a question that needs to span many reports, or reports you don't often visit, AI will be perfect.
Porter
And I think it brings up a new persona as well, right? One of the struggles we've seen within EY and within the market is that analytics and reports make sense to some people. Just as you're saying, if you look at the same one every single day, you know its intricacies, its ins and outs; it's very easy for you to decipher what's happening. Other people look at it and see a totally different language and can't interpret it. So one of the things I like about integrating chat into that concept is that it brings in a new dimension by which people can do analysis too, and maybe opens it up to a persona group that historically was unable to interpret what was going on or actually read the analysis.
Netz
Yeah, there's no doubt. Not everybody is an expert in data; not everybody has that intuitive understanding. So if you have something like the AI, it makes things simple. It explains things. It's not just, look at the visual and figure it out yourself; the AI says, look, I looked at this visual and also at that visual, and notice there's a correlation that might explain what's going on. It's like having your own personal data analyst sitting next to you, not just handing you the raw material to figure out, but saying, I looked at it and I can explain what's going on, so you don't have to stretch your brain around it.
Porter
And what I really like as well is the ability to use what, in our domain, we call deterministic calcs: calculations you're already comfortable with for your organization. Whenever I've seen chat-with-your-data concepts in the market where people promote letting the AI take over the actual calculations themselves, that's where, from an accounting, finance, tax, and audit standpoint, people start to get a little uncomfortable, because open-ended prompts against data sets, where you're handing over all the math, lead to a variable accuracy rate. It comes down to the quality of the prompts and the quality of how you've constructed everything from a data standpoint. So the combination of the two worlds is what I really like: using AI as the orchestrator of the information, finding the information you have already certified as an organization, where you've said, this is the right math. But then, if you do want generative concepts, like, hey, find the anomalies, it's probably going to do that advanced analysis substantially better than anyone would by just opening up a raw data set.
Netz
You're talking about something very important here: the curation of the math, what we call metrics. And remember, the notion that we curate the metrics once so we can use them many, many times did not start with AI. The problem existed with humans: people came to the same meetings with the same raw data, but their analyses came out with different results because they used different math. Hence this whole idea of a single version of the truth. We said we need to curate the math once, and then everybody can do analysis, slicing and dicing, but the underlying math needs to be the same. We did that for humans, so they don't make things up on the spot and you don't get inconsistencies between people. And frankly, the same problem exists with AI. If you give the same question to two AI sessions with the same model, they can come back with two different formulas, both of them reasonable, both of them seemingly correct to different people. But you cannot have two answers; there can be only one consistent right answer, otherwise everybody's confused. So the notion of curating the math, curating the formulas, becomes just as important for AI as it is for humans, and even more important, because you don't know what the AI will do; it can go off in a weird direction. You don't want to let the AI be creative when it comes to the math of finances.
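The "curate the math once" idea can be sketched as a small metric registry: every consumer, human or AI, evaluates a metric through one governed definition, and anything not in the registry is refused rather than improvised. The metric names and formulas below are illustrative examples, not anyone's actual certified definitions:

```python
# Sketch: a curated metric registry giving a single version of the truth.
# Metric names and formulas are illustrative, not real certified calcs.

CURATED_METRICS = {
    # Each metric is defined exactly once; callers cannot substitute their own math.
    "gross_margin_pct": lambda revenue, cogs: 100.0 * (revenue - cogs) / revenue,
    "dso_days": lambda receivables, revenue, days: days * receivables / revenue,
}

def evaluate(metric: str, **inputs):
    """Look up the one governed formula; refuse to improvise ad hoc math."""
    if metric not in CURATED_METRICS:
        raise KeyError(f"'{metric}' is not a curated metric -- no ad hoc math allowed")
    return CURATED_METRICS[metric](**inputs)

# Two different sessions asking the same question get the same number,
# because the formula lives in the registry, not in the prompt.
print(evaluate("gross_margin_pct", revenue=1000.0, cogs=600.0))
```

The design choice is that creativity is allowed in slicing and dicing, but the arithmetic underneath is fixed, which is exactly the constraint Netz argues matters even more for AI than it did for humans.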
Porter
Correct, yes. What's also difficult is that some calculations require multiple parameters, right? It's very difficult for someone to just ask a random question if they don't provide all the correct parameters; to your point, you're going to get a totally unexpected answer. And the AI is very good at making assumptions without telling you about it, then coming up with something: here you go, that's the answer. You don't even know what it did there. So it's hard work, and it really resonates with me because, with our Helix application that I've talked with you about a lot, we have over 6,500 certified calculations. Every page has 20 or 30-plus slicers that provide different parameters, and then you have chained pages and chained measures and things like that. So when companies tell us, oh no, you just ask questions of the data and it will tell you, we say, that's just not the way it works.
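The multi-parameter problem described above suggests one simple guardrail: a certified calculation that refuses to run until every required parameter is supplied explicitly, so nothing gets silently assumed. This is only a sketch; the calculation and parameter names are hypothetical, not anything from the Helix application:

```python
# Sketch: a certified calc that fails loudly on missing parameters instead
# of letting a caller (human or AI) fall back on silent assumptions.
# The calculation and its parameter names are hypothetical.

def effective_tax_rate(*, tax_expense=None, pretax_income=None):
    """Certified calc: every parameter must be given; no silent defaults."""
    provided = {"tax_expense": tax_expense, "pretax_income": pretax_income}
    missing = [name for name, value in provided.items() if value is None]
    if missing:
        # Surface the gap so the caller asks the user rather than guessing.
        raise ValueError(f"Cannot compute: missing parameters {missing}")
    return 100.0 * tax_expense / pretax_income

print(effective_tax_rate(tax_expense=21.0, pretax_income=100.0))
```

An AI orchestrator wired to a function like this has no room to assume a value: it either collects all the slicer inputs or surfaces the missing ones back to the user.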
Netz
It is such a naive way to do it. And I think that's what we learned: those semantic models that were so important for BI are now the lifeblood of AI. That revelation of how important semantics are is really what led us to take that concept and make it into what we call Fabric IQ. You really have to have the curation, otherwise you are facing disaster.
Porter
So, to transition for a second, I'd like to jump to how people can get started, right? Where a lot of companies find themselves in the market right now is that they're buying AI assistants, enterprise-type license deals, and then they automatically expect it to work with their data. They're obviously finding out that it doesn't. Then they take a step back and realize they need the equivalent of a data platform, or they need to put everything into an ERP that has intelligence bolted on, and now there's the combination of both of those together as well. So for a company that finds itself with a fragmented data estate, systems everywhere, data everywhere, where should they even start?
Netz
Look, we know that fragmentation is poison for anybody trying to work with data, and we've been fighting it for a very long time, because organizations gravitate toward entropy of systems. Organizations live for 40, 50 years, and they've built so many data systems over those years. They created silos of systems, silos of platforms, platforms that don't talk well with each other, different types of databases. It makes life so hard for the humans who need to reason over all the data locked in those silos, segregated from each other, so fragmented. And AI doesn't have it any easier: if it's hard for humans, it's hard for AI. So what we're telling people is, the first thing you have to do is bring the data together. We call it unifying the data estate. We have this concept called OneLake, and OneLake is really about making it super easy to have all your data regardless of where it is. It could be in different clouds, in different databases; it could be structured data, unstructured data, streaming data, historical data. Bring it all to one place, a single pane of glass where you have all your data. That doesn't necessarily mean you move the data; the data can stay where it is, but OneLake lets you virtualize access to it, so it still looks like a single pane of glass for all your data. That's base number one; without it, things become really, really hard, and just as humans get confused, the AI will get confused. Now, once you have that, you layer on the next layer.
The next layer is really where you have to start making decisions, because you have to start explaining what the data means: how you measure things, and what the real customer record is, the one that's consistent, the golden record. You have to layer that on top and make decisions about what the right data is, how the data relates to each other, and what the right metrics are. You also start putting in more layers that talk about the rules: what constitutes good and bad, what to do when things go bad, and what the right treatment is. The same rules you would give to humans, you have to document, because the AI will need the same context. And you have to put in real-time signals, because historical data is good for some things, but to make decisions on the spot you want to know the latest, so you put the real-time signals in there. Once you have that, what we call the semantic curation, the Fabric IQ layer, now you put AI on top. You put in the agents and start giving them roles. You say, you are a data agent specializing in tax optimization; you are an agent specializing in fraud detection. Just as you have different roles for people in the organization, each with their own expertise, you build agents with special expertise, and you have to teach them what it means to do a good job, how they should operate, and what they should look for, just as you would teach a new employee, the new MBA who joins you, how to do a specific job. So you have agents on the data analysis side; we call those data agents.
And you have agents on the operational side that need to react in real time when things go bad, constantly monitoring what's going on: oh my God, this transaction, stop it, stop it, this is fraud. You build those agents, and basically you get teams of agents that collaborate with each other, each with its own role and its own expertise. That's the layer you put on top of the semantics, because they have to understand the business, the meaning of the data, and how these things relate to each other.
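The role-assignment idea above, treating agents like employees with defined specialties and authority limits, can be sketched as a small routing table. Every role name, limit, and the routing rule here is hypothetical, intended only to make the "HR for agents" analogy concrete:

```python
# Sketch: specialized agents with explicit roles, expertise, and authority
# limits, mirroring how roles are assigned to employees. All role names
# and escalation paths are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentRole:
    name: str
    specialty: str
    can_act_alone: bool              # may it act without human approval?
    escalate_to: Optional[str] = None  # who signs off when it cannot

TEAM = [
    AgentRole("tax-optimizer", "tax optimization",
              can_act_alone=False, escalate_to="tax-partner"),
    AgentRole("fraud-monitor", "real-time fraud detection",
              can_act_alone=True),
]

def route(task_specialty: str) -> AgentRole:
    """Send each task to the agent whose declared expertise matches."""
    for agent in TEAM:
        if task_specialty in agent.specialty:
            return agent
    raise LookupError(f"No agent specializes in {task_specialty!r}")

agent = route("fraud")
print(agent.name, "- autonomous:", agent.can_act_alone)
```

The `can_act_alone` and `escalate_to` fields are the part that echoes the Agent 360 discussion earlier: knowing what each agent may approve on its own, and whom it must ask otherwise.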
Porter
You make it sound so simple. It's incredibly hard, right?
Netz
Oh, it's incredibly hard. But we built a product, so it's easy: two minutes.
Porter
Yeah. And I think that semantic layer is what's so important, because, going back to my layers example from earlier, that's what's really telling it: here's how this layer talks to this layer talks to this layer. And what accounting and finance organizations really struggle with is that they also have a concept of stages of data. They not only have the raw data sitting in their systems; maybe they're already creating some transformation off of that, maybe they have a data product layer, maybe they have other semantic models sitting elsewhere. And worst case, which is what most accountants do, they export everything to Excel, and now you have totally disconnected data sets with those metrics.
Netz
Without semantics. You strip the semantics, and it's just raw data.
Porter
Yeah. But I think that's where the rule layer and the semantic layer really come in, because even if you put all of that information into the model, really understanding the rules of the road, the timing aspects, and how everything should talk to each other in that process is what will ultimately govern your overall accuracy.
Netz
Right. Exactly.
Porter
Well, hey, I know we're probably running out of time here, but it's been great catching up. You're a genius, as usual, and I could talk to you for hours, but I really appreciate your time.