Meet LenAI — A New Generative AI Tool


LenAI is a valuable tool in our consulting business. It's being utilized by our private capital teams to streamline digital intelligence work for private equity firms. So much of what our case teams do relies on rapid ingestion of vast amounts of data, including client and internal data, as well as first drafts. So, LenAI has been a game-changer.

While many companies have shied away from fully engaging with generative AI, Marsh McLennan has embraced its potential. Firm leaders have experimented extensively with the technology to come up with ways it can be used to improve business operations. Their efforts, helmed by Chief Information Officer Paul Beswick, eventually led to the development of LenAI, Marsh McLennan's unique internal generative AI tool.

Beswick and his team created LenAI to allow for a low-cost yet secure environment for problem solving and experimentation. Among other applications, LenAI is assisting employees by summarizing meetings, extracting key data from documents, and writing drafts of emails and presentations. The opportunities to streamline processes and expedite time-consuming tasks will only continue to grow in the future.

The creation of LenAI would not have been possible without a long-standing focus on managing risk. Multiple groups from across the firm, working in close collaboration, were able to deploy the technology quickly because of the security standards that are already foundational to Marsh McLennan's business.

For more details on how it came together, watch the conversation between Paul Beswick and Vivek Sen, the head of Oliver Wyman Digital in the Americas.

Paul Beswick

The first version of LenAI was up in a day. Now, we developed its capabilities a little bit more after that, but we were able to get there not just because this technology is actually particularly accessible, but because we'd already built a lot of the enabling components to let us do that in a way that is secure, in a way that meets compliance requirements. Those are pre-engineered into our platform and our solutions. And this was just a flavor of that implementation pattern.

Vivek Sen

Hello everybody. I'm Vivek Sen, head of Oliver Wyman Digital in the Americas. I'm here with Marsh McLennan's Chief Information Officer Paul Beswick, and we're here to talk about generative AI. Paul, let's maybe start in November of 2022. ChatGPT comes out, you know, we have 80,000 colleagues across Marsh McLennan thinking about it, talking about it, wondering how it's going to change the way we work. We have clients across 130 countries asking us about it. And, of course, so much attention on the technology across the world. How did you think about that moment?

Paul 

AI has often been hyped, but I don't think it was ever more hyped than it has been over the last few months. And there was undoubtedly, from the beginning of this year, a huge amount of excitement and enthusiasm about the potential that generative AI could have for us. I think AI itself isn't new. It's something that we've used within this business in a number of areas for many years. But there's really been a step change, I think, with the accessibility of generative AI and how immediately useful it is to so many people who don't have a data science background, who aren't deep into the analytics, in having an impact on their job. And so that really blew up for us as a huge swell of demand that we wanted to make sure we got out ahead of.

Vivek 

How did you think about organizing our response?

Paul 

We tried to make sure that we were balancing our ability to experiment and to learn, which we put a high degree of focus on, with the fact that we wanted to be very risk-aware about the way that we did that. And I think in contrast to a lot of firms, we didn't immediately reach for banning or blocking access to these things. We went out very quickly with a communication to everyone across the firm, really highlighting what the risks were, making people aware of them, and setting some clear guardrails about what people could and couldn't do, but actually encouraging some experimentation. We wanted to make sure that people were starting to think about how this could have an impact on the types of roles they were doing. And the best way to do that was through hands-on experience, but doing it with publicly available data and low-risk sorts of use cases, and just making sure people were finding their feet. So that was a starting point.

From there we started to figure out how we could bring this capability to the firm very, very quickly and do it in a way that wasn't going to be reliant on external services. We could do something that was internal, that was going to be secure, that was going to meet all of our very strict standards for this sort of technology, and create that capability. So, we started by making sure that we had access to the underlying services that we needed to do this, and that we were able to do that in a way that was going to be completely private. We would make sure that none of the data that went into it ever left our control, or ever ended up being stored on a long-term basis in a way that wasn't encrypted, for instance.

So our first step was experimentation with the public services under strict guardrails. Our second step was enabling API-based access for our more technical people to start to look at specific use cases where this technology could be useful. We actually implemented that with Microsoft. They have the ability to deploy private versions of the OpenAI models. We were able to do that in a way that meant none of the data going into that was being kept or logged. And so, we had a number of teams playing with different use cases from that API-based access as early as April or May. And then stage three was trying to put this back into the hands of people around the firm.
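To make the "stage two" pattern Beswick describes more concrete, here is a minimal sketch of what API-based access to a privately deployed Azure OpenAI model can look like, using Microsoft's standard OpenAI SDK for Python. The endpoint, deployment name, and prompts are hypothetical illustrations, not Marsh McLennan's actual implementation.

```python
# Illustrative only: calling a privately deployed Azure OpenAI model via the
# official openai SDK. Resource names and environment variables are hypothetical.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<your-resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo-private",  # name of the private model deployment (hypothetical)
    messages=[
        {"role": "system", "content": "You are an internal assistant. Use only the data provided."},
        {"role": "user", "content": "Summarize the key risks in these meeting notes."},
    ],
)
print(response.choices[0].message.content)
```

Because the deployment is private to the organization's own Azure subscription, prompts and responses stay within that tenant rather than flowing to a public consumer service.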

Vivek 

It’s everywhere now.

Paul 

It is now everywhere. I think we have about 12,000 people across the firm using our internal ChatGPT equivalent. Within the next couple of weeks, we'll have it out to all 90,000.

Vivek 

What are the use cases that have excited you the most?

Paul 

There are certainly use cases, like translating text or getting a quick answer to a simple question, that are very useful. But we wanted this to be not just another useful tool. We wanted it to be a place for experimentation, and we wanted it to be, if you like, a sort of zero-cost experimental playground. So, we added onto the basic capabilities the ability to do document upload, summarization, and Q&A. Quite a lot of the work we do across the firm involves understanding, digesting, processing, and extracting data from documents.

Vivek 

Sure.

Paul 

A lot of what we do is about understanding language, which is what these tools are extremely good at. So, we added that capability, and that let a significant number of interesting use cases across the four businesses that make up Marsh McLennan become things people could experiment with without having to involve any part of my technology organization early on. That meant we could have hundreds of experiments happening in different places, and the best of those we could then put some focus behind in terms of building out scale solutions. And we added internet search, because the hallucination problem that you can often have with generative AI is substantially improved if you can bring real, timely context to things. One of the ways to do that is to make sure we can bring in up-to-date, high-quality results from internet searches as well.
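As an illustration of the grounding idea Beswick mentions, the sketch below shows one common way to do document Q&A: text extracted from an uploaded document (or from fresh search results) is passed into the prompt as context, and the model is instructed to answer only from that context. This is a hypothetical sketch, assuming the same Azure OpenAI client as above; the function and variable names are illustrative and are not LenAI's actual code.

```python
# Illustrative only: grounded document Q&A, with retrieved passages placed in the
# prompt so answers come from the document rather than the model's memory.
# The AzureOpenAI client is constructed as in the previous sketch.
from openai import AzureOpenAI


def answer_from_document(client: AzureOpenAI, deployment: str,
                         question: str, passages: list[str]) -> str:
    """Answer a question using only the supplied document passages."""
    context = "\n\n".join(passages)  # text extracted from the uploaded document or a web search
    messages = [
        {"role": "system",
         "content": ("Answer using only the context below. "
                     "If the context does not contain the answer, say you don't know.\n\n"
                     f"Context:\n{context}")},
        {"role": "user", "content": question},
    ]
    response = client.chat.completions.create(model=deployment, messages=messages)
    return response.choices[0].message.content
```

Supplying current, high-quality context this way is what reduces hallucination: the model is constrained to restate what the document or search results actually say.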

Vivek 

It's very useful in our consulting business. We're using it in our private capital teams to industrialize a lot of the way in which we do digital intelligence work for private equity firms. So much of what our case teams do relies on rapid ingestion of a lot of data: client data, internal data, first drafts. So, it's been a bit of a game changer. I know you've been speaking to a lot of your peers, you know, other CIOs at large companies across the world. What has the conversation on this topic been like?

Paul 

From the conversations I've had, I think a lot of bigger companies are still trying to work through the security and risk side of this and haven't quite cracked how to deal with all the implications and controls and governance parts of it, including how to implement it securely. I don't think it'll take them long. I think we'll see a surge of this sort of capability being brought to a lot of organizations just because there's so much demand, but it's not a trivial thing to implement. So, I think those sorts of capabilities will be increasingly accessible. But the underlying challenges remain: how do you implement this securely, how do you deal with the hallucination problem, the data governance problem, the risk of leakage of information, not just outside of your organization, but often actually between different parts of your own organization. These aren't particularly simple problems to deal with. A lot of the challenge of how you push it out more broadly actually comes down to trust and the strength of the culture that you have, especially the strength of the risk management part of your culture. Within Marsh McLennan, a lot of the work that we do is associated with understanding and managing risk. And we think we have a very strong culture on that front, which has been quite helpful in feeling comfortable that we can put these sorts of tools into people's hands. And with the right guidance, we can trust them to make the right kinds of decisions about how to use them.

Vivek 

I imagine the other thing it is teaching us is that, you know, right now it is ChatGPT, but there will always be a set of emerging technologies that really make us question how we work, how we use these things, how we deploy them quickly. I imagine we've learned a lot about confronting something like that, getting out on the front foot, as you said, and disseminating it through our large global organization. What would you say are the biggest lessons?

Paul 

The frontier will evolve. And we've tried to make sure that we've built this to be future-proof and that we can swap out the back-end models relatively easily as that marketplace evolves. More broadly, in terms of getting out ahead of this type of demand, it is very difficult, I think, in a technology organization, but also a very common position, to be on the back foot, where what the business wishes it could do is ahead of what you are able to provide to them. And we could have found ourselves in that position on this technology.

Vivek 

Sure, yeah.

Paul 

And I think by and large, we didn't. Actually, that's the culmination of a number of years' worth of effort, not just to be able to bring this technology in quickly, but to be able to increase the speed with which we can bring any technology in, and then build the apparatus around it that actually allows us to push it out to people. So, for instance, we have spent a few years building out a standard deployment platform. That means we can stand an application up from scratch in about 25 minutes, once you've filled in four or five fields on a form. We weren't there if you go back two or three years.

Vivek 

We definitely weren’t there a while ago, yeah. So, Paul, you said this was done in a day. I'm sure it took an enormous amount of effort. Obviously, it continues well beyond that. I know there's so many teams working on it. Tell us a little bit about what's been happening in the background.

Paul 

Yeah, I think the reality is it's the culmination of the work of a lot of different people, and that last day was the push to the finish line, built on the foundations everyone else had laid. My security team got on top of understanding the risks associated with deploying these services way back in the early spring. They were starting to understand how we could deploy it securely, and they were in problem-solving mode, helping us design an approach that was going to keep things secure. I mentioned the work we've done on building a platform. Our engineering team has spent, you know, years making sure that platform is in place, ready to go, secure, and meets all our requirements. My development Center of Excellence has built the template application and all the pipelines and the automation that let you get something up in 25 minutes. My core API team have made sure that all this stuff is wrapped up and secure. So, all of those pieces had to be pulled together to get us to the point where we could sort of rush to the finish line. Our innovation center in Dublin has been behind a lot of the work that we've done on this, and that's an area where we're building up an increasing amount of AI-focused expertise, both to launch this product and then build out from here for the LenAI roadmap and the capabilities that we'll bring to it, but also all those scalable, more industrialized solutions that are going to run off the same technology. So, it's taken lots of bits of the team; it's taken the team really pulling together across the different organizational groups, in a way we've got better and better at over the last few years. And, you know, the ability to do this is actually a real reflection of the culture of the team and its ability to focus on a problem across many, many different parts of the team, to get to something like this quickly.
