RSM's talkBIG Podcast
talkBIG is RSM's business finance and economics podcast, helping you save, create and protect your wealth. This podcast delves into real-life stories and inspires listeners to talk and think BIG. This is edutainment at its finest, suited for financial geeks or newbies. Tune in and subscribe to get your hit of personal and business money talk.
AI and Data Governance
In this episode of talkBIG, Andrew Sykes explores the critical role of data governance in AI with experts Srdjan Dragutinovic and Gerard Sayers from RSM Australia. Discover how effective governance transforms AI into a strategic enabler, preventing risks like bias and reputational damage.
Learn practical steps to align governance with your strategy, ensuring AI success and innovation. Tune in to gain insights on embedding governance into your AI initiatives for a competitive edge.
Key Highlights:
- Good governance transforms AI from a compliance chore into a strategic enabler that accelerates innovation while managing risks.
- Traditional frameworks are ill-equipped for unstructured and shadow data driven by AI.
- AI's speed amplifies risks like bias, making proactive ethical planning essential.
- Clear ownership and AI literacy within the organization are crucial for effective governance.
- AI-specific regulation is evolving; proactive alignment with emerging standards is key.
- Automated AI systems require active human oversight to ensure ethical and effective outcomes.
Thanks for listening! Visit the RSM Australia website to ask the hosts a question.
Joining me today are two experts in this area from RSM Australia, Srdjan Dragutinovic and Gerard Sayers. Srdjan is a partner in the Data Analytics division at RSM Australia with over 20 years of global experience in advanced analytics supporting strategic and operational decision making. Srdjan assists clients in becoming data-driven and insight-led, linking insights to outcomes and driving business value through the application of data and analytics. Gerard is a Senior Manager in Data, Analytics and AI at RSM Australia. He leads RSM's responsible AI initiatives, helping clients adopt AI safety standards and build trust in emerging technologies. He specialises in developing proofs of concept and guiding organisations through responsible AI implementation, ensuring ethical, transparent and effective use of AI. All very topical at the moment. How are you, Gerard and Srdjan?

Very well, thank you, Andrew. It's a pleasure to be here and talk to you today. Looking forward to the conversation.

Yeah, it's a very interesting topic. You can't get through a day anymore without talking about AI. AI is no longer a futuristic concept. It's here, and it's reshaping how business operates. That's why strong governance is essential to ensure it's effective, ethical and aligned with organisational goals. How does AI change the way leaders make decisions at board level?

Well, I think a lot of boards are really grappling with this balancing act of governance and productivity uplift. So it's questions like: how much autonomy do you give AI, and how do you govern that without slowing things down too much? A recent McKinsey report highlighted this gap, where around two-thirds of organisations are still playing in that pilot phase. This raises the discussion at board level of how you actually operationalise these initiatives in order to reap the benefits and move beyond the pilot phase. One of the other things we're seeing is a physical reality that's hitting boards too. We've watched how the global hyperscalers, the Googles and the Metas, are cornering the market in GPUs and memory, given the vast computing that's required to build these generative AI models. That leads on to another interesting fact, which is that by 2030, Australian data centres are projected to consume about 6% of our national energy grid. So if you're on the board of a company, you're not just thinking about your AI strategy and its practical implications for your business. You're looking at how to balance these massive computing needs against your net zero commitments. It becomes an energy strategy in addition to an AI strategy.

On that, I'll ask you: why is governance not just a compliance exercise, and how is it a strategic enabler?

Well, I think governance really needs to wrap around AI, because it's the guardrails that allow you to accelerate your AI initiatives. Without that governance, it's the Wild West, and you're going to run into a lot of problems, as a lot of organisations have that jumped on the AI bandwagon without governance in place. Look at the likes of Deloitte, who had a recent faux pas and obviously didn't have that governance in place, with human oversight, a human in the loop if you like, which is one of the ten guardrails recommended in Australia. They published a report for the government which included references to legal case studies that were fictional. The AI had hallucinated them. So having that governance in place is hugely important if you want to reap the benefits of AI.

It really comes back to reputational risk, Andrew. Governance provides you with a safety net. It's not foolproof, but it provides some protection, particularly against reputational risk, which is what we're seeing from the likes of Deloitte, or Woolworths even last week. There are plenty of examples where we've seen these sorts of issues coming through. Governance helps bring trust to these AI processes, which is a really important aspect and should be part of any strategy.

Yeah, it is a super important part of AI, isn't it? It's very easy to look at AI and think it's almost a miracle product that will do a lot of our work for us. But it's not always right. It's not always accurate. So is it fair to say that a good corporate governance regime over the top is how we stop becoming an example in the media?

Yeah, I think so. And there are a few pieces there in terms of important aspects of that strategy, including making sure you've got someone who is accountable for AI across the organisation, and then also thinking about the guardrails. There are voluntary standards, and there are related ISO standards as well. Using these within the organisational context of AI and how you're going to use it, they're there to protect you from some of these risks. And to emphasise the point I made earlier, they really allow you to go at full speed with those initiatives, because you've got confidence that the guardrails are in place to stop some of these things happening in your organisation.

That is the argument of risk versus opportunity. While AI opens up incredible opportunities, it also introduces new risks: ethical, legal, reputational. Then we get the question of how leaders strike the right balance between innovation and their responsibility in all those areas. AI, as we are starting to see, carries a whole bunch of ethical risks: biases in our algorithms, fairness in outcomes, and also legal and compliance exposure. The examples you raised before showed the reputational risk. And we're trying to balance the speed of innovation with responsible business practices. So taking all of those considerations into account, what are the top risks executives should be aware of when deploying AI?

I think you mentioned a few there, Andrew. There are obviously hallucinations, which we just talked about with the Deloitte example, and I think we'd be here all day if we started talking about all of the examples we're continually seeing. You've got the Robodebt example, where there was a lack of human oversight in how the models were issuing debts, and had done so illegally. You've got models that are built on biased data. There are examples in the HR and recruitment space where models were unintentionally fed biased data, the AI trained and learned on that data, and it implemented biased decisions about who would and wouldn't be recruited. That may not be intentional. It's that the nature of AI makes it difficult to spot those biases, and without the guardrails and processes in place to test the AI models, it's very difficult to pick up. Early in my career, I had it drummed into me that if you can't explain a model, you shouldn't be using it. You should be able to explain to a client, to the business, what the factors are that are driving that model and why the model is coming to the outcome that it is. That's still important 30 years later. It's just more difficult now with AI to unpick what those drivers are. I remember when neural networks were introduced in the late 90s, they made it very difficult to unpick those drivers compared to the traditional machine learning techniques we used before. So a lot of those things that were around 30 years ago are still relevant today.

Yeah, and reflect on the different business environment with social media. When we all started our careers, you could almost fail in private a little bit. Now innovation is very fast-paced and very public, and social media means it can blow up very quickly. So as a leader, how could you foster innovation through the use of AI in that kind of environment without compromising ethics or compliance?

As I mentioned, the voluntary guardrails and the standards give you an indication, or set you on the right path, in terms of what you might need to consider from an ethics perspective. The voluntary AI guardrails are aligned with the AI ethics principles, and some of the key pieces of those are around human rights and around bias. Whether it's discrimination or impacts on individuals and their livelihoods, you don't want adverse outcomes for people like what we saw with Robodebt. Not that Robodebt was based on these sorts of models; it wasn't, it was something more fundamental than that. But we wouldn't want to see those sorts of outcomes eventuate again through successive models being developed. Those frameworks provide you with the ethics and compliance foundations for where you need to head.

I'm just going to add to Gerard's comment that building these guardrails into workflows is a good way to make sure they're followed, so that they're not voluntary in the sense of it being up to a person whether they follow that process or not. Ensuring that bias is checked for in models in an automated way, and that it gets flagged to a human to review, for example, is a good way to ensure that bias isn't creeping in.

You both mentioned voluntary guidelines there. Are there any relevant regulations or laws that we are mandated to stick to in this area?

Yeah, I might jump in on this one. The government's approach has been to rely on a lot of existing laws, such as privacy laws, data protection and consumer laws. These laws already exist, and the idea is not to duplicate or supersede them with specific laws for AI. An approach was considered to introduce mandatory guardrails, so those voluntary guardrails would have become required in certain high-risk contexts. But the direction of the government has been not to hand those down as regulation, so as not to stifle innovation. It does set a pretty good expectation, if you like, of where we might head in future from a regulatory point of view and what we might expect to be regulated. And we can see, for example, that in the EU there is regulation of high-risk uses of AI, and again that gives you a good lens: the areas that have been regulated overseas are where they might look to regulate in Australia as well. But at the moment there isn't AI regulation as such. There are a few requirements on some government departments to have a responsible person in place, but aside from that, there aren't any particular requirements around the use of AI.

That approach by the Australian government really emphasises the point we talked about earlier: that grappling between stifling innovation and managing risk. You can see the government is grappling with the same question that boardrooms are. Given the importance of AI, it wouldn't be surprising to see a whole suite of regulation come down from government over time. It's such a changing area. And it's also changing how we use data. AI doesn't just use data, it depends on it, and that dependency can create new pressures on how data is governed. So if we unpack the unique challenges that AI presents, what are they? What are some of the new challenges that AI is introducing to traditional data governance models?

Well, there's certainly been an uptick in interest in getting those foundational layers right: data governance, data quality. But what the explosion in AI has done is really increase the amount of unstructured data that can now be used. Previously it was very difficult to use handwritten documents, PDFs, images, videos. It could be done, but it was time-consuming and difficult. That's now been democratised, and there's been an explosion in the availability and use of that data. Traditional data governance probably wasn't designed with that in mind, because it wasn't really a thing when data governance frameworks were designed. So that's one of the areas where it needs to be adapted, and when you're looking at AI governance, that's a big factor. You've also got shadow data, which is a big threat as well, and that's linked to that unstructured information. All of this information that's sitting around and really wasn't being used, organisations have always had. It's just that now it's a lot more accessible, to people using it in good ways, but also in negative ways. That creates additional risk.

It's also interesting because it's more accessible in some ways, but less accessible in others. As you said, with shadow data, it might be sitting on your individual profile or your individual machine, but it's not shared across the organisation. And it's of varying quality and consistency, given those unstructured documents can be in all sorts of forms. There can be drafts and duplicates and all sorts of things; it's not well-curated data we're talking about here. So if you want to get power out of the AI, you want to curate that data into a central repository, in its finalised state. If you're thinking about how you'd use it for, say, proposals in an organisational context like ours, you might say you only want to see the final version of those proposals. You don't want to know about all the drafts, and you don't want them creeping into the inputs of the model. So you've got to bring that information together, and that's where it's different to what we would have done with data governance before: it's in structured systems, it's in ERP systems, it's in our systems of record. There is data across the organisation, and as Srdjan said, it can be in physical documents as well. We might have them in physical archives. There might be a tremendous amount of value in the data that's there historically. We haven't used it; we've just always put it in the back of the filing cabinet and never accessed it. But if we could draw that out, there could be tremendous value in certain business domains. For example, take occupational health and safety data. Incidents that happened 10 years ago, those risks are just as relevant today as they were then, or could be, for a mining company for example. So if you can access that information, bring it forward and use it for context within your AI system, it could bring a lot of value to your business.

So, Gerard, when you're talking there about ensuring the quality and integrity of the data that AI systems are learning and evolving on, is part of that ensuring that you're not just bringing all sorts of data in from outside your organisation?

It's as much within your organisation as outside. You want the AI to draw on good-quality information, whether it's external or internal. It doesn't particularly matter, but it needs to have relevant context to provide you with a good answer. You also need data lineage, to that point, Andrew. So knowing what the lineage of the data is and how models are acting on that data. You can't satisfy the ethics principles in the guidelines without knowing the lineage of that data, because otherwise you don't know what the models are making their decisions on. And you've also now got agentic AI, which is accessing data and bypassing traditional governance processes that were very much centred around how humans access that data. You might have an agent going around the organisation pulling data from here, there and everywhere, and traditional governance frameworks may not be up to the task of tracking and picking that up.

Yeah, so those are some of the challenges, and I think it'd be good to talk solutions and discuss what good governance in AI looks like in practice, and how we can make it actionable rather than theoretical. What are some of the essential pillars of a strong data governance framework?

I think, as Gerard said earlier, having ownership: having an AI owner. Traditionally in data governance you might have a data governance council or data stewards within the organisation. Having similar counterparts within the AI sphere who work with those people in data governance is a really important step. That creates ownership of AI and data governance within the organisation. Without it, people point fingers and nobody has ownership. Another important point is literacy, both of the data and the data quality, but also around the AI use cases within the organisation.

Just on that point, it's very much about making sure there is a key point of responsibility for the AI and the data within an organisation. You can't just deploy it and let it run on its own.

Yeah, that's right. It's an important point. One of the guardrails we talked about, and we mentioned guardrails before, is human oversight. And the important aspect of human oversight is having a meaningful interaction with the system: being able to interpret results or outputs and being able to meaningfully act on that information. And having, as we talked about, lineage: being able to trace it back and ask, is this of good quality? Can I rely on the outputs being produced here? That enables meaningful oversight as much as anything, so it's a very important aspect. The other one I wanted to mention is tying governance back to strategy: the governance you put in place really has to align with the strategic outcomes you want to achieve. You need to understand that AI is not the outcome; AI is an enabler. What is it going to help you achieve? Is it going to help you achieve more revenue? Launch new products into new markets? Are you going to work with your customers differently and have better conversations with them? Are you going to reduce your costs in your call centres or in your workflows? Where do you see the opportunities within your organisation, and how do they align with your strategic initiatives? And then think about the governance from that perspective: how does governance support us in achieving those outcomes? I think that's a very important aspect of it.

That sounds very much like: you work out your strategy and then you implement your AI, and the governance is key to keeping control of it and tying it back to the strategy to make sure it's being implemented. Is that fairly correct?

You might even say your strategy is your accelerator and your governance and compliance are the brakes. But the brakes are there to keep you safe and to allow you to stay on the road, right? So they're just as important as each other. And really, AI is no different, as Gerard said, to any other enabler. Analytics and AI have been around for a long while, and the process still hasn't changed. It's about understanding what you want to achieve as a business, what the highest-value use cases are, how you prioritise those, and then working back from there. Is it AI that's going to help us achieve that? Is it BI reporting? Is it something else? AI is purely a way to achieve the strategic objectives you set at an organisational level.

Yeah, and governance and compliance have always been a challenge. Have you got some advice on how we can make governance practical and not just a policy document?

Going back to the point I made earlier about having ownership: at least in data governance, the most challenging aspect is not actually building the framework or creating the documents, it's having it work in practice. So there's a change management piece. It's assigning responsibilities across the organisation around ownership of data and the governance. It's about actually living the data governance and evolving it over time. It's not a static thing; it's a continually evolving process.

Which is going to keep changing with the pace of AI and as new AI is implemented. If we went back before the previous wave of AI, we very much had control of the technology and systems used in our organisations. Now, with mobile devices carrying AI, we don't always have control of that. Does that impact the governance needed?

Look, devices are just one aspect of what governance might look like. It's something that needs to be built in as AI becomes more accessible, not just on computers and handheld devices, but also on other devices around your organisation or home. Governance is going to have to cover all of those technologies, the ones around now and the ones in the future. It's something that's going to need to be dealt with.

And I think with a lot of this governance area, if we don't deal with it and something goes wrong, there are going to be a lot of questions as to why businesses or organisations didn't. If we're not getting in front of it and putting the frameworks in now, there could be some significant consequences. So if you both had one piece of advice to give to organisations thinking about AI adoption, what would it be?

The really important thing to get across is that it starts with understanding your organisational strategy and what you're trying to achieve, then working through systematically where AI can help you achieve those strategic objectives. From there, work backwards: create a small pilot to test, and then scale where you find the value is being delivered.

Yeah, similarly, the governance needs to align with the strategy. You don't want to put in additional governance layers where you don't have the risk, and the risk is really associated with taking advantage of the opportunity. So making sure there's alignment between the organisational objectives, the strategy and the governance is key. And then think about those assets within the organisation which you perhaps haven't considered before; this might open the door for you to tap into them.

Thank you both; some really interesting points were raised today. It's very easy to get excited about implementing new systems and new AI, but we certainly think you need to include the governance and compliance aspect. Srdjan and Gerard, thank you very much for your insights today. I hope you've enjoyed being on our podcast. Any final words from either of you?

Thanks for having us, Andrew. Really appreciate it.

Thanks, Andrew. It's been a pleasure to talk to you.

Terrific, thank you. I invite all our listeners to tune into the next episode of talkBIG. Subscribe on your favourite podcast platform, and if you found this episode helpful, please leave a review. Thank you for joining us.
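One practical idea raised in the episode is building bias checks into workflows in an automated way, so a human is flagged to review when something looks off. As a rough illustration only (not RSM's methodology, and not one of the Australian guardrails verbatim), a minimal Python sketch might compare a model's selection rates across groups and flag the run for human review when the gap exceeds a chosen tolerance. The 0.1 threshold and the group labels are illustrative assumptions:

```python
def selection_rate(decisions):
    """Fraction of positive decisions (e.g. shortlisted = 1) in one group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates across the groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

def check_for_bias(decisions_by_group, threshold=0.1):
    """Automated check wired into a workflow: if the gap between groups
    exceeds the threshold, flag the model run for a human to review."""
    gap = demographic_parity_gap(decisions_by_group)
    return {"gap": round(gap, 3), "needs_human_review": gap > threshold}

# Example: a recruitment model's shortlisting decisions, split by group.
result = check_for_bias({
    "group_a": [1, 1, 0, 1, 0, 1],  # 4 of 6 shortlisted
    "group_b": [0, 1, 0, 0, 0, 0],  # 1 of 6 shortlisted
})
print(result)  # the large gap trips the threshold, so a human is flagged
```

Demographic parity is only one fairness notion; in practice the metric, threshold and review process would come from the organisation's own governance framework.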
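The episode also touches on curating unstructured documents into a central repository so that only finalised versions, not drafts or duplicates, feed an AI system's inputs. A minimal sketch of that curation step, using hypothetical document fields (`name`, `version`, `status`) purely for illustration:

```python
def curate_final_versions(documents):
    """Keep only the latest finalised version of each document,
    dropping drafts and superseded versions before they reach the model."""
    latest = {}
    for doc in documents:
        if doc["status"] != "final":
            continue  # drafts never make it into the model's inputs
        current = latest.get(doc["name"])
        if current is None or doc["version"] > current["version"]:
            latest[doc["name"]] = doc
    return list(latest.values())

# Example: two finals and a draft of one proposal, plus an old OHS report.
docs = [
    {"name": "proposal_acme", "version": 1, "status": "draft"},
    {"name": "proposal_acme", "version": 2, "status": "final"},
    {"name": "proposal_acme", "version": 3, "status": "final"},
    {"name": "ohs_report_2015", "version": 1, "status": "final"},
]
curated = curate_final_versions(docs)
print([(d["name"], d["version"]) for d in curated])
# [('proposal_acme', 3), ('ohs_report_2015', 1)]
```

In a real repository this filtering would sit alongside lineage metadata, so reviewers can trace each curated document back to its source, as discussed in the episode.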