MetaDAMA - Data Management in the Nordics

3#9 - The EU AI Act - Balancing AI Risks and Innovation (Eng)

January 22, 2024 | Laiz Batista Tellefsen, PwC Norway | Season 3, Episode 9

«Companies are already wanting to position themselves ahead of the legislation, because they see the value of actually adopting best practices early on and not waiting for enforcement.»

Prepare to dive into the risk-based approach to regulating artificial intelligence with the insights of Laiz Batista Tellefsen from PwC Norway, who brings her legal expertise in AI to our latest episode. We tackle the European Union's imminent AI Act with its sophisticated risk-based approach, dissecting how AI systems are categorized by the risks they pose.

Norwegian companies, listen up: the AI Act is on its way, and it's time to strategize. We discuss the necessary steps your business should consider to stay ahead of the curve, from embracing AI literacy to reinforcing data privacy. Laiz and I dissect the balance between innovation and risk management, and we shed light on how cultivating a culture of forward-thinking can ensure safety doesn't come at the cost of progress. This segment is a must for businesses aiming to turn compliance into a competitive edge.

Zooming out to the broader scope of AI governance, we offer advice for maintaining the delicate dance between compliance and cultivating innovation. Discover the vital guardrails for capitalizing on AI's potential while readying for the unknown risks ahead. We peel back the layers of the AI Act's impact on the legal sector, unearthing the nuances of intellectual property rights and data transfer laws that could reshape your organization's approach to AI. Join us for a conversation that promises to leave you not only prepared for the AI Act but poised to thrive in an AI-centric future.

Here are my key takeaways:

  • Looking at AI from a risk perspective is the right way to tackle the challenges within.
  • A risk-based approach ensures that development is not frozen.
  • Our job as experts in the field is to demystify compliance within the use of AI systems.
  • Find the right balance between compliance and innovation by assessing potential risks.
  • "The AI Act is part of the European Digital Strategy and is the first comprehensive legal framework for AI in the entire world.»
  • CE marking requires continuous monitoring of the system for performance and compliance, as well as registration in a public EU database.
  • Have a holistic approach to AI: How does it fit in the wider setting of my company, both from a data, business and cultural perspective?
  • There are big differences in companies' maturity in operationalizing AI for value creation.
  • The focus on risk and safety does not conflict with the need for speed in AI adoption.
  • It's not about starting from scratch, but about understanding the actual use cases and needs.
  • The AI Act can foster innovation, because you know what your framework is.
  • "Make sure that the date you are using reflects the diversity and the reality of the people and situations that the AI system will encounter."
  • Observe and control data quality and distribution continuously.

What to consider now:

  • Make sure the company has very good control of known risks, like privacy.
  • Make data risk awareness part of your culture.
  • Understand roles and responsibilities in your organization towards data risks.
  • Have your policies updated.
  • Ensure your stakeholders are well trained.

Speaker 1:

This is MetaDAMA, a holistic view on data management in the Nordics. Welcome, my name is Winfried, and thanks for joining me for this episode of MetaDAMA. Our vision is to promote data management as a profession in the Nordics and show the competencies that we have, and that is the reason I invite Nordic experts in data and information management for a talk. Welcome to MetaDAMA. Today we're going to talk about a topic that has been discussed for years. A lot of us have waited patiently; others have been a bit afraid, a bit uncertain of what's going to happen. What we're going to talk about today is the European Union's AI Act. We're not just going to talk about the AI Act itself; we're also going to talk, in a broader picture, about how you can take a risk-based approach to data and AI. So with me today I have Laiz Batista Tellefsen. Welcome.

Speaker 2:

Thank you. It's a pleasure to be here. I've been looking forward to being featured on this podcast. I'm really glad to join you today.

Speaker 1:

Glad to have you with us. So, a quick intro to the topic. The AI Act has been a long journey; I think it started already in 2018, five years back, when the first AI expert group called for a European AI Alliance. And what do we want to achieve? I think the issues have been apparent for years: there is a clear need for a foundation and for sound conditions for how we want to develop and use AI technology going forward, and for how we can, at the same time, ensure that the benefits of AI development outweigh the risks. At this stage, we are looking at an AI Act that might be adopted as early as 2024, maybe later, but probably by the end of this year we'll get a clearer picture of what is ahead. Laiz is working with the AI Act, preparing companies for what is coming, and I'll give you the chance to introduce yourself.

Speaker 2:

Yes, so I work at PwC Norway, in the law firm, and we assist, as you said, customers in being compliant with their use of technology, making sure that they can harness it in a strategic way, in a way that prevents and mitigates risks and ensures that they can really create value from the use of the technology. I'm originally from Brazil, but I've been living in Norway for almost 10 years now. Yes, that's me in a nutshell.

Speaker 1:

How have you adapted to the Norwegian weather? Are we going into a cold period now?

Speaker 2:

I would say I'm quite adapted to Norway by now, and I actually love that there are such well-defined seasons, so you can enjoy summer and winter. It's really cozy to see the seasons changing so clearly. We don't get that in Brazil; it's basically a never-ending summer, or a mild summer when it's wintertime.

Speaker 1:

I have the feeling that it's never-ending autumn in Norway, like a month of autumn.

Speaker 2:

Yeah, perhaps.

Speaker 1:

What are you doing when you don't work with AI and data? What are your hobbies?

Speaker 2:

I love reading, as you would expect from a lawyer. I love being with my family, going on walks in nature and just the usual, no eccentric hobbies.

Speaker 1:

That's too bad. We have a thing on this podcast of trying to figure out how many guests actually play instruments.

Speaker 2:

I don't play any instruments. I'm terrible at it. I do swimming. Nothing more exciting than that.

Speaker 1:

With your law background, where did the interest in data and AI come from?

Speaker 2:

It wasn't planned. When I moved to Norway, the experience I had from Brazil was actually with tax law, and Brazilian tax law is quite complex. It was also something I worked with briefly when I first arrived: I started working for a tax company, a startup, which was really interesting. After that, I did a second master's at the University of Oslo in public international law, where I also took subjects within tax law. So it started there. After that, I worked for five years at a local technology company that is artificial-intelligence based, Boost AI. It was just natural, because I was faced with the problems and then I had to understand more. The better you become at it, the more interested you get; the more you understand, the nicer it is to work with. It was just a natural path.

Speaker 1:

We've prepared quite a complex topic today: the AI Act and its current status. That's one part. But we also wanted to take a broader perspective and talk a bit more about a risk-based approach to data and AI. From your experience and the work you are doing, how can we think about the challenges and opportunities within data and AI from a risk perspective?

Speaker 2:

I think this approach that the European Parliament has adopted, of regulating AI systems based on risk, is quite smart. I think it's the right way to tackle both AI development and data management. It saves us a lot of resources and protects us much more effectively when we set the right priorities based on how much risk a system actually poses to fundamental human rights.

Speaker 2:

Obviously, the tasks when it comes to both data management and development always seem a bit overwhelming at first, so it's a very good approach. When you first understand which AI systems are being used throughout an organization, you can categorize them based on risk, set the correct guardrails depending on what those risks are, and use more resources, more investment and more time to tackle the systems that are most threatening. I guess it was also an attempt to make sure that development is not frozen and that innovation can still be fostered, so that you don't have the same strict requirements for a spell checker or a puzzle as you will have for, you know, medical uses of AI systems or AI assistants used in education. So I think it was really well thought out and smart.

Speaker 1:

Since we're already talking about the AI Act and the approach to it, maybe you can give us a bit more of an introduction to what the Act is and where we stand today.

Speaker 2:

Sure. So, as you said before, the Act as of now is a proposal for a regulation. It's anticipated to be adopted by the end of this year or the beginning of next year, and after that it's expected to take full effect within 24 months of entering into force. The Act is part of the European Digital Strategy, and it's the first comprehensive legal framework for AI in the entire world, which makes it very special in the sense that it sets global standards and is already influencing other jurisdictions. It will also apply to all AI systems placed on the European market or affecting people in Europe. It's a little bit similar to the GDPR in that it goes beyond borders, which makes it so interesting and emblematic, not only for Europe but worldwide.

Speaker 2:

The Act has this risk-based approach, and it's also horizontal, which means that it applies to every sector and industry, but it will impose different levels of obligations and restrictions depending on the level of risk that the system poses to fundamental rights. In my opinion, one of the greatest contributions of this Act is actually the creation of a proper governance structure that will force EU member states to invest in and build the necessary competence to monitor AI systems and to enforce compliance with both already existing laws and the Act, because the Act calls for the creation of national supervisory authorities, notified bodies and much clearer processes for how AI systems will be monitored. It also brings a conformity assessment aspect, which is like the CE marking that we have on toys in the EU. So it will force states to be better prepared to enforce regulation and oversee AI systems, and it will also force companies and businesses to be more aware. It gives a certain clarity about what's expected, which I find very positive.

Speaker 1:

Thank you for a great introduction. That's one thing I wanted to talk a bit more about: the conformity assessment you mentioned. There has been some talk about the Act being just another version of the GDPR, just for artificial intelligence, but within the GDPR we never went as far as a conformity assessment the way the Act does. So who is this conformity assessment for, and what are we expecting to gain from it?

Speaker 2:

Yes, it's true that the GDPR does not have a similar process.

Speaker 2:

The CE marking is only for high-risk AI systems, which are those subject to the most stringent obligations under the AI Act, some of the main ones being the obligation to register those systems in an EU database, which will be publicly accessible, and also, as we talked about, the requirement to obtain a certificate of conformity from a notified body.

Speaker 2:

So, in order to circulate high-risk AI systems within the EU, you're going to need to affix the CE marking. I think the intent with that is to have a clear process, both so authorities can recognize which high-risk AI systems are being placed on the market and to enable much more stringent monitoring. Once you need the CE marking, it forces you to do continuous testing and monitoring of the system for performance and compliance, and to have all the documentation in place. I guess one of the reasons the AI Act has that is because high-risk AI systems normally do pose very real, considerable threats, and this is one way of minimizing and trying to mitigate the risks associated with them.

Speaker 1:

Just to be clear, this is something that everyone who is active in the European market has to comply with. It's not just for European companies or European organizations, but for everyone actively participating in the European market. So, from the Norwegian perspective, what effect do you think the AI Act will have on Norwegian companies and organizations?

Speaker 2:

The Norwegian government sent a position paper to the European Commission back in August 2021, which is a good reason for us to assume that the authorities consider the AI Act to be relevant for the European Economic Area.

Speaker 2:

And before the AI Act can be introduced into Norwegian law, it has to be incorporated into the EEA Agreement.

Speaker 2:

The timeline for this is a bit difficult to estimate, but we know that the Norwegian government has actually set up a dedicated working group, and we know that artificial intelligence is a priority on the Norwegian government's agenda. So I think it's reasonable to hope that the AI Act will take effect in Norway very shortly after, or almost at the same time as, in the EU. It will likely affect Norwegian companies just as much as any other European organizations, and I think it's already having an effect. I see it in my daily work: companies are already wanting to position themselves ahead of the legislation, because they see the value of actually adopting best practices early on and not waiting for enforcement to be ready. The value of that comes from two sides. Not only will you be compliant by the time the regulation is enforceable, but you are also protecting your company, because those best practices are there for a reason: they really can be effective in mitigating risks and creating value in the organization.

Speaker 1:

But let's stay a bit on this one, because I think this is quite interesting. One really huge advantage of being a consultant is that you really see patterns emerging: you recognize certain patterns in certain companies or organizations that will eventually emerge elsewhere. So, with your insight and what you have already seen through your work, what do you think Norwegian organizations should consider now to prepare?

Speaker 2:

I think it's very important. We work with big corporations, like oil companies, for example, and obviously they already have existing risk management systems, which is a very good starting point. I would make sure that the company already has very good control of their privacy, for example; make sure that their routines are actually effective, that they are not only a bunch of policies and strategies created back in 2016 that you update just by clicking "update", without making sure that your employees are really aware of the content of all those policies and really know the processes around them. Take privacy, for example.

Speaker 2:

It's a big concern and I see that a lot of organizations still have difficulty in making that part of the culture of the company and making sure that all stakeholders in the organization understand their role in implementing privacy.

Speaker 2:

I guess that's a very good start, because a lot of the requirements of the GDPR are common to the AI Act, for example the obligation to inform users, and the obligation to use data adequately and protect users' privacy. So I would start there: making sure that your privacy controls are in place and effective, that your stakeholders are well trained, and that your policies are updated to tackle new threats that are already present. Artificial intelligence is already being used; it's not waiting for the AI Act to come into force, so the threats are already there, both cyber threats and internal threats like human error. Tackle those not only by creating the right policies, but also by making sure that they fit your organization and that everybody is well aware of how they function. That's key, and it's a continuous process, right? It's not something you do once and then put to rest. It's really about building a culture of constantly monitoring that your risk management system is effective and working, and that your privacy controls are in place.

Speaker 1:

Really interesting, and I really like the point of connecting risk awareness in the company with data and AI literacy. We have talked about data literacy programs for years; now AI literacy is coming up as a topic. So what understanding does everyone in the company need to have of AI systems to balance the risk part on the one side against the opportunity part on the other? How can literacy programs, which build knowledge and understanding of AI, be combined with the risk awareness that you talked about?

Speaker 2:

Yeah, I think it's all one. At PwC, we have a very holistic approach to an AI journey; we work across business, technology, cyber and legal. I think the number one risk of an AI system is that you don't identify the correct use cases for it. That means you're going to spend a lot of money to create zero value, and even if that AI system is extremely compliant, it will never see the light of day because it's not giving any return on the investment. So having that holistic approach is very important: understanding the entire journey, all the way from identifying to preparing, to executing and to continuously monitoring. It's all chained together, and that's what really makes a successful AI adoption.

Speaker 2:

Another point: a lot of companies are already using AI, and they perhaps don't have a good overview or understanding of the AI systems they have, what types of risks they are exposed to, and what opportunities they are missing because they don't identify the right use cases. There's so much money to be made and so much money to be saved when you find the right AI system to do the right task and do that implementation in a compliant, best-practice manner. So I think there's a lot of untapped potential here. Some companies are ahead in that journey, while others are still very immature.

Speaker 1:

We had a quite interesting conversation about risk and AI in the previous episode. Especially with AI, there is this notion that time is of the essence, that we need to move fast, create value and be ahead of the curve. What we talked about in that episode was that a focus on risk and safety does not have to conflict with the pace of change in the industry: forward-thinking cultures are more effective, and therefore have a better risk focus without slowing down. Still, a lot of companies look at risk as stepping on the brakes, right? What is your perspective on that?

Speaker 2:

My perspective on that is different. I believe that identifying and integrating risk has to be a central part of any successful implementation of technology. For example, at PwC I use generative AI in my daily work, with one of the most advanced tools for the legal sector, Harvey, developed on top of OpenAI technology. The problem is not using the technology; it's using the technology without being aware of its limitations, of what you are actually using it for, and of the risks it poses. So having a risk perspective does not mean that you're going to be slow or not take the technology into use. Quite the contrary: you're going to use it more effectively, use it for what it's intended for, and balance users' expectations against the output you can actually expect and the value the tool will create, while staying aware of the risks. Some risks you can live with, and you create strategies around them to mitigate; some you have to avoid. But most of all, not taking the technology into use is a risk in itself. Generative AI will change the world, in the sense that a lot of the tasks that are fundamental to many professions, and a lot of the value created by organizations today, can and should be automated to a certain extent. To do that effectively, you have to have risk management at the center of the journey, because, as we always say in the business, you get very little value from an AI system you cannot trust.

Speaker 2:

And trusting it doesn't mean that it's a perfect system that will only give correct answers and is trained only on perfect data. It's a system you can trust because you understand its limitations, you understand what it's being used for, and you have a very clear set of rules and guidance on how to use it, how to inform users, and documentation about the system. You try to get as much clarity as you can, because essentially, large language models are built on nonlinear math all the way down; they are mysterious black boxes in a way. But that does not stop us from creating guardrails, fine-tuning solutions, choosing the right use cases and the right algorithms, and cleaning the data, preparing for really good, clean use where you inform users effectively and mitigate risks in every way you can. Then you can still use the technology to the best of its potential. That's the perspective we have, at least.

Speaker 1:

This is fantastic, and now you're moving into a lot of other topics we could focus on, like trustworthy AI systems and responsible AI systems. But there's one thing I wanted to get your insight on, because you already mentioned generative AI: when the AI Act was first talked about in 2018, as I said at the beginning, generative AI wasn't a topic, at least not as much as it is today. So did anything change after, well, November last year?

Speaker 2:

Yes, there have been some changes in the latest proposals, especially in terms of the need to disclose the sources of intellectual property used to train the models, which I believe might become a second Schrems II. I don't know if you're familiar with all the issues around transferring data to the US because of the limitations on data transfers. Basically, on one side, Europe is trying to protect fundamental human rights, which obviously have to and should be protected. On the other hand, it's very difficult for these big technology companies to comply from an essentially technical perspective. So it will be interesting to see how that's actually going to be enforced in real life. In terms of the arrival of generative AI during the negotiations of the proposal for the AI Act, I guess that's the most relevant change: the requirement to disclose the use of intellectual property when training the models will probably be the most challenging one for companies to overcome.

Speaker 1:

I think poisoning of models has been a topic for the last couple of months, especially when you train models on source data that you can't really control, like data from the internet. I think you are spot on on this one: there are so many risks involved in doing that that the benefits don't really outweigh them. Let's talk a bit more about exactly that balance of benefit and risk. From your risk background and your knowledge about how to establish good risk systems and management, what would you say are the basic tools and methods you should use when it comes to AI?

Speaker 2:

I think there are some very practical steps that you can take towards more trustworthy AI.

Speaker 2:

It really starts with selecting and preparing the data that you're going to use to train and evaluate the systems. Obviously, that's very complicated, as you say, when you're talking about large language models, which are trained on all sorts of public data. But when you're going to use one for a specific use case, it's important to make sure that the data you are using reflects the diversity and the reality of the people and situations that the AI system will encounter. Also make sure that you keep track of and check the quality and the distribution of the data throughout both the development and the deployment of the AI system, constantly adjust the data as necessary, and test it for bias to make sure it's not unfairly favoring or harming some groups over others. Also be aware of which algorithm you choose to produce the outcome that your AI system aims to achieve. Picking a suitable algorithm for the kind of data and tasks you have at hand can really mitigate a lot of the risks inherent in these systems.
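
To make these monitoring steps concrete, here is a minimal sketch of what a continuous distribution and bias check could look like in practice. The synthetic data, the column names, the PSI metric and the thresholds are all illustrative assumptions, not requirements taken from the AI Act or named in the episode.

```python
# Sketch: monitor data distribution drift and group-level outcome gaps.
# All names, thresholds and data here are hypothetical examples.
import numpy as np
import pandas as pd

def distribution_drift(train: pd.Series, live: pd.Series, bins: int = 10) -> float:
    """Population Stability Index (PSI) between training-time and live data."""
    edges = np.histogram_bin_edges(train, bins=bins)
    expected, _ = np.histogram(train, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

def group_positive_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group; large gaps can signal unfair bias."""
    return df.groupby(group_col)[outcome_col].mean()

rng = np.random.default_rng(seed=42)
train_ages = pd.Series(rng.normal(40, 10, 5_000))   # feature the model was trained on
live_ages = pd.Series(rng.normal(46, 10, 5_000))    # same feature in production (shifted)

psi = distribution_drift(train_ages, live_ages)
if psi > 0.2:  # a common rule-of-thumb alert level for PSI
    print(f"Drift detected (PSI={psi:.2f}): re-check and adjust the data")

live = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=5_000),    # hypothetical protected attribute
    "outcome": rng.integers(0, 2, size=5_000),      # e.g. the model's yes/no decisions
})
rates = group_positive_rates(live, "group", "outcome")
if rates.max() - rates.min() > 0.10:  # illustrative fairness-gap threshold
    print("Positive-outcome rates differ across groups: investigate for bias")
```

Checks like these would run on a schedule during both development and deployment, which is what makes the monitoring continuous rather than a one-off audit.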

Speaker 1:

And I think that with the development of AI, there's also a development of AI governance mechanisms that are more and more visible, and there's a great collection of things you can do and implement to ensure that your models are clearer: you can keep control over the lifecycle of your models, and you can, for example, implement privacy-preserving modeling practices to protect sensitive data. So there are a lot of options to choose from, and, from my perspective, AI governance should be part of the data governance program and should be implemented maybe even before you start trying to gain value from AI.
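
As an illustration of the lifecycle-control point, here is a minimal sketch of a model-registry record an organization might keep per model version. The field names, the risk tiers and the approval rule are hypothetical examples of what could be tracked, not a prescribed standard.

```python
# Sketch: a simple registry record for controlling the model lifecycle.
# Fields, tiers and the approval gate are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str                      # accountable person or team
    risk_tier: str                  # e.g. "minimal", "limited", "high"
    training_data_ref: str          # pointer to the dataset snapshot used
    intended_use: str               # documented purpose, for users and audits
    last_reviewed: date
    approved_for_production: bool = False
    notes: list[str] = field(default_factory=list)

registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    """Gate deployment: higher-risk models need an explicit approval first."""
    if record.risk_tier == "high" and not record.approved_for_production:
        raise ValueError(f"{record.name} v{record.version} needs approval first")
    registry[f"{record.name}:{record.version}"] = record

register(ModelRecord(
    name="claims-triage",
    version="1.2.0",
    owner="data-governance@company.example",
    risk_tier="limited",
    training_data_ref="s3://datasets/claims/2023-11-snapshot",
    intended_use="Rank incoming insurance claims for manual review",
    last_reviewed=date(2023, 12, 1),
))
```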

Speaker 2:

Absolutely. I think our job as experts in this field is also, somehow, to demystify compliance within the use of AI systems. You're not going to have to burn all your current processes, policies and strategies and start from scratch. It's a lot about understanding the actual use cases and actual threats, and adapting your existing systems and risk management routines to effectively mitigate the new risks that are coming. There's a lot of mystery around it, but there are, as you say, very practical steps, imposed both by regulations and by ISO standards and different industry practices. So there are very clear steps and guidelines to follow if you want good AI compliance and governance built into your organization.

Speaker 1:

And I think that is a good and practical approach, so I really enjoyed that. You talked a bit about guardrails, and I think that's a good way of thinking about this. It's not about trying to force all your work into a compliance mechanism that actually limits your ability to innovate; it's basically about setting up some guardrails, some lines that you don't want to cross and that you can hold on to.

Speaker 2:

Yes, absolutely, I agree. That's how I see it, and that's how you can actually, at the end of the day, harness the most value from those solutions.

Speaker 1:

And this is a topic we have discussed before in a previous episode: whether we're losing innovative potential by regulating the industry. Are we regulating a technology, or are we regulating the results of AI? Those questions have come up throughout the podcast, but also throughout the work the European Union has done on the AI Act. I just wanted to hear your perspective and your practice on exactly that topic: limiting innovation versus setting up guardrails, as you said.

Speaker 2:

Yeah, well, there's always a risk that regulation will limit innovation to a certain extent. I think the way the AI Act is built limits that, exactly because it has a risk-based approach and very few of the requirements are completely new. The main new thing is around more effective monitoring. I see it a bit like having a window with no glass: do we really want to cross it? Do we want to innovate to the very edge of what we can do? It's a limit, perhaps, but it's more protective than harmful, in my opinion.

Speaker 2:

Of course, it will be adjusted with time in the way it's actually enforced; if there are big challenges for compliance, I think those will be settled with time. On the practical side, I think it's much riskier not to have any framework, especially because regulation can actually foster innovation: when you know what your framework and guidelines are, it helps more organizations dare to innovate and dare to use these systems, because they understand what's expected of them in terms of compliance. I see the challenges, but I would rather have regulation than no regulation, for sure, and that's not only because I work with it; it's because I really do believe it's better for society.

Speaker 1:

We have covered quite a lot. We talked about the Act itself and how it will affect us, both in Norway and in the European Union. We talked about a risk-based approach to data and AI, and what we see as the challenges with that, but also the opportunities within it. So, at the end of our talk, is there anything you think we should have mentioned or talked about?

Speaker 2:

Well, I think we covered most of it. I guess my call to companies is exactly that: look to your existing systems and start preparing now. Don't wait until the regulation is here to start building a culture of understanding the best practices around the use of AI.

Speaker 1:

Thank you so much.

Speaker 2:

Thank you, this was lovely.

Chapter Markers:

  • The European Union's AI Act
  • AI Act's Impact on Norwegian Companies
  • Risk Management in AI Implementation
  • AI Governance and Limiting Innovation
  • Exploring Data and AI Regulation