In this edition of CIONET Trailblazer, we are excited to present an exclusive interview with Bart Windal, Country General Manager of IBM Belgium/Luxembourg. We dive into IBM's approach to ethical and responsible AI, showcasing how the company is leveraging its legacy as a trusted technology partner to set new standards in AI governance.
While IBM might not immediately come to mind as an AI leader within our community, the company has been steadily building a comprehensive framework for AI governance, drawing on over a century of expertise. Their strategy today centers on four key areas: Hybrid Cloud, AI, Automation Platforms, and Quantum Computing.
As Artificial Intelligence continues to transform industries, IBM embraces the immense responsibility that comes with this power. Guided by ethics, accountability, and transparency, their approach ensures that AI systems are developed and deployed with trust at their core.
Join us as Bart Windal shares how IBM is shaping the future of AI governance, balancing cutting-edge innovation with the critical need for responsibility and trust.
So, what is IBM’s approach to AI Governance, as a company?
IBM’s approach includes six key elements. Let me explain.
At the heart of IBM's AI governance strategy is the establishment of a clear ethical framework for the use of AI. This framework addresses concerns such as privacy, security, fairness, and accountability. By setting out guidelines that address these critical aspects, we aim to create a culture of ethical AI development and deployment.
Explainability is a fundamental principle in IBM's AI governance approach. Explainable AI refers to systems designed so that their decisions can be understood by humans.
As we think about AI as a technology, we recognise the importance of human oversight in AI systems, especially in sensitive or high-risk decisions. We advocate for mechanisms that allow for human intervention, creating a synergy between human intuition and AI efficiency.
This approach ensures that AI decisions can be overridden or adjusted when necessary, maintaining a balance between innovation and compliance with governance policies.
Additionally, we emphasise the need for regular audits and monitoring of AI systems to identify and correct biases or errors that may arise over time. This requires ongoing testing of AI systems against new data and scenarios to ensure their decisions remain fair and accurate. By implementing robust monitoring processes, IBM aims to maintain the integrity and reliability of its AI systems.
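The ongoing testing described above can be illustrated with a small sketch: compare a model's recent decisions across groups on fresh data and flag the model for review when the gap grows too large. The metric (a demographic-parity gap) and the threshold are assumptions chosen for illustration, not IBM's implementation.

```python
# Illustrative sketch of periodic fairness auditing (not IBM's
# implementation): compare positive-outcome rates across groups on
# fresh data and flag the model when the gap exceeds a threshold.

def disparity(predictions, groups):
    """Demographic-parity gap: max minus min positive-outcome rate."""
    rates = {}
    for pred, grp in zip(predictions, groups):
        n_pos, n = rates.get(grp, (0, 0))
        rates[grp] = (n_pos + (1 if pred == 1 else 0), n + 1)
    shares = [pos / total for pos, total in rates.values()]
    return max(shares) - min(shares)

def audit(predictions, groups, threshold=0.1):
    gap = disparity(predictions, groups)
    return {"gap": round(gap, 3), "needs_review": gap > threshold}

# Example: group A is approved in 4 of 5 cases, group B in only 1 of 5.
preds  = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(audit(preds, groups))  # gap 0.8 - 0.2 = 0.6 -> needs review
```

Running such a check on a schedule, against newly arriving data, is one concrete form the "regular audits" above can take.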
From an organisational point of view, IBM promotes a cross-functional, collaborative approach to AI ethics, involving all stakeholders in the AI ethics process. This holistic approach requires a diverse set of skills and perspectives to ensure that ethical considerations are embedded in every phase of AI development and deployment.
We are not alone in this effort, so we extend the approach beyond the organisation by identifying and engaging key AI-focused technology partners, academics, startups, and other ecosystem partners. This collaboration helps ensure that ethical principles are applied consistently across different AI systems and platforms.
Thank you for this insight. As you are working with many companies, what do you see as the hardest struggles today when it comes to AI?
Today, we see three main reasons why organisations worldwide struggle to adopt AI: lack of confidence in operationalising AI, challenges in managing risk and reputation, and difficulties scaling amid growing AI regulations.
The biggest issue, in my opinion, is the lack of confidence in operationalising AI. Many organisations struggle when adopting AI. According to Gartner, 54% of models are stuck in pre-production because there is no automated process to manage these pipelines, and because there is a need to ensure the AI models can be trusted. The reasons are many: an inability to access the right data, manual processes that introduce risk and make it hard to scale, multiple unsupported tools for building and deploying models, and platforms and practices that are not optimised for AI. A lot of work still needs to be done there.
Success in delivering scalable AI necessitates the use of tools and processes that are specifically made for building, deploying, monitoring and retraining AI models. We have seen too many failures, as the pipeline is not under control or lacks essential capabilities.
Another challenge we see is risk and reputation. As clients, employees, citizens, or shareholders, we expect organisations to use AI responsibly, and government entities are starting to demand it. Responsible AI use is critical, especially as more and more organisations share concerns about potential damage to their brand when implementing AI. Increasingly, we also see companies making social and ethical responsibility a key strategic imperative.
Today, we must adhere to the EU AI Act. How do we tackle this?
A framework and tooling that automate this task go a long way towards handling it efficiently, and give you peace of mind on the topic. Too often I see organisations take it easy until the regulator's invitation for an audit arrives. Panicking at that moment is not the right approach.
It is far better to prepare: build a framework and automate your processes as much as possible, so you are always up to date.
How can IBM make the lives of our members easier and make sure they can use AI in a governed way?
As you know, IBM can help with services and/or technology to bring the right solution. For AI Governance, I am talking about IBM's watsonx.governance technology as a cornerstone. It is a framework that uses a set of automated processes, methodologies and tools to help manage an organisation's AI use.
Since the reality is that many organisations have already chosen one or more AI workbenches, watsonx.governance can drive an AI governance solution without the excessive cost of switching away from your current data science platform.
Let’s start with lifecycle governance. It involves operationalising the monitoring, cataloguing, and governing of AI models at scale. You can start manually, but soon, you will be challenged to keep it all consistent and up to date.
Automating the capture of model metadata across the AI/ML lifecycle gives data science leaders and model validators an up-to-date view of their models. This is much appreciated by our clients.
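To make "capturing model metadata across the lifecycle" concrete, here is a minimal sketch of an append-only model registry. The record fields and lifecycle stages are assumptions for illustration, not the watsonx.governance schema.

```python
# Illustrative sketch of lifecycle metadata capture (fields and stages
# are assumptions, not the watsonx.governance schema): each lifecycle
# event appends a record, so validators can reconstruct a model's
# full history at any time.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRegistry:
    events: list = field(default_factory=list)

    def record(self, model_id, stage, **details):
        """Append one lifecycle event with a UTC timestamp."""
        self.events.append({
            "model_id": model_id,
            "stage": stage,  # e.g. "trained", "validated", "deployed"
            "timestamp": datetime.now(timezone.utc).isoformat(),
            **details,
        })

    def history(self, model_id):
        """All recorded events for one model, in order of capture."""
        return [e for e in self.events if e["model_id"] == model_id]

registry = ModelRegistry()
registry.record("credit-risk-v2", "trained", dataset="loans_2024Q1", auc=0.87)
registry.record("credit-risk-v2", "validated", validator="risk-team")
registry.record("credit-risk-v2", "deployed", environment="production")

for event in registry.history("credit-risk-v2"):
    print(event["stage"], event["timestamp"])
```

The point of automating this, rather than maintaining it by hand, is that the audit trail stays complete and current without relying on anyone remembering to update a spreadsheet.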
In risk management, consistency is key: your risk management department needs the appropriate capabilities to manage risk and compliance with business standards. Since IT and risk management must work together, and each must be addressed in its own language, watsonx.governance helps establish that common ground.
Proactively addressing compliance with current and future regulations is the challenge here. Watsonx.governance helps translate these external AI regulations into a set of policies for the various stakeholders, which can then be enforced automatically.
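A toy sketch of what "translating regulation into automatically enforceable policies" can look like: each policy is a named rule over a model's metadata, and deployment is blocked when any rule fails. The rules and attributes here are invented for illustration and do not represent the EU AI Act's actual requirements or any watsonx.governance feature.

```python
# Toy sketch of automated policy enforcement (rules and attributes
# invented for illustration): each policy is a named predicate over
# a model's metadata; any failing policy is reported as a violation.

POLICIES = {
    "high-risk models require human oversight":
        lambda m: not m["high_risk"] or m.get("human_oversight", False),
    "training data source must be documented":
        lambda m: bool(m.get("data_source")),
}

def check(model):
    """Return the names of all policies the model violates."""
    return [name for name, rule in POLICIES.items() if not rule(model)]

model = {"high_risk": True, "human_oversight": False, "data_source": ""}
violations = check(model)
print(violations)  # both policies fail for this model
```

In practice the rule set would be maintained by compliance stakeholders and evaluated automatically at each lifecycle gate, rather than hand-checked before an audit.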
Would you like to see a short demo?
Click here to see three short videos (max 2 minutes) to give you an impression.
Curious about your AI Maturity?
Please click here to assess your maturity.