For Good AI Governance,
We Need to Be Pragmatic

Reflections from the London AI Summit 2024
Yiannis Kanellopoulos
CEO and Founder | code4thought
The London AI Summit 2024 was an invaluable opportunity to engage with AI thought leaders, industry experts, and practitioners, and to discuss the ever-evolving landscape of this field. If we had to sum up in one line what we learned or sensed there, it is the need for pragmatic AI governance.
Combining what was discussed during the Summit with what we see in practice in our line of work, being pragmatic entails the following:
  • Balancing innovation with ethical considerations and risk management,
  • Ensuring that people within your organization are becoming AI/digital literate and
  • Most importantly, aligning technological choices with business needs.
Defining the key elements of pragmatic AI governance was a dominant topic in the discussions during the Summit, with emphasis on simplicity, transparency, and alignment with business needs. Let’s unpack the issue as viewed by the AI community today, enriched by our practical experience in the Trustworthy AI domain.

Change Management in AI Implementation

In many discussions it was agreed that a critical first step in AI implementation is understanding the problem before selecting an AI solution. Too often, organizations become enamored with the technology itself, overlooking the specific issues it should address; in other words, they become a hammer looking for a nail. A problem-centric approach ensures AI solutions are tailored to real business needs, enhancing their effectiveness and relevance. Put simply, organizations should keep in mind that AI is just a tool that, given the right treatment, can be very useful.
Rapid implementation of nimble, cost-effective AI systems can generate value quickly compared to legacy systems. These modern approaches are more adaptable and efficient, allowing organizations to see benefits sooner. This topic might deserve an article of its own.
For now we need to keep in mind that effective change management goes beyond technology; it involves leadership and empathy. Understanding what the change means for individuals involved is crucial. Leaders must balance technological innovation with the concerns and expectations of their teams, fostering a culture that embraces change.

AI Governance and Risk Management

Pragmatic AI governance starts from a solid, safe baseline on which one can then build gradually. We tend to agree that this phased approach ensures foundational security and stability before introducing more complex applications. Implementing structured processes for innovation, along with mandatory training, is essential to ensure compliance and safety.
In several sessions there was a consensus that establishing an internal AI risk council or committee is a foundational step in AI governance. This type of organizational structure oversees the governance framework, manages risks, and protects against competition. An AI register, another critical component, sets clear criteria for AI projects, ensuring transparency and accountability. By maintaining a comprehensive record of AI initiatives, organizations can monitor progress, manage risks, and ensure regulatory compliance.
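To make the idea of an AI register concrete, here is a minimal sketch of what such a record could look like. The field names, risk levels, and statuses below are our own illustrative assumptions, not a prescribed standard or any specific organization's schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema for one AI register entry; all field names
# and categories are illustrative assumptions, not a standard.
@dataclass
class AIRegisterEntry:
    name: str                 # project or system name
    owner: str                # accountable business owner
    business_purpose: str     # the problem the system addresses
    risk_level: str           # e.g. "minimal", "limited", "high"
    status: str = "proposed"  # proposed / approved / deployed / retired
    last_review: date = field(default_factory=date.today)

class AIRegister:
    """Maintains a comprehensive record of AI initiatives for oversight."""

    def __init__(self) -> None:
        self.entries: list[AIRegisterEntry] = []

    def add(self, entry: AIRegisterEntry) -> None:
        # A real register would enforce approval criteria here.
        self.entries.append(entry)

    def high_risk(self) -> list[AIRegisterEntry]:
        """Entries that warrant closer monitoring by the risk council."""
        return [e for e in self.entries if e.risk_level == "high"]

register = AIRegister()
register.add(AIRegisterEntry("churn-model", "Marketing",
                             "predict customer churn", "limited"))
register.add(AIRegisterEntry("cv-screening", "HR",
                             "screen job applicants", "high"))
```

Even a lightweight structure like this gives the risk council one place to look when asking which systems exist, who owns them, and which ones need review first.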

Challenges in AI Transformation

Transforming businesses with AI presents numerous challenges and often results in high failure rates. We heard during the Summit that an estimated 80% of AI projects currently fail, and in the case of GenAI this figure goes as high as 90%. Many AI transformations fail due to a lack of alignment between business value and technology capabilities, governance driven by technical aspects rather than business needs, and waning commitment over time.
It is also essential to realize that an effective AI transformation cannot succeed without data readiness. As such, data governance is crucial for successful AI transformations. Organizations must ensure data accuracy, accessibility, and security. This involves monitoring for exceptions and enhancing data accessibility through compliance agents. Establishing clear dimensions for data quality and implementing processes for data reconciliation and monitoring are key steps in maintaining robust data governance.
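As a rough sketch of what "clear dimensions for data quality" can mean in practice, the snippet below scores a small dataset along three commonly used dimensions (completeness, uniqueness, validity). The dimension names and the validity rule are illustrative assumptions on our part:

```python
# Minimal data-quality scoring along three illustrative dimensions.

def completeness(rows, column):
    """Share of rows where the column is present and non-empty."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)

def uniqueness(rows, column):
    """Share of distinct values among the filled values of the column."""
    values = [r[column] for r in rows if r.get(column) not in (None, "")]
    return len(set(values)) / len(values) if values else 0.0

def validity(rows, column, predicate):
    """Share of rows whose column value satisfies a validity rule."""
    return sum(1 for r in rows if predicate(r.get(column))) / len(rows)

# Toy customer records with a missing e-mail in one row.
customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
    {"id": 3, "email": "b@example.com"},
]

report = {
    "email_completeness": completeness(customers, "email"),
    "id_uniqueness": uniqueness(customers, "id"),
    "email_validity": validity(customers, "email",
                               lambda v: bool(v) and "@" in v),
}
```

Running such checks continuously, with agreed thresholds per dimension, is one simple way to turn "data readiness" from a slogan into something that can be monitored and reconciled over time.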

AI in Operations and Compliance

Operationalizing AI requires proactive monitoring and testing so that AI regulation can be met at scale. Comprehensive documentation and straightforward frameworks are essential for teams to manage AI implementations.
Governance should also prioritize business value rather than the underlying technology, ensuring AI solutions are tailored to specific operational needs. For instance, applying AI to practical business needs, such as marketing, customer service, and contact centers, demonstrates the importance of operational simplicity: AI can summarize notes for efficient query resolution, enhancing service quality and improving customer satisfaction.

Regulatory Considerations

The soon-to-be-operationalised AI Act introduces transparency obligations and conformity assessments to ensure safe and responsible AI deployments. However, concerns about misalignment and uncertainty around overlapping regulations are prevalent. Organizations must develop clear structures to ensure compliance with non-European and/or sectoral regulations, focusing on specific use cases and a phased implementation approach over two to five years. This is not an easy task, but a good starting point for compliance is to focus on practical, real-world applications in order to address regulatory complexities effectively.

Responsible AI and Ethical Considerations

Responsible AI deployment is crucial for building trust and ensuring long-term success. Organizations must conduct regular audits to assess the risks associated with their AI systems and avoid over-governing, which could stifle innovation. Training and awareness programs help stakeholders understand the importance and urgency of responsible AI.
Positioning responsible AI initiatives close to business functions decreases risks and enhances success rates. This involves defining clear governance frameworks and ensuring they are practically implemented and regularly reviewed. Ethical considerations must be integrated into the organizational context, fostering a culture of responsibility and awareness.

Technical challenges in Responsible AI

Technical challenges in AI testing and quality assurance are common, with many organizations relying on manual processes and outdated tools. Our belief here (and this is exactly what we’re working on at code4thought) is that specialized testing and auditing tools are essential for overcoming such hurdles. By focusing on practical, effective governance, organizations can ensure the quality and reliability of their AI systems.


The main conclusions from the AI Summit in London are:
  • Pragmatic AI governance is essential for the successful and responsible deployment of AI technologies. By focusing on simplicity, transparency, and alignment with business needs, organizations can navigate the complexities of AI implementation effectively. This approach ensures AI initiatives are both innovative and responsible, fostering a future where AI is used ethically and effectively.
  • A pragmatic approach involves starting with a clear understanding of the problem, building a solid foundation, and maintaining straightforward governance structures, also incorporating automated tooling for testing and auditing AI systems/models. Balancing innovation with risk management, ensuring effective data governance, and focusing on practical applications are key to successful AI deployment.
  • Ultimately, being pragmatic about AI governance allows organizations to harness the transformative potential of AI while maintaining trust and accountability. This balanced approach ensures compliance and safety, fostering innovation and trust in AI technologies, and paving the way for a future where AI can be fully and responsibly integrated into society.