Tags: AI, policy, government IT
Artificial intelligence (AI) is often presented as the innovation that will make organisations more productive and efficient. Few still question this claim, yet the gap between promise and reality often remains wide.
Of course, safety and ethics must be safeguarded, and developing and implementing appropriate new policies requires a careful approach. At the same time, critical voices suggest that progress is being stifled by bureaucracy.
Why is AI so challenging in the public sector?
1. The pursuit of generic policy is paralysing
The instinct to create a comprehensive policy framework for AI is understandable – but it can also be paralysing. Attempts to set universal rules for all AI applications often lead to lengthy processes, vagueness, indecisiveness, and even delays in implementation. In contrast, effective use of AI requires speed and flexibility.
2. Risk aversion stifles innovation
Executive agencies in government are often highly risk-averse. The fear of making mistakes – such as treating citizens unfairly or attracting public or political backlash – leads to excessive control mechanisms. Meanwhile, commercial organisations show that small steps and controlled experiments can drive meaningful innovation. The public sector must adopt a more solution-focused mindset toward AI and learn to tolerate and manage a certain level of risk.
3. Differing perceptions block policy
What seems like a harmless application to one person may feel ethically problematic to another. Policymakers want to avoid (unintentional) subjectivity, but often get stuck on differing views of what is ‘safe’ or ‘responsible’. The range of personal and cultural perspectives makes consensus difficult, hindering even the implementation of relatively straightforward AI applications.
4. Bias remains an invisible enemy
AI reflects the data it’s trained on. Biases in datasets – such as discrimination in hiring processes – are easily reproduced by AI models. The issue isn’t just the technology, but also the organisation itself: many lack the capacity or time to critically assess their own processes, data, and algorithms.
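To make this concrete, here is a minimal sketch of the kind of check an organisation could run on its own data: comparing selection rates across groups in a hypothetical hiring dataset and flagging when the ratio falls below the common ‘four-fifths’ rule of thumb. The column names, data, and threshold are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

# Hypothetical hiring data: one row per applicant, with a protected
# attribute ("group") and the recorded outcome ("hired").
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group: the share of applicants hired.
rates = df.groupby("group")["hired"].mean()

# Disparate impact ratio: lowest selection rate divided by the highest.
# The 0.8 ("four-fifths") threshold is a common rule of thumb,
# not a legal or organisational standard - an assumption here.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible bias - investigate the underlying process.")
```

A low ratio doesn’t prove discrimination, but it is exactly the kind of signal that should trigger the critical assessment of processes and data described above.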
5. Political fear blocks leadership
The shadow of past scandals – like the childcare benefits affair – looms over every AI initiative in government. Leaders seek certainty, often lack a complete picture of the risks, and so avoid AI altogether. This can lead to indecision and a lack of progress. But inaction is still a decision: the choice to do nothing.
The way forward: from policy to practice
1. Start small, learn fast
Rather than aiming for all-encompassing policy, break AI projects down into manageable, low-risk applications. Begin with ‘harmless’ tools like automatic document recognition with proper checks, or with improvements to secondary processes. Expand as knowledge, experience, and confidence grow.
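As an illustration of what ‘proper checks’ could look like, here is a minimal human-in-the-loop sketch for document recognition: a prediction is only accepted automatically above a confidence threshold, and everything else is routed to a human reviewer. The threshold, labels, and routing function are hypothetical placeholders, not a finished design.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative value - tune per use case


@dataclass
class Prediction:
    label: str        # e.g. "invoice", "permit application"
    confidence: float  # model's confidence in the label, 0.0 to 1.0


def route_document(doc_id: str, prediction: Prediction) -> str:
    """Accept high-confidence predictions; send the rest to a human."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"{doc_id}: auto-filed as '{prediction.label}'"
    return f"{doc_id}: sent to human review (confidence {prediction.confidence:.2f})"


# Example usage with hypothetical model output:
print(route_document("doc-001", Prediction("invoice", 0.97)))
print(route_document("doc-002", Prediction("permit application", 0.61)))
```

The point is that the AI never decides alone on uncertain cases – a small, low-risk starting point that still delivers value.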
2. Encourage controlled experiments
Allow space for pilots within safe boundaries. Create sandboxes where government services can test, learn, and adapt without immediate consequences. Working with AI is a craft, almost an elite sport – something you master through repetition and refinement. Never lose sight of the goal: what’s tested in sandboxes must be relevant and valuable to the organisation. Keep all stakeholders involved so everyone can learn. Mistakes aren’t failures – they’re essential learning moments.
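One common pattern for such a sandbox is ‘shadow mode’: the AI runs alongside the existing process, its suggestions are logged and compared with human decisions, but nothing is acted on automatically. A minimal sketch, with the model stub and case fields as hypothetical placeholders:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_pilot")


def existing_process(case: dict) -> str:
    """The current, human-driven decision - unchanged by the pilot."""
    return case["manual_decision"]


def shadow_model(case: dict) -> str:
    """Hypothetical AI model under evaluation; stubbed for this sketch."""
    return "approve" if case.get("score", 0) > 0.5 else "review"


def handle_case(case: dict) -> str:
    decision = existing_process(case)   # the real outcome
    suggestion = shadow_model(case)     # logged, never applied
    log.info("case=%s human=%s model=%s agree=%s",
             case["id"], decision, suggestion, decision == suggestion)
    return decision  # the pilot has no effect on citizens


handle_case({"id": "C-1", "manual_decision": "approve", "score": 0.8})
handle_case({"id": "C-2", "manual_decision": "review", "score": 0.9})
```

The agreement rate between humans and the model becomes the learning material – exactly the ‘flight hours’ argued for later in this piece, without any risk to citizens.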
3. Make policy modular and digestible
Segment policy and implementation by application type and risk class to increase clarity and actionability. Some AI tools require little complexity, while others demand more time and care. This approach also helps manage differences in perception – policy becomes more transparent, executable, and controllable.
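To illustrate what ‘modular’ could look like in practice, here is a minimal sketch of a risk-class register: each application type maps to a risk tier with matching controls, loosely inspired by the tiered approach of the EU AI Act. The tiers, application names, and controls shown are illustrative assumptions, not a ready-made policy.

```python
from enum import Enum


class RiskClass(Enum):
    MINIMAL = "minimal"  # e.g. spell-checking, document search
    LIMITED = "limited"  # e.g. chatbots with disclosure duties
    HIGH = "high"        # e.g. decisions affecting citizens' rights


# Illustrative mapping from application type to risk class and controls.
POLICY_REGISTER = {
    "document_search":    (RiskClass.MINIMAL, ["logging"]),
    "citizen_chatbot":    (RiskClass.LIMITED, ["logging", "AI disclosure"]),
    "benefit_assessment": (RiskClass.HIGH,
                           ["logging", "human review", "bias audit", "DPIA"]),
}


def required_controls(application: str) -> list[str]:
    """Look up the risk class and controls for an application type."""
    risk, controls = POLICY_REGISTER[application]
    print(f"{application}: {risk.value} risk -> {controls}")
    return controls


required_controls("citizen_chatbot")
required_controls("benefit_assessment")
```

A low-risk tool then only has to clear low-risk controls, while high-impact applications get the time and care they demand – which is precisely what makes the policy executable.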
4. Reflect on data and models
Government data scientists shouldn’t just analyse, develop, and innovate – they must also challenge and reflect. Where does the bias in an AI model come from? Could existing processes be the root cause, and how can they be improved? This requires collaboration between experts, policymakers, and ethicists.
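A concrete form of that reflection is checking whether a model’s error rate differs across groups – a gap often points back to the underlying process or data rather than the algorithm alone. A minimal sketch on hypothetical evaluation data, with illustrative column names:

```python
import pandas as pd

# Hypothetical evaluation set: actual outcomes vs model predictions,
# per group. All values here are made up for illustration.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1,   0,   1,   1,   0,   0],
    "predicted": [1,   0,   0,   1,   1,   0],
})

# Error rate per group: share of cases where the model was wrong.
df["error"] = (df["actual"] != df["predicted"]).astype(int)
error_rates = df.groupby("group")["error"].mean()
print(error_rates)

# A large gap between groups is a prompt to ask *why*:
# is the model at fault, or the process that produced the data?
gap = error_rates.max() - error_rates.min()
print(f"Error-rate gap between groups: {gap:.2f}")
```

The numbers themselves don’t answer the ‘why’ – that is where the collaboration between experts, policymakers, and ethicists comes in.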
5. Invest in governance and flight hours
Just like a pilot needs experience to fly safely, government needs ‘flight hours’ with AI. Set up centres of excellence – as some administrations already do – and ensure that knowledge is developed internally. Governance is key to creating structure and building the trust needed to effectively deploy AI.
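As one small building block of such governance, here is a minimal sketch of a model register entry: a single record per deployed AI application, so ownership, purpose, risk class, and review dates stay traceable. The fields are illustrative assumptions, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelRegistryEntry:
    """One governed AI application; all fields are illustrative."""
    name: str
    owner: str                  # accountable team or official
    purpose: str
    risk_class: str             # ties back to the modular policy above
    last_bias_review: date
    controls: list[str] = field(default_factory=list)


entry = ModelRegistryEntry(
    name="document-classifier-v2",
    owner="Digital Services Team",
    purpose="Route incoming mail to the right department",
    risk_class="limited",
    last_bias_review=date(2025, 1, 15),
    controls=["human review below threshold", "quarterly bias audit"],
)
print(entry)
```

Even a register this simple forces the questions that build trust: who owns this model, why does it exist, and when was it last checked?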
Finally: the moment is now
AI is no longer a future concept – it’s already here. The question is not if, but how the public sector will engage with it. The biggest risks don’t lie in the technology itself, but in stagnation, fear, and indecision. Start small, think big, and develop smart policies that allow for growth while providing protection where needed. Ask yourself: where do we want to be in the future? Then move toward it step by step. Involve process experts, ethicists, policymakers, data specialists, and technicians. Measure and report progress, and follow up on it.
Reach out
At Valcon, we’re committed to helping public sector organisations navigate this complex and evolving landscape. If you’re facing any of these challenges – or simply want to explore how to turn them into opportunities – please get in touch with us at [email protected]. We would love to exchange ideas.