From my collaborations with government agencies, complaints boards, and public IT projects, my experience is that the public sector is already quite advanced when it comes to using AI in control functions, case processing, and handling certain administrative tasks.
For example, AI can be used to identify similar cases in appeals systems, thereby streamlining case handling. In other contexts, AI is used to predict which cases may become complex and require special attention. In call centers, we are already seeing the first steps toward AI-assisted citizen dialogue and note-taking. The next step could be more proactive customer service based on citizens’ data and behavior.
Generative AI – from tool to game changer
Generative AI is a branch of artificial intelligence that doesn’t just analyze existing data but creates new content, such as text, images, code, video, or audio.
The technology is powered by advanced language and image models that recognize patterns and generate outputs resembling what a human might produce. This makes generative AI especially useful in contexts involving large volumes of text, documents, and decision-making materials, which the public sector handles extensively.
Generative AI is transforming the very foundations of how we use technology at work. We’ve moved from prediction to the creation of content at a level of quality and speed that was unthinkable just a few years ago.
In the public sector one factor makes generative AI particularly relevant: the sheer volume of text. Many public institutions are essentially case-handling factories. And casework often involves repetition, judgment, and text comprehension. This is where language models fit perfectly.
What’s holding some back when we know this is the way forward?
In the public sector, there is naturally a strong focus on data security and the responsible use of technology. Many AI solutions today are built on large language models and cloud platforms developed by global tech companies. This gives access to powerful tools, rapid innovation, and strong system integration. But it also raises valid concerns about data handling, compliance, and governance, especially when working with sensitive citizen data.
That’s why it is crucial for public institutions to establish clear frameworks for how the technology is used — regardless of the chosen platform. Here, responsibility, transparency, and traceability play a key role in building trust both internally and with the public.
What does it take to start using generative AI?
To realize the potential of generative AI, it takes more than access to technology. It also requires the right organizational conditions and a meaningful collaboration between humans and machines. Broadly speaking, I see three key factors:
- Trust and transparency. If AI is going to replace manual processes, employees must understand what the technology is doing and why.
- Realistic projects. Start small, with clearly defined tasks where AI can make a real difference, and scale from there.
- Experience and experimentation. We need to be willing to learn by doing. At twoday, we stay close to the technology and actively share experiences across projects and clients.
AI isn’t about the future — it’s about taking responsibility today
AI is no longer something we’re waiting for. It’s already shaping the structures, decisions, and workflows that define our society. The question is not whether the public sector should use this technology, but how to do so wisely, responsibly, and meaningfully.
When we talk about generative AI, we’re talking about a new way of working, understanding, and acting, one where machines don’t make decisions for us but help us make better ones.
The institutions that take the lead now are not simply digitizing existing processes. They are helping define what quality, accessibility, and trust will mean in the public services of the future.