
Responsible AI starts with data – and with you

By Carl Strøm & Morten Bøtkjær Nilsen

Artificial intelligence has moved from hype to habit. Today, more than 70 percent of Danes use tools like ChatGPT, Copilot or other generative AI platforms – and nearly all expect to keep doing so.

AI is now woven into both our work and our daily lives. But as organizations feed these systems with ever more data, the risk of misuse grows just as fast. The question is no longer if your employees use AI – but how they use it. And whether they do it responsibly, with approved and secure tools.

 

When good intentions lead to data leaks

Across industries, we see the same pattern: AI use inside companies far exceeds what management knows about. Even smaller organizations – with as few as 50 to 100 employees – typically use between 20 and 100 different AI applications. Few know which tools are approved or how data flows across them. That lack of visibility creates real risk. A single, well-meant prompt containing personal or confidential information can push sensitive data outside your control.

We’ve seen it happen. In our work with Microsoft Purview, employees have entered customer data into unsecured versions of Copilot — sometimes to analyze information, other times just to translate a text. In both cases, the data left the company’s safe zone.

  

Three steps every company should take 

Using AI safely isn’t about slowing innovation. It’s about creating a responsible foundation that lets innovation happen securely. Here are three steps that every organization should act on now:

1. Educate and raise awareness

Responsible AI starts with awareness. Many employees believe that if they use company devices, their data is automatically protected. It’s not.

Just as you train staff in IT security and GDPR, AI education should be part of that foundation.
An AI awareness program should explain what data can be shared, how AI systems process input, and why small mistakes can have large consequences. Follow up regularly. Use real cases and internal discussions to turn compliance into habit – and shared accountability.

2. Know your data

You can’t protect what you don’t understand. Define what counts as sensitive data, and map where it lives.

It’s not just national ID numbers or personal information. It’s also business-critical assets — contracts, incentive plans, strategy documents. Data classification and consistent labeling make it easier to monitor and protect information. Tools such as Microsoft Purview help apply these classifications across Outlook, Teams, and Copilot — keeping data secure within the Microsoft 365 environment.
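To make the idea of classification concrete, here is a deliberately simplified sketch. In practice, sensitivity labels in Microsoft Purview are configured in the portal rather than in code, and the label names and keyword rules below are illustrative assumptions, not Purview's actual taxonomy:

```python
# Illustrative only: a toy classifier mapping content markers to
# sensitivity labels. Real labeling in Microsoft Purview is configured
# through policies in the portal; the rules and labels here are assumptions.
RULES = [
    ("strategy", "Confidential"),
    ("contract", "Confidential"),
    ("cpr", "Highly Confidential"),       # Danish national ID references
    ("salary", "Highly Confidential"),    # incentive plans, compensation
]

LABEL_ORDER = ["General", "Confidential", "Highly Confidential"]

def classify(text: str) -> str:
    """Return the most restrictive label whose keyword appears in the text."""
    label = "General"
    for keyword, rule_label in RULES:
        if keyword in text.lower() and LABEL_ORDER.index(rule_label) > LABEL_ORDER.index(label):
            label = rule_label
    return label
```

Even a toy rule set like this illustrates the principle: once content carries a consistent label, downstream tools can decide how to handle it without re-inspecting the data each time.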

3. Implement technical safeguards

Even the best training can’t prevent every mistake. That’s why technical safeguards are essential. Policies can automatically warn or block users who attempt to share sensitive information in AI tools.

With Data Loss Prevention (DLP) in Microsoft Purview, organizations can detect misuse of confidential data and either stop it or record it for review. This isn’t about restriction — it’s about support. These mechanisms make it easier for everyone to do the right thing and keep data where it belongs.
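The warn-or-block logic of a DLP policy can be sketched in a few lines. This is not how Purview is implemented (its DLP engine is driven by configured policies, not custom code); the patterns below are assumed examples of sensitive-data types a policy might match before a prompt leaves the organization:

```python
import re

# Hypothetical patterns a DLP policy might match. Microsoft Purview's
# actual DLP engine is policy-driven; this sketch only shows the idea
# of scanning a prompt before it reaches an AI tool.
SENSITIVE_PATTERNS = {
    "Danish CPR number": re.compile(r"\b\d{6}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guard(prompt: str) -> str:
    """Warn-or-block decision: 'block' if anything sensitive is found."""
    return "block" if check_prompt(prompt) else "allow"
```

A harmless prompt passes through; one containing a CPR-style number is blocked before it leaves the safe zone. That is the "support, not restriction" point: the check runs automatically, so employees don't have to catch every mistake themselves.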

 

AI loves your data – but do you? 

AI is no longer a futuristic concept. It’s part of everyday business.
That makes responsibility non-negotiable. The question isn’t whether to use AI, but how to keep its relationship with your data safe, ethical, and compliant.

If your employees start using generative AI tomorrow, how will you ensure that your organization’s data stays within EU boundaries — and isn’t used to train external models?

At Twoday, we help organizations build confidence in the new AI landscape by:

  • Mapping data and AI use across the business

  • Implementing clear, automated security policies and technical controls

  • Training employees to use AI safely and effectively


AI thrives on your data. It’s up to you to make sure the relationship stays healthy — built on trust, transparency, and control.

  

 

 
