The Worst-Kept Secret in Professional Work – and Why We Need to Just Admit It

I'm writing this article with the assistance of AI. If I hesitate to say it, despite being someone who works with Responsible AI, imagine how others feel.
There’s also a meta part to it. I drafted my initial thoughts, fed them to ChatGPT, and got back something so polished that it sounded like corporate PR. Then I used Claude to help me restructure it into something that actually sounds like my voice (it still didn’t quite, so I finally edited it myself). The irony of using AI to write about AI transparency isn't lost on me.
However, this is exactly the process I'm referring to. The ideas, frustrations and examples are mine. AI is helping me organise and articulate them.
Why I'm Writing This Now
Remember the Deloitte scandal? They got caught using AI to generate audit reports without disclosure. Everyone lost their minds, and trust was broken.
But what if Deloitte had just been upfront? What if they'd said, "Yes, we use AI to draft these reports, but we have rigorous verification. Every claim is checked. Every number is validated." Would that have been better? I think the scandal wasn't about AI use but about deception.
And then I found myself in my own version of this dilemma.
I was preparing to release a Responsible AI policy for my organisation, and I had a moment when I wondered whether I should use AI to help me write it faster, or whether that would be hypocritical.
I'm not going to lie: I'm actually afraid I'll be judged for this. What if people think I'm less competent? What if they lose respect for me? I hold a doctoral degree; I wrote a PhD thesis and several peer-reviewed papers before AI assistance was around; I prepared slides and presentations for international conferences; and I'm competent in my field. Of course, I could have written the policy without assistance, but it would have taken me a week to complete.
Therefore, I have decided to use AI responsibly, as disclosed in the policy itself, because the evaluation board needs to be informed. The policy is currently a draft that will be refined following their review.
What I realised was that if I couldn't use AI responsibly to write a policy about responsible AI use, then what kind of policy would I even be creating? One disconnected from how people actually work, one that perpetuates the very culture of secrecy I'm trying to change?
If I want my organisation to embrace responsible AI use, I need to set an example even when I'm scared of judgment.
That's what prompted this article.
The Game Everyone's Playing
Almost everyone uses AI now for tasks such as writing, coding, designing, and brainstorming. What traditionally took days now takes hours. Not minutes, mind you, because you still need to read and verify everything.
But people are afraid to disclose it for fear of judgment. They worry they'll be seen as less competent.
The result, though, is that we all become detectives, scrutinising everything we read, trying to figure out if it's AI-assisted “workslop” instead of focusing on the actual message. We spend cognitive energy on pattern-matching instead of understanding ideas.
The worst part is that even if you wrote something yourself, you might get accused of using AI just because you write clearly or use an academic style. I've seen it happen.
What Should Change
LinkedIn comments that are obviously AI-generated. Come on. For three lines, you can use your brain. If you're using AI to write "Great insights! This really resonates!" then what's the point? That's noise, not engagement.
Then there are those frameworks that appear overnight with zero disclosure. I see someone share a comprehensive framework for free, and I'm left wondering: what is their actual contribution? Where's the line between their thinking and what AI generated?
Just tell us: "I used AI to structure this framework, but the core concepts came from 10 years doing X." That's honest and helpful.
How I Use AI
I use AI extensively because it makes me more productive and lets me explore ideas faster. However, I read everything it produces thoroughly, make plenty of edits, and always personalise the result so that it sounds like me. I check sources for errors and verify claims to catch hallucinations.
The main idea is mine. The end result is a co-production.
Does this make me less competent? Lazy? I don't know. But I do know that if I didn't use AI, I'd write less, share fewer ideas, and be less productive. Is that better?
The Real Risk
All things considered, I am worried we might lose the muscle to produce content ourselves. We might lose patience with the creative process. If we outsource too much, we could wake up one day unable to write, code, or think clearly without AI assistance.
That's a legitimate concern.
On the other hand, perhaps we're developing new skills: knowing how to prompt AI effectively, how to evaluate and critique its output, and when it's wrong versus when it's helpful. Most importantly, we're learning how to maintain our own voice and judgment while using AI as a tool. That includes understanding AI cannibalisation, which occurs when AI trains on AI-generated content. We have a significant responsibility to future generations of users.
Of course, some contexts genuinely require pure human work, like academic exams, legal filings, and certain creative work where human process is the point. But for most professional work, the question isn’t whether to use AI but how to use it responsibly.
What Responsible Disclosure Looks Like
I don’t believe in complicated disclaimers on everything. Just be honest when AI played a significant role.
For example, for substantial work such as articles, frameworks, or reports, include a brief note. For everyday writing, you don't need to flag every grammar fix. For social media, please write your own comments.
The point isn't to flag everything but not to actively hide it when it matters.
This brings me to the bottom line.
“We need to move from secrecy to honest collaboration with AI and from shame to transparency.”
Because the alternative, where everyone uses AI but pretends they don't, creates more problems than it solves. It breeds suspicion, wastes energy on detective work, and makes honest people afraid to be transparent.
Ultimately, it prevents us from having genuine conversations about what responsible AI use actually entails.
My Disclosure
This article began as my raw stream of consciousness, with too many run-on sentences. ChatGPT polished it into corporate PR. Claude helped me restructure it and keep my voice, but I still had the final say on the end result.
I have read every sentence and made sure it accurately reflects my thoughts. I take responsibility for everything written here.
That's what responsible AI use looks like to me: honesty, judgment, and accountability.
What do you think? (And I won't judge you if you use AI to help write your response.)