Imagine this: you’re writing an article, a tweet, or even a Substack post, with AI help. You’re happy with the result. In fact, it all sounds a little bit better, smarter, and funnier than your usual ramblings. You wonder if you should notify your audience you didn’t really come up with this stuff yourself.
How do you proceed?
✅ Should you take all credit and just hit “send” like normal? It’s not like you’re disclosing using your dictionary, thesaurus, grammar check, proofreader, ergonomic keyboard, and emotional support chinchilla. What makes ChatGPT so different? “But it thinks for me!” Think again! AI doesn’t really think. Besides, chinchillas (and even proofreaders) can be surprisingly smart creatures.
❎ Or is there some moral reason you really should explain yourself? Are you actually cheating[1] if you secretly use AI? Possibly! It’s all still so new! There have been some scandals like that, right? What if you’re exposed? AI still has its enemies, and you don’t want to get accused of anything. But then again, if you label your work, aren’t you essentially admitting guilt?
This is hard… what to do? I know:
Let’s ask an AI chatbot!
DeepSeek R1 is known for its advanced ‘reasoning’ capabilities. Here’s how it tackled the disclosure issue:
Okay, so the user is asking whether they should disclose their use of AI when creating a social media post with the help of GenAI. Hmm, this is a tricky ethical question. Let me start by breaking down the different angles.
First, I need to consider why someone might want to disclose. Maybe transparency is important for trust. If the audience knows AI was involved, they might appreciate the honesty. But then again, maybe people don't always care as long as the content is good. Wait, but there's also the aspect of authenticity. If the post is presented as purely human-made, but it's actually AI-assisted, does that mislead the audience?
I should think about metaphors to illustrate this. Like, if someone uses a calculator for math, they don't usually mention it. But if they use a ghostwriter for a book, they should credit them. So where does AI fall? Maybe somewhere in between. It's a tool, but it's also generating content, not just calculating numbers.
Another angle is the legal or platform requirements. Some places[2] might require disclosure of AI use, but I'm not sure if social media platforms have such rules yet[3]. The user might need to check guidelines, but the question is more ethical than legal.
Then there's the creator's intent. If the goal is to showcase personal creativity, using AI without disclosure might feel inauthentic. But if the AI is just helping to polish ideas, maybe it's like using a grammar checker. Hmm, but grammar checkers don't generate content, they just correct it. So the line is blurry.
Metaphors... Maybe AI is like a sous-chef in a kitchen. The head chef still oversees and approves the dish, but the sous-chef does a lot of the prep work. Should the menu mention the sous-chef? Probably not, but if the sous-chef is a famous one, maybe. So if the AI's role is significant, disclosure makes sense. If it's minor, maybe not.[4]
Also, considering the audience's expectations. If followers value original human content, disclosure is key. If they care more about the message than the creator's process, maybe not. But how does one know the audience's preferences?[5]
Another thought: In academia, using someone else's work without credit is plagiarism. But AI isn't a person, so it's different. However, some argue that AI-generated content should still be acknowledged to maintain integrity.
Wait, the user mentioned "nuanced angle," so I need to balance different perspectives. There's no one-size-fits-all answer. It depends on context, how much AI was used, the platform's norms, and the audience's expectations.
So possible metaphors: AI as a tool vs. AI as a collaborator. If it's a tool like a camera for a photographer, no need to disclose. If it's a collaborator, like a co-writer, then credit is due. The nuance is in how integral the AI's role was in creating the content.
Also, thinking about future implications. As AI becomes more common, disclosure might become standard, but right now it's still emerging. So maybe recommending disclosure to set a good example, even if it's not required.
But then again, over-disclosure could be unnecessary. Like, if you use spell check, you don't mention it. So where's the line between tool and content generator?
Another angle: creativity and originality. If the AI is generating ideas or phrases that the user wouldn't have thought of, disclosure might be important. But if the user is just using AI to structure their own ideas, maybe not.
Hmm, this is getting complex.
Hmm, that’s a pretty complete answer.
What about Substack?
One thing to add: you may wonder how Substack treats AI-generated content. I wrote a little article on it late last year, and the short answer is: Substack doesn’t mind if you use AI.
Also, Substack has no AI disclosure option that I’m aware of. Hashtags are not supported either.[6] So you’ll have to use your own words, I guess.
Anorexia and Photoshop Laws
One metaphor DeepSeek didn’t mention: while GenAI usage is still mostly unregulated, image manipulation has been around for much longer, leading to some interesting legislation.
The world of photo editing has become so sophisticated that some governments are stepping in to keep vulnerable individuals from chasing unattainable beauty standards. France, for instance, introduced a "Photoshop Law" in 2017 that requires a label on any commercial photo that has been digitally altered to make a model look thinner or otherwise more "perfect."
Israel and Norway have followed suit, with their own versions of labeling laws. In Israel, models even need to provide a medical certificate to prove they're healthy enough to be on the catwalk. The idea is to reduce the pressure on young people to conform to impossible beauty ideals, which is commendable, but it also makes you wonder how much regulation is too much.
Now what?
If everyone uses GenAI to look smarter and more talented, we may find ourselves in a very similar situation! Just as we want to know if a photo has been photoshopped, we might also want to know if a social media post was crafted with the help of AI. It's all about setting realistic expectations and maintaining trust in what we see online. And who knows, maybe one day we'll have labels for AI-generated posts too, something like "Content generated with the help of AI: I’m not actually this talented."
Would you like to see AI labels?

[1] Cheating, as in: the act of using unauthorized resources or methods to gain an unfair advantage (e.g. during an examination or assessment). We’ll save the AI boyfriend/girlfriend issue for another time.
[2] One such place being the EU:
The European Union's Artificial Intelligence Act (Article 50 in the final text; Article 52 in earlier drafts) mandates transparency obligations for AI systems interacting with people or generating content that could mislead audiences. This includes requiring disclosures for AI-generated or manipulated content, such as text, images, or videos resembling real entities (e.g. so-called deepfakes), particularly when there is a risk that the audience might mistake the content for being entirely human-made or authentic. Also, if AI-generated content is used to inform the public on matters of public interest, disclosure is necessary unless the content has been reviewed and approved by human editors. However, the Act does not require disclosure for assistive uses of AI (e.g. minor edits or enhancements) where the AI's role does not substantially alter the meaning or authenticity of the input data. These rules are designed to promote accountability and trust while balancing the practical realities of AI's growing role in content creation.
[3] My AI search tool of choice says these rules do exist now:
“Social media platforms have begun implementing specific rules regarding the disclosure of AI-generated content, reflecting growing regulatory and ethical concerns. Major platforms like Facebook, Instagram, Threads, TikTok, and YouTube now require users to disclose when AI tools are used to create or enhance content. These requirements often include clear labeling, such as hashtags (#AIgenerated) or explicit mentions in descriptions, to ensure transparency and prevent misleading audiences. For example, YouTube mandates indicators for AI-generated segments in videos, while TikTok emphasizes disclosure through hashtags and visual markers. Instagram has integrated enhanced partnership features and tagging options for AI-assisted content.”
[4] The sous-chef metaphor makes little sense to me, but I’ll let DeepSeek finish…
[5] I have an idea:
(For me personally, it’s two yeses.)
[6] Well, not officially, for now. In case Substack decides to rip off one more social media feature, let me add some hashtags: #AIdisclosure #IUsedDeepSeekR1ForThisPost #IUsedPerplexityForThisPost #IEvenPastedRawDeepSeekOutputIntoSubstackButItsFineBecauseIUsedDisclosureHashtags