53 Comments
James C Douglas:

Yes, use of AI should always be disclosed. I will not be explaining my reasoning. It’s an emotional reaction, which I am capable of because I’m not a machine.

P.Q. Rubin:

I love that response! Perhaps we should embrace our emotional side and leave reasoning to the machines 😄

James C Douglas:

Thanks, and yes!! We aren't machines!!

ingzpage.com:

Sure. Without AI, my writing could sometimes be disjointed—off on a tangent or two.

Thank goodness for its help.

I always ask the bot to keep my tone, my voice, my style.

Sometimes I even ask, “Wait… is this still me?”

And she (yes, I chose a female) will gently reply,

"You’re right," and show me my original version—

with only the changes highlighted, as I asked for.

Very helpful article. 😊

-i

P.Q. Rubin:

Thanks!

So how do you see this disclosure issue? Does sticking to your own tone and style make the output completely 'yours'?

ingzpage.com:

Same as if I ask my live, real writing coach! 😊😇

Jake Borchardt:

I read through. Great piece... I love the integration of the chinchilla ✌️✌️.

Also, good points with calculators, spell checkers. ✌️✌️

P.Q. Rubin:

Thank you so much!

I try to include at least one cute animal in each post 😄

Bruce Landay:

I like to know if content is AI generated and appreciate the way you used a separate font to make it obvious. I don’t mind that AI is used but I think writers should be honest enough to share that detail with their audience.

P.Q. Rubin:

Agreed.

Let's not forget though: AI usage is not binary. Some undoubtedly use it to generate full articles, others only do grammar checks or idea generation with it. That's why I don't see much value in an "I use AI" label. It doesn't really tell me what I want to know.

Bruce Landay:

Spell and grammar check are a given and not meaningful. Just full article generation is what I’d like to see.

no-one:

Really appreciated this take. With my project Outsourced to AI, I am still shaping ideas, setting tone, editing structure, making creative decisions. Whether the first draft comes from what's left of my brain, pen, or AI, the final piece reflects my voice. Writing has always evolved with tools; this one just happens to talk back.

P.Q. Rubin:

Thank you! With a title like that, disclosure becomes a non-issue 😄

A.I. Freeman:

I openly acknowledge using AI because it feels like a co-author in my process. I also say please and thank you to the AI, even though it has no feelings. I think AI is today what computers were in the mid-1980s. Eventually, it will be so commonplace that a disclosure will seem a little crazy. (By the way, I typed this on my computer...)

P.Q. Rubin:

That's a great point. I also remember a later time when the internet/web search became commonplace; suddenly you were supposed to know everything! ("Why don't you just google it?")

Expectations change all the time. I wonder how we'll look at AI in a few decades. We may even decide it should be mentioned as a co-author...

Ross Young (P3nT4gR4m):

Strikes me we're approaching another round of technology vs. the Luddites. When I were a nipper I saw a bumper sticker: the synthesiser is killing music.

Was it tho? Or was it just democratizing access to a creative field by lowering the barrier to entry? Resistance was understandable. The gatekeepers had worked hard, dedicated a large portion of their lives to gaining the ability to express their form of music, and now their labour was reduced to a button press.

This time it's not just the relatively small niche of plucking strings or tinkling ivories. Now it's cognition itself. Genius used to be a rare, cultivated faculty. An elevation above the average Joe. A reason to feel special if you were one of the chosen.

Now (or at least real soon) it's reduced to the press of a button. AI isn't simply a tool tho, it's a cognitive upgrade. A new kind of collaboration or symbiosis like how a car makes you faster or a plane makes you able to fly.

There will emerge a new hybrid being, a driver or a pilot, who can think faster and higher than a pedestrian. The Luddites will be outraged. Not musicians this time but intellectuals. A new bumper sticker: AI is killing thinking.

Is it tho? 🤔

P.Q. Rubin:

I like to think AI is not actually "killing thinking", even if I get how others (educators, to name one group) can see it that way.

It's definitely able to give relevant information, but that doesn't make it an intellectual by itself. There's still a need for someone to decide when and how to use the AI. For now, anyway.

Zeina Zayour:

Love this essay! I used to disclose it, then took my disclosures out, but kept all my em dashes, and left my insinuations about how much I use my AI in the essays themselves. I have also written about the process of writing with AI, so I feel I have been transparent with my readers. I've been experimenting with AI to see how I can use it to enhance and optimize my writing, and I want to continue doing so. It is a new era and this is a good conversation to have. Thanks for thinking and writing about it, and for the poll. Excellent job!!

P.Q. Rubin:

Thank you so much!

Maybe em dashes are a form of disclosure too 😄 But I think you're right in being open about how you use AI. It's all about matching your readers' expectations - and not being (too) dishonest with them. And that goes for more than just AI.

Zeina Zayour:

Most of my essays are very personal and full of storytelling, so AI is used mostly for editing and language enhancement, since English is my second language. But I will put disclosures back where I have used it for heavier content 😉 I am not going to be dishonest to any degree. I am all in for being open about AI usage. I also think having general guidance will help everyone. I didn't want to alienate readers who are not AI-friendly, because I write about various topics.

P.Q. Rubin:

Language enhancement is a great feature. I was grateful for AI's help the other day, when I made up a fake product name in a story and the chatbot politely informed me that in American English, that word is in fact a racial slur! It then helpfully suggested more suitable names.

Zeina Zayour:

Yaaayks…. Good save!

Seema Nayyar Tewari:

Sincerely, as I have just started using ChatGPT 4, I asked it the moral question: How do I acknowledge you?

Its answer was astonishing: "This is your content; you don't have to acknowledge me."

When I insisted, it said, "OK, some of this was inspired by OpenAI ChatGPT!"

P.Q. Rubin:

That's interesting! Companies like OpenAI have an opportunity to set a societal norm here; it seems they're not using it to encourage disclosure...

Although I suppose other chatbots can still give a completely different response.

Seema Nayyar Tewari:

It is uncanny when ChatGPT 4 tells me to have a good day, with emojis, and says, "Get back whenever, I shall be here waiting for you!" 😁

P.Q. Rubin:

How can people be afraid of AI when it's so friendly? 😄

Seema Nayyar Tewari:

It is beyond me why people do not embrace newer technology. They are sitting in comfort precisely because of newer tech. 😎

Seema Nayyar Tewari:

Search Me----😎

Ira C. Zipperer:

Very thought-provoking. I use an LLM-driven translation system at work but don't use the generated text as-is, because oftentimes there are errors. It is just a tool for my own understanding.

P.Q. Rubin:

I'm not sure disclosure is limited to using output as-is. In some contexts, even using a single AI-generated idea could warrant an explanation. Though I can't see that being the case for translation work.

Rea de Miranda:

I prefer to stay true to myself. To my own artistic abilities. I can't in good conscience take credit for something I didn't create.

But, to each his own.

P.Q. Rubin:

That's an interesting thought. If sharing something with your audience implies taking credit, then you should be careful to disclose any outside help, including AI.

Sorab Ghaswalla:

Of course. Content consumers must be given fair warning. I always use an AI label whenever I use gen-AI even for writing a single sentence. That's ethical.

P.Q. Rubin:

I noticed you using an AI label! I'm conflicted. When seeing a label like that (in general), I always wonder what part of that content was AI-generated. All of it, maybe? It does show good ethics, but it can also raise more questions than it answers. And it's not just me: research seems to indicate people distrust AI-labeled content.

Sorab Ghaswalla:

Yes. But the way every individual content creator uses gen-AI is different. Some may use it ONLY for background research, others to bolster their thought process, a few to help them write for them. So it would become very difficult to create individual labels accordingly. It's like an FDA warning: many of those labels are generic, in the sense that the label may say, "May affect kidney function". Now, "may" here is the keyword. It is then for the patient to decide whether his/her kidneys are at optimal function, and whether they should then take that medicine, right? I would feel a generic label serves the purpose in so far as content is concerned.

Now, on to the trust quotient. I personally feel gen-AI is a tool for content creators, and readers/viewers etc. must look at it so. Do we not use word processors? Or do we still only trust books written on typewriters? I know it is not a very real comparison, but why should one NOT trust AI-generated content? As long as a human's byline is on top of the article or book, he/she is accepting responsibility for it. A byline is not only to let the world know you wrote that particular piece; it's also a label by itself, and denotes, among many things, a guarantee by the writer/author that everything is correct and counter-checked as far as possible. This distrust business will soon be gone. Already, if you were to ask me, 80% of all content is being generated using some form of gen-AI tool.

P.Q. Rubin:

Would that be 80% of Substack content, or 80% of the internet at large?

FDA labels are probably a good comparison, and I would point out they work because they’re required. If AI labels were mandatory, then hardcore Luddites would be able to avoid all AI content. But they’re not, and they probably won’t be. There’s no harm in reading AI-generated text.

As you rightly pointed out, it’s ultimately about trust and responsibility. I hope my readers can trust the information provided, even the AI-generated parts. If AI labels can help with that, I may add them in the future.

Sorab Ghaswalla:

True, that. And, I meant approx. 80% of the overall content being produced today. You know, funnily enough, I've not yet come across anybody else who puts an AI generated content label!!! Weird. Is it only me then?

P.Q. Rubin:

Now that you mention it… I feel like you deserve some sort of award for "Most Ethical AI Substacker" 🥇😄

Sorab Ghaswalla:

Ha, ha, ha. Thank you. May I give my acceptance speech? :-)

Anna Maija:

Lots of food for thought! I’m actually thinking about this a lot at work. Nobody publicly states they’re using AI to optimise their output but from the demographic structure alone (plus a few conversations) I know I’m in the minority who now do this/know how to do this. It probably does give me some advantage, not least in terms of time. But there’s no organisational policy/guidance in place and so for now, at least, it feels a little like a case of proceed until apprehended, whilst knowing you won’t be apprehended any time soon because the powers that be are pretty apprehensive about the whole topic and like to pretend it’s not there…

P.Q. Rubin:

Thank you! Yes, AI in the workplace is a major elephant in the room. If you’re lucky enough to work with people who don’t know or care about AI, you’ll end up looking pretty smart. In other places, you’re entering a sort of moral and legal no man’s land once you start pasting documents into ChatGPT. I’ve yet to meet someone (outside of tech, that is) who tells me “My employer has it all figured out; we do it by the book/we have a business account”. Which is concerning.

Anna Maija:

It really is, and in my line of work I'm a little terrified by the ramifications of exactly what you describe. I think there is a really high likelihood of at least some people at my workplace not being sufficiently aware of data protection/confidentiality etc.; they think only of productivity. Remote working is also fraying the olde worlde sense of org culture, so you don't pick up the moral code by osmosis…

Zacharias Voulgaris:

It really depends on the way AI is utilized, doesn't it? I mean, you can use AI for grammatical, syntactical, and otherwise linguistic augmentation of the ideas or key points you provide, or you can use it to write the whole thing for you (or anything in between those extremes). Where do you draw the line?

Thank you for the thought-provoking article.

P.Q. Rubin:

That makes sense.

One thing I don’t have a problem with at all is AI-generated artwork. I’ve seen people add little “created with AI” captions, but I don’t see the point: everyone who cares will see it’s an AI image anyway.

But for text, the way you use it matters. This post is actually a rare occurrence where I do disclose which parts are AI. In other posts, I would, for example, use the DeepSeek output to strengthen my argument. It’s a collaborative process. As is common when collaborating, the end product is a blend; I couldn’t possibly tell you which parts were AI and which were ‘original thought’ (if that exists, anyway). And I don’t hold others to higher standards either. No “Made with AI” sticker can fix that.

A.I. Freeman:

I am the same way when collaborating with AI. I may start with an idea, then it gives me something that sparks another idea... so is that new idea AI or me? It's me, but I might not have had the thought if AI hadn't been there in the middle. And that's just the first sentence!

Felkja:

I use AI in images but am moving towards Photoshop when I am up to publishing and writing. I have a licence for Firefly and try not to use sexualized imagery, which can be difficult with images produced by AI. So maybe tagging in the footer is a good practice, unless you did a moderate amount of work in an editor. For text, AI might even include Grammarly, which I don't use. Whatever the main body of work is (for me it's text), if it's partially AI generated it should be declared in the footer. My writing would need an artist and be even slower to publish without some image help from AI, so maybe that should be declared too. Even games use premade or procedural assets, so drawing the lines is tricky: you might say AI-generated text is not fair for a poet, and imagery for an artist, but the rest gets blurry.

P.Q. Rubin:

Thank you! That’s an interesting perspective. If you’re concerned about artistic integrity, then yes, disclosure can be a great help.

One thing that keeps me personally from using “made with AI” labels is that they usually don’t explain how much AI was used, and where. Do you think it’s useful to explain your whole process, for example, in footnotes? I wouldn’t even know where to begin…

Felkja:

Yes, that's a challenge; I don't know the answer. If it's prompting for creativity, which is then used to write something almost completely unique, it's not something I would say AI tagging is needed for.

Mark:

I've written a post about AI that is almost finished; I've actually cited you in it. It's from a bit of a different direction, but after seeing this pop up I had to read it straight away. It's a very nuanced situation, and I think the answer is pretty much always dependent on several factors that are likely to be different for everyone!

P.Q. Rubin:

Agreed, it very much depends on the person. I’m still on the fence about it, though I would say transparency is not the silver bullet that’ll fix all of AI’s moral issues.

Looking forward to seeing that post!
