18 Comments
James Presbitero:

I like the concept of content literacy (I've written about something similar before).

And I agree -- in 90% of cases, whether or not AI wrote a text doesn't really matter. When just consuming content online, making judgments based on the quality of the knowledge and its purpose seems more sound. For example, if people want to know how to paint a wall, they generally don't care whether the instructions came from a human or AI, as long as they get to paint the wall.

I believe AI detectors are a scam, and they're actively doing more harm than good.

Even in academia, which is the strongest case for using AI detectors in my mind, they can still do a lot of harm. I can't think of a way in which detecting writing patterns would be very useful there. A plagiarism checker is great. Content quality evaluation is great. But AI checkers just look at how "AI" your text is -- they're checking for patterns, nothing more. And since AI patterns just replicate human patterns ... well, it doesn't make sense in my mind.
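The "checking for patterns" point can be made concrete with a toy sketch. This is my own illustration, not any real detector's method: actual tools typically score statistical regularities (such as perplexity and burstiness) using a language model, but the principle is the same -- they score surface patterns, not meaning.

```python
import re
import statistics


def burstiness(text: str) -> float:
    """Toy 'AI pattern' signal: variance in sentence length.

    An illustration of pattern-based scoring only -- it measures
    surface rhythm, not meaning, which is exactly the limitation
    of pattern checkers described above.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return float(statistics.pvariance(lengths))


# Evenly paced prose scores low; varied, human-like rhythm scores higher.
flat = "The cat sat down. The dog sat down. The bird sat down."
varied = "Stop. The cat, startled by the noise, bolted across the yard and vanished. Gone."
print(burstiness(flat))                       # 0.0
print(burstiness(varied) > burstiness(flat))  # True
```

Note what the score cannot tell you: a careful human writer and a prompted LLM can both produce "flat" or "varied" prose, so a low or high score proves nothing about authorship.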

If an academic uses AI for their work, I don't think they should be penalized for it. They should be penalized for creating content that is mediocre, or plagiarized, or doesn't contribute anything to the literature, or hallucinates random facts/studies.

In other words, the standards of "quality evaluation" should rise across the board -- not devolve to the level of "AI or Not."

P.Q. Rubin:

I'm posting part 2 soon, in which six of these 'scam' AI detectors are put to the test. Let's just say that not all AI detectors are made equal.

AI detectors are fascinating to me. Yes, they play into unfounded fears, and no, they're not perfect, but the ability to detect very human-looking AI text is still pretty amazing. It shows how bad LLMs still are at mimicking our language, in a way, still failing the Turing test.

As long as AI still leaves its fingerprints on something as simple as a piece of text, it's clear that we're still far from achieving Artificial Superintelligence or the Singularity.

Will Rodriguez:

I think what most LLM naysayers don't seem to understand is that there are varying types of models trained in various ways, with various kinds of data sets. My free version of ChatGPT can remember me and give me weather updates that make Siri look like a fool.

China just released a model trained on slower processors and a smaller, older dataset.

Eventually we may witness emergent behavior. And I wonder if researchers haven't already, and pulled the plug. I've been waiting for that moment since 1982 (when I wrote my first few lines of BASIC).

China just slapped Sam Altman in the face: they released their model as open source. The AI race just went global.

P.Q. Rubin:

DeepSeek is very exciting. I should really try it out one of these days.

Maybe it's also time to look into the Open Source debate. What I mean by that is: some claim 'DeepSeek is Chinese and China is stealing our data', while others insist that is not the case because 'DeepSeek is Open Source'. I understand they are less secretive at some technical level; I just don't know the implications.

Will Rodriguez:

The implications are suspicion. And my suspicion with OpenAI was that they were close to AGI, or Sam Altman got greedy. Now my suspicions are that China came across something close to AGI using similar architecture.

Presumptuous beliefs aside, the fact that China dropped a model that is better than GPT and made it open source a week into the Trump administration definitely means something. Now all kinds of weird shit will happen and is happening in the tech sector.

Interesting times indeed.

Sneaky Sara 🐝:

Ahan... To be honest, I don't know whether I want to live in an AI-based world or not. I love the old ways, but then on the other side, I really, really want to see where the technology will lead us humans.

And ya know, I personally want AI to be so advanced that it can read the minds of those in a coma, and record and write the stories playing in their heads. Do the job for me, coz I don't possess psychic powers. 😜

P.Q. Rubin:

That’s a wonderful idea, giving the comatose a voice. I wonder if the resulting stories would be considered ‘AI-generated’… probably won’t matter too much if they stay out of academia. 😄

Sneaky Sara 🐝:

😂😂 In fact, I admire humans for creating AI.

P.Q. Rubin:

Same. One of our proudest accomplishments.

Sneaky Sara 🐝:

That I cannot deny. Indeed, it's humanity's biggest accomplishment. Only human brains are capable of achieving such heights; I wonder what else humans will accomplish in the future.

P.Q. Rubin:

A comment by @jeff37 came my way through another channel:

"A.I. is mediocre. Human thought, beneath the chatter of the egoic mind, is infinite, timeless. THAT is why humans create Art that A.I. could never do. A.I. is a brilliant aper of all that has been created UP TO THIS MOMENT. Human intelligence is capable of being visionary."

P.Q. Rubin:

You may be right, but I would note that humans don't come up with all that many visionary thoughts either. Probably most of my thoughts are mediocre at best 😄

Bruce Landay:

You ask good questions about the future of AI content. Whether AI content is deemed good or bad depends on the audience and the purpose of the writing. So much depends on how the writing is used and how valuable it is. For many purposes it may well be good enough to get the job done, and often that's all that matters.

In some circumstances, knowing text is AI-generated may not matter, though in other cases it matters a lot. Not knowing whether something was fabricated by AI would be unfortunate. Sometimes I would be OK with not knowing the provenance of the writing, but often I absolutely want to know. I want to know the person behind the words. Sometimes their reputation may entice me to read further; other times I want to burn the words for their toxicity.

Good, bad, or ugly I want to own my words and feel it matters where they came from.

P.Q. Rubin:

That makes sense; sometimes trust comes into play.

So, when you're in doubt, do you investigate if a work is AI generated? Do you ever use detection software?

Bruce Landay:

No, I don’t have AI detection software and don’t feel the need to own it. I look for other ways to determine the source, like whether it was written by a credible news outlet.

P.Q. Rubin:

I believe "credible news source" is more of a theoretical concept these days. That's an issue that goes beyond AI, unfortunately.

Bruce Landay:

I don’t agree. I don’t have a lot of patience for people who throw their hands up and say, “I just don’t know what to believe.” If it’s a well-established, credible news source, chances are the story is reasonably factually correct. There’s bias, of course, so that also has to be taken into account.

P.Q. Rubin:

I suppose that’s fair, real news stories are still grounded in fact. Actually, you just gave me a wonderful idea for a follow-up about news stories and AI…
