AI’s hilarious three-legged problem is a teachable moment.

I spent some time in Sri Lanka over the summer and ran into someone whose company has begun using AI; everyone I met seemed to be in love with OpenAI. Before I could get on my hobby horse about plagiarism and cheap hallucinations, he noted that they were very clear about ‘fencing off’ the content they feed the algorithm, so that it won’t run amok. That’s a more sensible approach to take on this journey.

However, the chaotic AI arms race is on even here, with businesses greenlighting AI. One post talks up the “plethora of new job opportunities,” while several groups are popping up to get their arms around this. Dr. Sanjana Hattotuwa is one of the few who urges everyone to slow down and consider how generative AI and LLMs might influence such things as ‘truth decay.’ He calls for a ‘regulatory sandbox’ approach.

PLAGIARISM? This is more than a copy/paste problem. Consider voice theft. You may have heard about Scarlett Johansson’s legal dispute with OpenAI for allegedly using her voice. (For a deep dive, read WIRED, which covers the legal implications well.) It’s an intellectual property violation: ‘cloning’ someone’s voice without permission. Sure, young people will always be infatuated with algorithms that could produce anything (essays, books, graphics, software, presentations, videos, etc.) in a few clicks. If we don’t address this, I told my friend, the problems will go deeper than plagiarism.

Young people will begin to believe that they can’t be as ‘creative’ as the tool, and over time, give up on ideation. Slaves to the tool, we would be encouraging them to outsource everything. First, because they can. Second, because they will be unable to do what they were once capable of doing. Remember how we once knew every phone number of our friends and cousins? What made our brains do that?

HILARIOUS THREE-LEGGED PROBLEM. Before we closed for summer, my students were experimenting with AI images in Bing, now an AI-powered search engine. They also discovered that Pixlr, the Photoshop-like tool we use in class, had a similar option. But here’s what they ran into:

Take a look at these images used by some students for an eBook assignment I gave them. (I’ve written about how each semester my 7th graders come up with about 125 books.)

Exhibit A: 

Spot anything odd about this cover? It’s not just the plastic-like muscles.

I call this the three-legged football player problem. The OpenAI tool in Bing goes off the rails at times. But instead of being annoyed with the outcome, I savored the moment. A wonderful opportunity to teach visual literacy.

Many things about this picture are wrong. Yes, the muscles look plastic, or at least over-Photoshopped. The gold marker is out of place. But the legs? There’s a third one popping out of his shoulder!

Exhibit B: 

This student’s eBook was about an off-duty soldier in a war set in the future. Notice anything weird here?

Yeah, the gloves. Looks like they came from Home Depot! Anything else? Check the flag on his shoulder.


On May 14th, I presented a similar topic at a TED Talk-like event I had put together at my school. (We called it BEN Talks, a nod to Benjamin Franklin High School.) My point was that the elephant in the room today is not even an elephant. It’s a parrot: the ‘stochastic parrot’ that researcher Emily Bender and others warned us about. It’s luring us down a dangerous path, and it will pose a huge threat.


I remember a time when we were fed the hype of how the ‘Internet of Things’ (IoT) would rescue us. (I have to admit, I swallowed that as well.) According to this glowing theory, a malfunctioning part on an aircraft making a long-distance flight would ‘communicate’ with its destination, so that technicians would be ready to replace it when the plane lands. (The pilot would not even know; the ‘things’ would talk to each other.) IoT is here, but fortunately it’s not for every thing. Your fairy lights can talk to the Bluetooth speaker, for all I care. But spare me the Apple Watch that can tell my fridge what I need to cook for dinner because of some health condition it tracks through my skin.

If IoT was supposed to make our lives safer and more convenient, how come a window of a Boeing aircraft could come loose and drop out of the sky, without any warning? Why didn’t the loose nuts text-message the wizards at Boeing to tell them so?

We were sold on some misleading, overhyped ‘intelligence,’ and no one dared question it. If you did, you were a fringe Luddite who needed to be voted off the island. I’m sorry, but I got to this island because there was a pilot and a co-pilot on board, not some aviation algorithm.


If you’re a student, you’ll love this.

On a related note, I support a writers’ website, Write The World, which encourages young people to share their creative writing. Here’s one submission by a freshman at a high school.

Powerful poetry about AI’s ‘knowledge.’

So what do we do, besides write poetry and articles bemoaning the awkward, overhyped pathway down which we are being led? I think we should join the resistance to three-legged athletes, put on proper gloves, and take on the tech bros feeding us this pipe dream. There are more urgent, humanitarian needs that could be addressed through technology.

2023: The year AI gatecrashed our party. (Try getting the confetti out of your hair.)

Not to alarm you, but this year the ‘Doomsday Clock’ was moved forward to… 90 seconds from midnight. Ten seconds closer than last year.

Speaking of timing, in the next eight minutes of your time I will focus on just four topics as we close the year: newspapers, AI, social media reforms, and a ‘bookshelf’ for my student authors.

Writers and page editors of our student newspaper.

Focus# 1: News

If a newspaper falls in the forest, will anyone read the 12-point Times New Roman fine print before it turns to compost?

Why newspapers? Think of news as the blood corpuscles that keep all other functions of society running. From my rudimentary knowledge of biology: just as red cells transport oxygen, the information that surges through our systems keeps us ticking. We who scrape our news off apps tend to forget that news is (still) produced by journalists who don’t work for free. Just because their stories show up in our feeds for free doesn’t make them free to produce. Someone’s got to pay a salary to the fellow who walks the street, sits in at the courthouse with a notepad, presses the politician for comment, talks to a whistleblower in a dark parking garage, fact-checks the press release that is 80% BS, writes up the story or script, works with the sub-editor, and produces the story that hits Google News a few hours before it even lands on the newspaper rack in dawn’s early light.

And still, we insult ‘the media’ as if it were some sweatshop. We tend to give Amazon a pass for listing crappy foreign-made products with fake reviews, but we attack the press as if it were one gargantuan cabal run by Warren Buffett.

I say this because I try to teach students ‘media’ and journalism in its many amorphous forms. I teach them how to write stories, interview subjects, fact check, and do their homework on an interviewee before they get five minutes of her time. Then, they must take their notes and craft the story in a way that someone may read and be enlightened. If we don’t preserve storytelling and story craft at a young age, we may end up with the journalism we fear we have. We may be overrun by the meme makers, the conspiracy theory factories that quote fake doctors and researchers, the angry consumers of TikTok headlines who don’t care who wrote the story, nor care to read beyond paragraph one because an influencer had a sexier take on it.

Without news we may end up with…deoxygenated blood that shuts down our vitals. (News, like leukocytes, also gives us immunity but that’s another topic.)

Despite this, it’s the toxic stuff that rules. The phrase “I saw it somewhere on the Internet” turns more heads than “I read the full report.” (If you’re over 50, you know that “I saw it on Facebook” carries even more gravitas, and gets more shares.) While Facebook ‘news’ wanes, TikTok news spreads like wildfire. Some think it’s not the enemy of journalism.

Fun Fact: Journalists back in the day referred to a tiktok as a short snippet of a story.


Focus# 2: A.I.

It’s barely a year since AI showed up at our door with a funny hat, uninvited. But what it slipped into the punch bowl has had many side effects. We have learned very quickly that AI is prone to ‘hallucinations.’ Yeah! What they mean by hallucinations is this: when the data fed into the machines is biased or too complex, and the machines cannot recognize patterns in the ‘unseen data’ they gulp down, they make things up. For instance, Google’s chatbot, Bard (the also-ran in the ChatGPT arms race), incorrectly claimed that the James Webb Space Telescope took the world’s first images of a planet outside our solar system. I’ve conducted my own quiet experiments with ChatGPT and Bard, and have been spectacularly disappointed. I’m still open to seeing how we could someday use it as a tool, just as we use Wikipedia, despite the bad mojo it had when it first appeared in 2001.

Are you OK with the fact that the machines were trained on language patterns stolen from the Internet: blog sites, Wikipedia, Amazon reviews, books, etc.? Singers and songwriters (any Ed Sheeran fans?) get sued when a line from a song seems like copyright infringement,[1] but we give a pass to machines. Why? What we once called crowdsourcing and plagiarism is now considered ‘Generative AI.’ Interestingly, the intelligence gleaned from a “human crowd” is sometimes considered better because it increases the range of ideas compared to LLMs.[2] But few seem to care, punch-drunk, genuflecting at the altar of OpenAI, going “oooh, aaah!” Even if they care, there’s no way to break up the party.

And then there was the recent mutiny in the OpenAI organization, over a purported discovery of something internally called Q* that employees feared could threaten humanity (or so the report goes). Enough to make the folks who control the Doomsday Clock jittery!


Focus# 3: Social Media Reforms With Teeth

Here’s the one optimistic story I’ve come across about social media. Remember the movie The Social Dilemma on Netflix? Some of the folks involved in revealing how algorithms mess with our brains came up with a ‘reform’ document with tangible, workable fixes for the platforms. There is a large body of evidence from several countries that social media is harming teens. So they came up with something called the Age Appropriate Design Code (AADC), which asks online platforms to design their services with the best interests of children in mind. The UK’s Information Commissioner’s Office offers a good model.[3]

The code focuses on many factors, such as changing default settings, restricting data sharing, prohibiting ‘nudging’ techniques, parental controls, and much more. Many states have introduced similar bills.[4]


Focus# 4: A ‘Bookshelf’ for my Student Authors.

It’s that time of year when my students write, design, format, edit, and publish their eBooks. It’s a ‘summative’ proof of all they’ve mastered. They love it (after a week of panicking)! Topics range from history and scary YA fiction (lots of these!) to nature, sports, family values, and fantasy. There are always surprising topics, like this book, a guide for first-time ‘Aquarists.’

This semester, I switched to FLIPHTML5, one heck of a portal that lets me set up bookshelves for each class. The one above is my 1st Period class.

Why do they still love books in the age of gamification, social media distraction and AI? I have my own reasons. Which is why I love teaching this in a class that used to be a ‘keyboarding’ class.


In the spirit of wishing you a happy new year, let me leave you with something on a lighter note.

Forget the Chinese balloon that drifted into our airspace this year. Something else was shot down. Words!

  • Earlier this year, publishers of Roald Dahl’s books (Charlie and the Chocolate Factory, James and the Giant Peach, etc.), in a fit of political correctness, said they would publish some of his books with ‘offensive language’ (words like fat and ugly) replaced.
  • Vivek Ramaswamy, in a rush to get to the Oval Office, called TikTok “digital fentanyl” even though he has a presence on the platform.
  • Merriam-Webster’s pick for ‘word of the year’ was the letter X, after it became a replacement for Twitter, which was laid to rest. Runners-up were ‘meta’ and ‘chat.’

But wait! One of these stories is not true. Your challenge is to guess which one. Or go ask your favorite AI app, and see if it can do better than you.

Thank you for reading this far, and for subscribing. Have a wonderful Christmas, and here’s looking forward to 2024. Please check out my new podcast, Wide Angle.


Footnotes – Just in case you want to be sure I did not get AI to write this newsletter:

  1. Ed Sheeran’s case in which he, Warner Music, and Sony Music were sued in 2017. The claim was about “Thinking Out Loud.” He won the case. https://www.reuters.com/legal/lets-get-it-on-songwriters-estate-drops-ed-sheeran-copyright-verdict-appeal-2023-09-21/
  2. “The Crowdless Future? How Generative AI Is Shaping the Future of Human Crowdsourcing.” Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. Harvard Business School, 2023.
  3. UK’s ‘Standards of age appropriate design’ https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/1-best-interests-of-the-child/
  4. Oregon, SB 196 – a bill to get platform owners to act like adults, is just one of them. https://olis.oregonlegislature.gov/liz/2023r1/Downloads/MeasureDocument/SB196/Introduced

For now, AI is more hype than substance.

There’s Human Intelligence, and the Artificial kind. I wasn’t taken in by the recent bluster about AI, which arrived in 2022 all dressed up, but wearing flip-flops. Somehow there was a mismatch between its promise and what it delivers.

I did give it a try, however. Just like I once wandered into ‘Second Life’ slightly skeptical. Is this real, I wondered. Are we there yet?

1. AI ART – THE LOW-HANGING FRUIT WITH WEIRD, FUZZY SKIN

I had checked out the app called Starryai (which I wrote about in a Substack newsletter). So, for my second attempt, I called up the algorithms on Dall.E to see if this fancy-pants tool could design a magazine cover. Like WIRED.

The prompt that I typed into Dall.E was: “WIRED magazine cover with Dall.E.”

Could it ‘design’ the cover of a tech magazine, using itself (Dall.E) in the title? Was it capable of reflecting on itself?

I was marginally impressed. Marginally. In other words, not terribly. Sure, the graphics were overly arty, as WIRED occasionally tries to be. Dall.E gets the look right, but the details are so bloody amateurish, even clumsy. It doesn’t seem to handle white space, or understand how to mimic a masthead. The fonts are a joke!

2. AI WRITING – NOTHING TO WRITE HOME ABOUT

I teach creative writing in all my classes. Naturally, I’ve been intrigued, and even alarmed, by the talk about how AI could write like a human. Many people are hailing this as the death knell for flesh-and-blood writers, journalists, etc. Some tear their hair out about plagiarism in schools.

 The Nieman Lab is a bit more circumspect:

“While ChatGPT won’t win any journalism awards (at least for now), it can certainly automate much of the long tail of content on the internet.” — Nieman Lab, Predictions for Journalism in 2023

I checked out ChatGPT, the application from OpenAI that some people have told me can write fairly convincing content. I was suspicious. I had read a piece about this by marketing writer Mitch Joel. To check how smart this AI could be, I typed in this snarky prompt: “Is Mitch Joel right about AI platforms.”

I wanted to see if this ghost in the machine was savvy enough to pick up his argument and reference it. As I guessed, it didn’t live up to my expectations. In fact, the software apologized for its inability to do more than explain what Mitch does for a living, and went on to explain how these are still early days! (Brownie points, OpenAI, for admitting you don’t know what you don’t know.) While it got the paragraphs and punctuation right, the second ‘graph was a doozy. Like a lazy copywriter churning out garbage just to fill a layout to impress a client.

The website sets our expectations, in fact, saying things like, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Hmm!

Having said that, others are raving about AI content generators like Jasper. It’s supposed to be a boon for copywriting, social media posts, and SEO content.

HERE’S MY TAKE ON AI. Content creators of the world (authors, journalists, copywriters, podcasters) shouldn’t feel threatened. For now. Good copywriters don’t sit at a desk stringing clichés to adjectives. They walk the factory floor, sit through plans-board meetings, and argue with brand managers before the concept emerges. Translated: they produce content, rather than regurgitate it. Translated again: the fruits of AI are tempting but aren’t ready to pluck. Even for students. Low-hanging fruit: tempting but bland. Sometimes filled with bugs.

ChatGPT says it is addressing this. That’s like Samuel Bankman-Fried declaring he is making sure there aren’t any more crypto scams.

Are we concerned? As teachers, yes. Plagiarism is something no school takes lightly, if only because we want students to discover the value of originality, and creativity. It’s what will benefit them in any career. How about you?