AI’s hilarious three-legged problem is a teachable moment.

I spent some time in Sri Lanka over the summer and ran into someone whose company has begun using AI; everyone I met seemed to be in love with OpenAI. Before I could get on my hobby horse about plagiarism and cheap hallucinations, he noted that they were very clear about ‘fencing off’ the content they feed the algorithm, so that it won’t run amok. That’s a more sensible approach to take on this journey.

However, the chaotic AI arms race is on even here, with AI being greenlighted by businesses. One post talks up the “plethora of new job opportunities,” while several groups are popping up to get their arms around the technology. Dr. Sanjana Hattotuwa is one of the few urging everyone to slow down and consider how generative AI and LLMs might influence things like ‘truth decay.’ He calls for a ‘regulatory sandbox’ approach.

PLAGIARISM? This is more than a copy/paste problem. Consider voice theft. You may have heard about the legal action Scarlett Johansson threatened against OpenAI for allegedly using her voice. (For a deep dive, read WIRED, which covers the legal implications well.) ‘Cloning’ someone’s voice without permission is an intellectual property violation. Sure, young people will always be infatuated with algorithms that can produce anything (essays, books, graphics, software, presentations, videos, etc.) in a few clicks. If we don’t address this, I told my friend, the problems will go deeper than plagiarism.

Young people will begin to believe that they can’t be as ‘creative’ as the tool and, over time, give up on ideation. They will become slaves to the tool, and we will be encouraging them to outsource everything. First, because they can. Second, because they will no longer be able to do what they were once capable of doing. Remember how we once knew every phone number of our friends and cousins? What made our brains do that?

HILARIOUS THREE-LEGGED PROBLEM. Before we closed for summer, my students were experimenting with AI images in Bing, now an AI-powered search engine. They also discovered that the Photoshop-like tool we use in class, Pixlr, has a similar option. But here’s what they ran into:

Take a look at these images used by some students for an eBook assignment I gave them. (I’ve written about how each semester my 7th graders come up with about 125 books.)

Exhibit A: 

Spot anything odd about this cover? It’s not just the plastic-like muscles.

I call this the three-legged football player problem. The OpenAI tool in Bing goes off the rails at times. But instead of being annoyed with the outcome, I savored the moment. A wonderful opportunity to teach visual literacy.

Many things about this picture are wrong. Yes, the muscles look plastic, or at least over-Photoshopped. The gold marker is out of place. But the legs? There’s a third one popping out of his shoulder!

Exhibit B: 

This student’s eBook was about an off-duty soldier in a war set in the future. Notice anything weird here?

Yeah, the gloves. Looks like they came from Home Depot! Anything else? Check the flag on his shoulder.


On May 14th, I presented a similar topic at a TED Talk-like event I had put together at my school. (We called it BEN Talks, since we are Benjamin Franklin High School.) My point was that the elephant in the room today is not even an elephant. It’s a parrot: the ‘stochastic parrot’ that researcher Emily Bender and others warned us about. It’s luring us down a dangerous path, and it will pose a huge threat.


I remember a time when we were fed the hype of how the ‘Internet of Things’ (IoT) would rescue us. (Have to admit, I swallowed that as well.) According to this glowing theory, a malfunctioning part on an aircraft making a long-distance flight would ‘communicate’ with its destination, so that technicians would be ready to replace it when the plane lands. (The pilot would not even know; the ‘things’ would talk to each other.) IoT is here, but fortunately it’s not for every-thing. Your fairy lights can talk to the Bluetooth speaker, for all I care. But spare me the Apple Watch that can tell my fridge what I need to cook for dinner because of some health condition it tracks through my skin.
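For what it’s worth, the core of that glowing theory is simple enough to sketch. Here is a minimal, purely hypothetical Python sketch of a part ‘phoning ahead’: the part ID, the threshold, and the send_alert stub are all invented for illustration, not how any real avionics system works.

```python
import json
from datetime import datetime, timezone

# Hypothetical threshold, for illustration only; real avionics fault
# detection is far more sophisticated than a single cutoff value.
VIBRATION_LIMIT_MM_S = 4.0

def send_alert(payload: dict) -> None:
    """Stand-in for whatever data link the aircraft would actually use.
    Here we simply print the message a ground crew might receive."""
    print(json.dumps(payload, indent=2))

def check_part(part_id: str, vibration_mm_s: float, destination: str) -> None:
    """If a reading drifts out of range mid-flight, notify the destination
    airport so technicians can stage a replacement before the plane lands."""
    if vibration_mm_s > VIBRATION_LIMIT_MM_S:
        send_alert({
            "part_id": part_id,
            "vibration_mm_s": vibration_mm_s,
            "destination": destination,
            "reported_at": datetime.now(timezone.utc).isoformat(),
            "action": "stage replacement part and technician at the gate",
        })

# Simulated in-flight reading from a made-up hydraulic pump
check_part("hyd-pump-2", vibration_mm_s=5.3, destination="CMB")
```

The toy version is trivial; the hard part, as the Boeing episode below suggests, is everything around it.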

If IoT was supposed to make our lives safer and more convenient, how come a window panel of a Boeing aircraft could come loose and drop out of the sky without any warning? Why didn’t the loose nuts text-message the wizards at Boeing to tell them so?

We were sold on some misleading, overhyped ‘intelligence,’ and no one dared question it. If you did, you were a fringe Luddite who needed to be voted off the island. I’m sorry, but I got to this island because there was a pilot and a co-pilot on board, not some aviation algorithm.


If you’re a student, you’ll love this.

On a related note, I support a writers’ website, Write The World, which encourages young people to share their creative writing. Here’s one submission by a high school freshman.

Powerful poetry about AI’s ‘knowledge.’

So what do we do, besides write poetry and articles bemoaning the awkward, overhyped path down which we are being led? I think we should join the resistance to three-legged athletes, put on proper gloves, and take on the tech bros feeding us this pipe dream. There are more urgent, humanitarian needs that could be addressed through technology.

2023: The year AI gatecrashed our party. (Try getting the confetti out of your hair.)

Not to alarm you, but this year the ‘Doomsday Clock’ was moved forward to… 90 seconds from midnight. That’s ten seconds closer than last year.

Speaking of timing, in the next eight minutes of your time I will focus on just four topics as we close the year: news, AI, social media reforms, and my students’ eBooks.

Writers and page editors of our student newspaper.

Focus# 1: News

If a newspaper falls in the forest, will anyone read the 12-point Times New Roman fine print before it turns to compost?

Why newspapers? Think of news as the blood corpuscles that keep all other functions of society running. From my rudimentary knowledge of biology: like the red cells that transport oxygen, the information that surges through our systems keeps us ticking. We who scrape our news off apps tend to forget that news is (still) produced by journalists who don’t work for free. Just because their stories show up in our feeds for free doesn’t mean they were free to produce. Someone’s got to pay a salary to the fellow who walks the street, sits in at the courthouse with a notepad, presses the politician for comment, talks to a whistleblower in a dark parking garage, fact-checks the press release that is 80% BS, writes up the story or script, works with the sub-editor, and produces the story that hits Google News a few hours before it even lands on the newspaper rack in dawn’s early light.

And still, we insult ‘the media’ as if it were some sweatshop. We tend to give Amazon a pass for listing crappy foreign-made products with fake reviews, but we attack the press as if it were one gargantuan cabal run by Warren Buffett.

I say this because I try to teach students ‘media’ and journalism in their many amorphous forms. I teach them how to write stories, interview subjects, fact-check, and do their homework on an interviewee before they get five minutes of her time. Then they must take their notes and craft the story in a way that someone might read and be enlightened by. If we don’t preserve storytelling and story craft at a young age, we may end up with the journalism we fear we already have. We may be overrun by the meme makers, the conspiracy-theory factories that quote fake doctors and researchers, and the angry consumers of TikTok headlines who don’t care who wrote the story, nor care to read beyond paragraph one because an influencer had a sexier take on it.

Without news we may end up with…deoxygenated blood that shuts down our vitals. (News, like leukocytes, also gives us immunity, but that’s another topic.)

Despite this, it’s the toxic stuff that rules. The phrase “I saw it somewhere on the Internet” turns more heads than “I read the full report.” (If you’re over 50, you know that “I saw it on Facebook” carries even more gravitas, and gets more shares.) While Facebook ‘news’ wanes, TikTok news spreads like wildfire. Some think it’s not the enemy of journalism.

Fun fact: Journalists back in the day referred to a ‘tick-tock’ as a blow-by-blow account of a story.


Focus# 2: A.I.

It’s barely a year since AI showed up at our door with a funny hat, uninvited. But what it slipped into the punch bowl has had many side effects. We have learned very quickly that AI is prone to ‘hallucinations.’ Yeah! What they mean by hallucinations is confident but false output: when the data fed into the machines is biased or too complex, or when the model cannot recognize real patterns in the ‘unseen data’ it gulps down, it simply makes things up. For instance, Google’s chatbot, Bard (the also-ran in the ChatGPT arms race), incorrectly claimed that the James Webb Space Telescope took the world’s first images of a planet outside our solar system. I’ve conducted my own quiet experiments with ChatGPT and Bard, and have been spectacularly disappointed. I’m still open to seeing how we could someday use it as a tool, just as we use Wikipedia, despite the bad mojo it had when it first appeared in 2001.
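For anyone who wants to run a similar quiet experiment, here is a minimal sketch in Python. It assumes the openai package and an API key in your environment; the model name and the question are placeholders, and you still have to check the answer against a source you trust.

```python
# A minimal sketch of a "quiet experiment": ask the chatbot a question you
# already know the answer to, then verify the reply yourself.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Which telescope took the first image of a planet outside our solar system?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
# Compare the reply with a source you trust. A fluent, confident answer
# is not the same thing as a correct one.
```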

Are you OK with the fact that machines were trained on language patterns stolen from the Internet: blog sites, Wikipedia, Amazon reviews, books, etc.? Singers and songwriters (any Ed Sheeran fans?) get sued when a line of a song seems to infringe copyright,[1] but we give a pass to machines. Why? What we once called crowdsourcing and plagiarism is now considered ‘Generative AI.’ Interestingly, the intelligence gleaned from a “human crowd” is sometimes considered better, because it increases the range of ideas compared to LLMs.[2] But few seem to care, punch-drunk, genuflecting at the altar of OpenAI going, “oooh, aaah!” Even if they care, there’s no way to break up the party.

And then there was the recent mutiny at OpenAI, over a purported discovery of something internally called Q* that employees feared could threaten humanity (or so the reports go). Enough to make the folks who control the Doomsday Clock jittery!


Focus# 3: Social Media Reforms With Teeth

Here’s the one optimistic story I’ve come across about social media. Remember the movie The Social Dilemma on Netflix? Some of the folks involved in revealing how algorithms mess with our brains came up with a ‘reform’ document with tangible, workable fixes for the platforms. There is a large body of evidence from several countries that social media is harming teens. So they got behind something called the Age Appropriate Design Code (AADC), which requires online platforms to design their services with the best interests of children in mind. The UK’s Information Commissioner’s Office offers a good model.[3]

The code focuses on many factors, such as changing default settings, restricting data sharing, prohibiting ‘nudging’ techniques, parental controls, and much more. Many US states have introduced similar bills.[4]


Focus# 4: A ‘Bookshelf’ for my Student Authors.

It’s that time of year when my students write, design, format, edit, and publish their eBooks. It’s a ‘summative’ proof of all they’ve mastered. They love it (after a week of panicking)! Topics range from history and scary YA fiction (lots of these!) to nature, sports, family values, and fantasy. There are always surprising topics, like this book, a guide for first-time ‘aquarists.’

This semester, I switched to FLIPHTML5, one heck of a portal that lets me set up bookshelves for each class. The one above is my 1st Period class.

Why do they still love books in the age of gamification, social media distraction, and AI? I have my own reasons, which is why I love teaching this in a class that used to be a ‘keyboarding’ class.


In the spirit of wishing you a happy new year, let me leave you with something on a lighter note.

Forget the Chinese balloon that drifted into our airspace this year. Something else was shot down. Words!

  • Earlier this year, publishers of Roald Dahl’s books (Charlie and the Chocolate Factory, James and the Giant Peach, etc.), in a fit of political correctness, said they would publish some of his books with ‘offensive language’ (words like fat and ugly) replaced.
  • Vivek Ramaswamy, in a rush to get to the Oval Office, called TikTok “digital fentanyl” even though he has a presence on the platform.
  • Merriam Webster’s pick for ‘word of the year’ was the letter X, after it became the replacement name for Twitter, which was laid to rest. Runners-up were ‘meta’ and ‘chat.’

But wait! One of these stories is not true. Your challenge is to guess which one. Or go ask your favorite AI app, and see if it can do better than you.

Thank you for reading this far, and for subscribing. Have a wonderful Christmas, and here’s looking forward to 2024. Please check out my new podcast, Wide Angle.


Footnotes – Just in case you want to be sure I did not get AI to write this newsletter:

  1. Ed Sheeran’s case, in which he, Warner Music, and Sony Music were sued in 2017. The claim was about “Thinking Out Loud.” He won the case. https://www.reuters.com/legal/lets-get-it-on-songwriters-estate-drops-ed-sheeran-copyright-verdict-appeal-2023-09-21/
  2. “The Crowdless Future? How Generative AI Is Shaping the Future of Human Crowdsourcing.” Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. Harvard Business School, 2023.
  3. UK’s ‘Standards of age appropriate design’ https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/1-best-interests-of-the-child/
  4. Oregon’s SB 196, a bill to get platform owners to act like adults, is just one of them. https://olis.oregonlegislature.gov/liz/2023r1/Downloads/MeasureDocument/SB196/Introduced

From HAL to Watson, AI is now more relevant than ever

Every time I overhear someone talking to Google to do a voice search, I am reminded of the usual crop of techno-futurologists, from Asimov to Arthur C. Clarke to Ray Kurzweil, who alerted us to the potential of AI. Just for the record, I don’t buy Kurzweil’s funky ‘singularity’ theory, which says that around 2045 we will augment our bodies with super-intelligent machines…

Yet AI is becoming more relevant. Consider what’s happening around autonomous cars. Hint: Google isn’t the only one in this race. Or consider the pace of robotics.

Sidebar: A few weeks ago I asked some students (2nd through 6th graders at a Montessori school) to design and build robots from assorted parts. Many of them gave their robots names, though that was not a requirement! They have no qualms about machines that might ‘live’ alongside us. I once took some older students to visit a hospital and see a da Vinci surgical robot. They loved it! A bot that can cut and suture one of your body parts!

Back to AI. That famous ‘machine’ known as Watson, which beat humans on the game show Jeopardy!, was able to search massive databases and respond faster than Ken Jennings and Brad Rutter. But that was not all. It also outsmarted them in strategy, that is, in picking the categories that would win bigger. It is eerie to watch those rounds and see what a computer sitting between two humans looks like. (It sounds human too, as it calmly picks a category such as ‘Chicks Dig Me’ to the nervous laughter of the live audience.)

I was intrigued to read about how JWT, the agency that handled IBM’s account for this show, was briefed on how to present Watson. At one time the inventor behind it specifically asked that Watson not bear any resemblance to…HAL. If you know Stanley Kubrick’s and Arthur C. Clarke’s 2001: A Space Odyssey, you’ll know why. That softer logo, derived from IBM’s larger ‘Smarter Planet’ project, was not supposed to look humanoid or scare people.

Even those of us who talk to our machines (Siri) or let them tell us where to go (GPS) are already living with a little AI, whether we admit it or not.