AI’s hilarious three-legged problem is a teachable moment.

I spent some time in Sri Lanka over summer and ran into someone whose company has begun using AI — everyone I met seemed to be in love with OpenAI. Before I could get on my hobby horse about plagiarism and cheap hallucinations, he noted that they were very clear about ‘fencing off’ the content they feed the algorithm, so that it won’t run amok. That’s a sensible approach to this journey.

However, the chaotic AI arms race is on even here, with AI being greenlit by businesses. One post talks up the “plethora of new job opportunities,” while several groups are popping up to get their arms around this. Dr. Sanjana Hattotuwa is one of the few who urges everyone to slow down and consider how generative AI and LLMs might accelerate ‘truth decay,’ among other things. He calls for a ‘regulatory sandbox’ approach.

PLAGIARISM? This is more than a copy/paste problem. Consider voice theft. You may have heard about Scarlett Johansson’s legal challenge to OpenAI for allegedly mimicking her voice. (For a deep dive, read WIRED, which covers the legal implications well.) It’s an intellectual property violation: ‘cloning’ someone’s voice without permission. Sure, young people will always be infatuated with algorithms that could produce anything – essays, books, graphics, software, presentations, videos etc. – in a few clicks. If we don’t address this, I told my friend, the problems will go deeper than plagiarism.

Young people will begin to believe that they can’t be as ‘creative’ as the tool, and over time, give up on ideation. We would be encouraging them to become slaves to the tool, outsourcing everything. First, because they can. Second, because they will be unable to do what they were once capable of doing. Remember how we once knew every phone number of our friends and cousins? What made our brains do that?

HILARIOUS THREE-LEGGED PROBLEM. Before we closed for summer, my students were experimenting with AI images in Bing, now an AI-powered search engine. They also discovered that the Photoshop-like tool we use in class, Pixlr, had a similar option. But here’s what they ran into:

Take a look at these images used by some students for an eBook assignment I gave them. (I’ve written about how each semester my 7th graders come up with about 125 books.)

Exhibit A: 

Spot anything odd about this cover? It’s not just the plastic-like muscles.

I call this the three-legged football player problem. The OpenAI tool in Bing goes off the rails at times. But instead of being annoyed with the outcome, I savored the moment. A wonderful opportunity to teach visual literacy.

Many things about this picture are wrong. Yes, the muscles look plastic, or at least over-Photoshopped. The gold marker is out of place. But the legs? There’s a third leg popping out of his shoulder!

Exhibit B: 

This student’s eBook was about an off-duty soldier in a war set in the future. Notice anything weird here?

Yeah, the gloves. Looks like they came from Home Depot! Anything else? Check the flag on his shoulder.


On May 14th, I presented a similar topic at a TED Talk-like event I had put together at my school. (We called it BEN Talks, after Benjamin Franklin High School.) My point was that the elephant in the room today is not even an elephant. It’s a parrot — the ‘stochastic parrot’ that researcher Emily Bender and others warned us about. It’s luring us down a dangerous path, and poses a huge threat.


I remember a time when we were fed the hype of how the ‘Internet of Things’ (IOT) would rescue us. (I have to admit, I swallowed that as well.) According to this glowing theory, a malfunctioning part on an aircraft making a long-distance flight would ‘communicate’ with its destination, so that technicians would be ready to replace it when the plane lands. (The pilot would not even know; the ‘things’ would talk to each other.) IOT is here, but fortunately it’s not for everything. Your fairy lights can talk to the Bluetooth speaker, for all I care. But spare me the Apple Watch that can tell my fridge what I need to cook for dinner because of some health condition it tracks through my skin.

If IOT was supposed to make our lives safer and more convenient, how come a door plug on a Boeing aircraft could come loose and blow out mid-flight, without any warning? Why didn’t the loose bolts send a text message to the wizards at Boeing to tell them so?

We were sold on some misleading, overhyped ‘intelligence,’ and no one dared question it. If you did, you were a fringe Luddite who needed to be voted off the island. I’m sorry, but I got to this island because there was a pilot and a co-pilot on board — and not some aviation algorithm.


If you’re a student, you’ll love this.

On a related note, I support a writers’ website, Write The World, which encourages young people to share their creative writing. Here’s one submission by a high school freshman.

Powerful poetry about AI’s ‘knowledge.’

So what do we do, besides write poetry and articles bemoaning the awkward, overhyped path down which we are being led? I think we should join the resistance to three-legged athletes, put on proper gloves, and take on the tech bros feeding us this pipe dream. There are more urgent, humanitarian needs that could be addressed through technology.

2023: The year AI gatecrashed our party. (Try getting the confetti out of your hair.)

Not to alarm you, but this year the ‘Doomsday Clock’ was moved forward to… 90 seconds to midnight. Ten seconds closer than last year.

Speaking of timing, in the next eight minutes of your time I will focus on just four topics as we close out the year: news, AI, social media reforms, and my students’ eBooks.

Writers and page editors of our student newspaper.

Focus# 1: News

If a newspaper falls in the forest, will anyone read the 12-point Times New Roman fine print before it turns to compost?

Why newspapers? Think of news as the blood corpuscles that keep all the other functions of society running. From my rudimentary knowledge of biology: like the red and white cells that transport oxygen, the information that surges through our systems keeps us ticking. We who scrape our news off apps tend to forget that news is (still) produced by journalists who don’t work for free. Just because their stories show up in our feeds for free doesn’t make them free to produce. Someone’s got to pay a salary to the fellow who walks the street, sits in at the courthouse with a notepad, presses the politician for comment, talks to a whistleblower in a dark parking garage, fact-checks the press release that is 80% BS, writes up the story or script, works with the sub-editor, and produces the story that hits Google News a few hours before it even lands on the newspaper rack in dawn’s early light.

And still, we insult ‘the media’ as if it were some sweatshop. We tend to give Amazon a pass for listing crappy foreign-made products with fake reviews, but we attack the Press as if it were one gargantuan cabal run by Warren Buffett.

I say this because I try to teach students ‘media’ and journalism in its many amorphous forms. I teach them how to write stories, interview subjects, fact-check, and do their homework on an interviewee before they get five minutes of her time. Then they must take their notes and craft the story in a way that someone may read it and be enlightened. If we don’t preserve storytelling and story craft at a young age, we may end up with the journalism we fear we have. We may be overrun by the meme makers, the conspiracy-theory factories that quote fake doctors and researchers, the angry consumers of TikTok headlines who don’t care who wrote the story, nor care to read beyond paragraph one because an influencer had a sexier take on it.

Without news we may end up with…deoxygenated blood that shuts down our vitals. (News, like leukocytes, also gives us immunity but that’s another topic.)

Despite this, it’s the toxic stuff that rules. The phrase “I saw it somewhere on the Internet” turns more heads than “I read the full report.” (If you’re over 50 you know that “I saw it on Facebook” carries even more gravitas — and gets more shares.) While Facebook ‘news’ wanes, TikTok news spreads like wildfire. Some think it’s not the enemy of journalism.

Fun Fact: Journalists back in the day used the term ‘tick-tock’ for a story that reconstructs events in chronological order.


Focus# 2: A.I.

It’s barely a year since AI showed up at our door with a funny hat, uninvited. But what it slipped into the punch bowl has had many side effects. We have learned very quickly that AI is prone to ‘hallucinations.’ What they mean by hallucinations is this: when the data fed into the machines is biased or too complex, and the machines cannot recognize patterns in the ‘unseen data’ they gulp down, they confidently make things up. For instance, Google’s chatbot, Bard (the also-ran in the ChatGPT arms race), incorrectly claimed that the James Webb Space Telescope took the world’s first images of a planet outside our solar system. I’ve conducted my own quiet experiments with ChatGPT and Bard, and have been spectacularly disappointed. I’m still open to seeing how we could someday use it as a tool, just as we use Wikipedia, despite the bad mojo it had when it first appeared in 2001.

Are you OK with the fact that machines were trained on language patterns stolen from the Internet – blog sites, Wikipedia, Amazon reviews, books etc.? Singers and songwriters (any Ed Sheeran fans?) get sued when a line from a song seems to infringe copyright,1 but we give a pass to machines. Why? What we once called crowdsourcing and plagiarism is now called ‘Generative AI.’ Interestingly, the intelligence gleaned from a “human crowd” is sometimes considered better because it increases the range of ideas compared to LLMs.2 But few seem to care, punch-drunk, genuflecting at the altar of OpenAI going, “oooh, aaah!!” Even if they care, there’s no way to break up the party.

And then there was the recent mutiny in the OpenAI organization, over a purported discovery of something internally called Q* that employees feared could threaten humanity (or so the report goes). Enough to make the folks who control the Doomsday Clock jittery!


Focus# 3: Social Media Reforms With Teeth

Here’s the most optimistic story I’ve come across about social media. Remember the movie The Social Dilemma on Netflix? Some of the folks involved in revealing how algorithms mess with our brains came up with a ‘reform’ document with tangible, workable fixes for the platforms. There is a large body of evidence from several countries that social media is harming teens. So they came up with something called the Age Appropriate Design Code (AADC), which requires online platforms to design their services with the best interests of children in mind. The UK’s Information Commissioner’s Office offers a good model.3

The code focuses on many factors such as changing default settings, data-sharing restrictions, prohibition of ‘nudging’ techniques, parental controls and much more. Many states have introduced bills.4


Focus# 4: A ‘Bookshelf’ for my Student Authors.

It’s that time of year when my students write, design, format, edit, and publish their eBooks. It’s a ‘summative’ proof of all they’ve mastered. They love it (after a week of panicking)! Topics range from history and scary YA fiction (lots of these!) to nature, sports, family values, and fantasy. There are always surprising topics. Like this book, a guide for first-time ‘aquarists.’

This semester, I switched to FLIPHTML5, one heck of a portal that lets me set up bookshelves for each class. The one above is my 1st Period class.

Why do they still love books in the age of gamification, social media distraction and AI? I have my own reasons. Which is why I love teaching this in a class that used to be a ‘keyboarding’ class.


In the spirit of wishing you a happy new year, let me leave you with something on a lighter note.

Forget the Chinese balloon that drifted into our airspace this year. Something else was shot down. Words!

  • Earlier this year publishers of Roald Dahl’s books (Charlie and the Chocolate Factory, James and the Giant Peach etc.) in a fit of political correctness said they would publish some of his books with ‘offensive language’ – words like fat and ugly – replaced.
  • Vivek Ramaswamy, in a rush to get to the Oval Office, called TikTok “digital fentanyl” even though he has a presence on the platform.
  • Merriam-Webster’s pick for ‘word of the year’ was the letter X, after it became a replacement for Twitter, which was laid to rest. Runners-up were ‘meta’ and ‘chat.’

But wait! One of these stories is not true. Your challenge is to guess which one. Or go ask your favorite AI app, and see if it could do better than you.

Thank you for reading this far, and subscribing. Have a wonderful Christmas, and here’s looking forward to 2024. Please check out my new podcast, Wide Angle.


Footnotes – Just in case you want to be sure I did not get AI to write this newsletter:

  1. Ed Sheeran’s case, in which he, Warner Music and Sony Music were sued in 2017. The claim was about “Thinking Out Loud.” He won the case. https://www.reuters.com/legal/lets-get-it-on-songwriters-estate-drops-ed-sheeran-copyright-verdict-appeal-2023-09-21/
  2. “The Crowdless Future? How Generative AI Is Shaping the Future of Human Crowdsourcing.” Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. Harvard Business School, 2023.
  3. UK’s ‘Standards of age appropriate design’ https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/1-best-interests-of-the-child/
  4. Oregon, SB 196 – a bill to get platform owners to act like adults, is just one of them. https://olis.oregonlegislature.gov/liz/2023r1/Downloads/MeasureDocument/SB196/Introduced

For now, AI is more hype than substance.

There’s Human Intelligence, and the Artificial kind. I wasn’t taken in by the recent bluster about AI, which arrived in 2022 all dressed up, but wearing flip-flops. Somehow there was a mismatch between its promise and what it delivered.

I did give it a try, however. Just like I once wandered into ‘Second Life’ slightly skeptical. Is this real, I wondered. Are we there yet?

1. AI ART – THE LOW-HANGING FRUIT WITH WEIRD, FUZZY SKIN

I had checked out the app called Starryai (which I wrote about in a Substack newsletter). So, for my second attempt, I called up the algorithms of DALL-E to see if this fancy-pants tool could design a magazine cover. Like WIRED’s.

The prompt I typed into DALL-E was: “WIRED magazine cover with Dall.E.”

Could it ‘design’ the cover of a tech magazine, using itself (DALL-E) in the title? Was it capable of reflecting on itself?

I was marginally impressed. Marginally. In other words, not terribly. Sure, the graphics were overly arty, as WIRED occasionally tries to be. DALL-E gets the look right, but the details are so bloody amateurish, even clumsy. It doesn’t seem to handle white space, or understand how to mimic a masthead. The fonts are a joke!
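For anyone who wants to repeat the experiment programmatically rather than through the web interface, here is a minimal sketch. Everything in it is an assumption on my part — the helper name, the model name, and the parameters are illustrative; I typed my prompt into the web tool, not the API.

```python
# Sketch only: assemble the parameters an OpenAI image-generation call would
# need. The model name and size are assumptions, not what was used above.
def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Bundle the parameters for one image-generation request."""
    return {
        "model": "dall-e-3",  # assumed model; the original test used the web UI
        "prompt": prompt,
        "n": 1,               # one image per request
        "size": size,
    }

request = build_image_request("WIRED magazine cover with Dall.E")
# With the openai package installed and an API key configured, this dict
# would be passed to client.images.generate(**request).
```

Keeping the request as a plain dict makes it easy to rerun the same prompt with different sizes or models and compare the covers side by side.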

2. AI WRITING – NOTHING TO WRITE HOME ABOUT

I teach creative writing in all my classes. Naturally I’ve been intrigued, and even alarmed, by the talk about how AI could write like a human. Many people are hailing this as the death knell for flesh-and-bone writers, journalists etc. Some tear their hair out about plagiarism in schools.

 The Nieman Lab is a bit more circumspect:

“While ChatGPT won’t win any journalism awards (at least for now), it can certainly automate much of the long tail of content on the internet.” — Nieman Lab, Predictions for Journalism in 2023

I checked out ChatGPT, an application from OpenAI that some people told me can write fairly convincing content. I was suspicious. I had read a piece by a marketing writer, Mitch Joel, about this. To check how smart this AI could be, I typed in this snarky prompt: “Is Mitch Joel right about AI platforms.”

I wanted to see if this ghost in the machine was savvy enough to pick up his argument and reference it. As I guessed, it didn’t live up to my expectations. In fact, the software apologized for its inability to do more than explain what Mitch does for a living, and went on to explain that these are still early days! (Brownie points, OpenAI, for admitting you don’t know what you don’t know.) While it got the paragraphs and punctuation right, the second ‘graph was a doozy. Like a lazy copywriter churning out garbage just to fill a layout to impress a client.

The website sets our expectations, in fact, saying things like, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Hmm!
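The same probe could be phrased as a chat-completion request. This is a sketch under stated assumptions: the helper, the model name, and the system message are all mine; the original experiment was typed into the ChatGPT web interface.

```python
# Sketch only: structure the 'Mitch Joel' prompt as a chat-completion request.
def build_chat_request(user_prompt: str) -> dict:
    """Bundle the model and message list for one chat-completion call."""
    return {
        "model": "gpt-3.5-turbo",  # assumed chat-capable model
        "messages": [
            # A system message nudging the model to flag uncertainty, since
            # 'plausible-sounding but incorrect' answers are the known failure.
            {"role": "system",
             "content": "Answer carefully, and say so when you are unsure."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request("Is Mitch Joel right about AI platforms?")
# With the openai package and an API key, this would be sent via
# client.chat.completions.create(**request).
```

The separate system message is the design point here: it is one of the few levers a user has against the plausible-but-wrong answers the website itself warns about.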

Having said that, others are raving about AI content generators like Jasper. It’s supposed to be a boon for copywriting, social media posts, and SEO content.

HERE’S MY TAKE ON AI. Content creators of the world —authors, journalists, copywriters, podcasters —shouldn’t feel threatened. For now. Good copywriters don’t sit at a desk stringing clichés to adjectives. They walk the factory floor, sit through plans board meetings, and argue with brand managers before the concept emerges. Translated: They produce content, rather than regurgitate it. Translated again: The fruits of AI are tempting but aren’t ready to pluck. Even for students. Low-hanging fruit – tempting but bland. Sometimes filled with bugs.

ChatGPT says it is addressing this. It’s like Samuel Bankman-Fried declaring he is making sure there aren’t any more crypto scams.

Are we concerned? As teachers, yes. Plagiarism is something no school takes lightly, if only because we want students to discover the value of originality, and creativity. It’s what will benefit them in any career. How about you?

Could AI have us for lunch?

I spoke to someone who uses two phones but has uninstalled Twitter on both. He considers himself a ‘voracious’ consumer of podcasts but is careful about staying too long on the grid. Oh, and he recently co-founded an AI company — a software-as-a-service outfit.

Isura Silva is certainly no technophobe, nor is he a cheerleader for everything that Silicon Valley burps up. His insights into how technology could do our bidding, and not control our lives, are refreshing. But I wanted not just to pick his brain on how he got to this place — into AI — but to understand his entrepreneurial mindset: why he is so optimistic when everything seems to be crumbling around him.

Isura considers social media a potential force for good. He and I disagree on this topic quite a bit. But he knows the downsides first hand, so he aggressively filters the noise. He says he could slide back into letting technology control his time if he doesn’t take an aggressive stance. There’s another area where Isura and I don’t see eye to eye: he believes AI could actually benefit humanity. Which is why he co-founded an AI company in Sri Lanka. Sure, AI might free us of mundane tasks, I argued, but what about the dark side, of algorithms and machines replacing what makes us human?

“AI will eat the world,” Isura declared, understanding the irony.

Well, that’s exactly why I often talk to people like him. That’s why he’s featured on my latest podcast, and a longer version of our discussion is here, on Medium.

“One Nation Under CCTV,” waiting for the lame ducks to get back to work

Banksy, in 2008, made this simple, provocative four-word statement at Westminster, London. The words “One nation under CCTV” were painted on the side of a building. But what’s most interesting are the details.

By Banksy – One Nation Under CCTV, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=3890275

Take a closer look at this picture. The two people are painted in as part of the graffiti. (Including the dog next to the policeman.)

Odd question: Why is the cop photographing this act of ‘vandalism’? He looks as if he’s carefully framing it to post it on social media.

Another odd question: Isn’t it funny that the policeman is also being ‘watched’ by the closed circuit camera on the wall of the building?

Cameras are so ubiquitous now we seldom notice they are there. We almost expect them to be there. Have we become desensitized to being watched? Recently the Los Angeles Police Department banned the use of a facial recognition AI platform known as Clearview. The US Congress has been slow in enacting a law that puts some guardrails around facial recognition. It’s called the “National Biometric Information Privacy Act of 2020.” It stipulates that “A private entity may not collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier” unless some conditions are met. Introduced on August 3rd this year, it seems to have gained no traction.*

Clearview AI has been investigated by the media and by lawmakers, and found to be engaging in some dark data-mining practices connected to facial recognition. The company declares on its website that it is “not a surveillance system.” Regulators in Australia and the UK opened investigations into this in July.

Banksy, have you been asleep recently?

______________________________________________________________________________________________

* Interesting sidebar: The way to track the progress of a bill in Congress is through the website www.govtrack.us. (Yes, it sounds like ‘government track us’!) In reality we can track them – so that, in this instance, they pass a law that doesn’t track us.

Might robots fix satellites (and not replace us)?

Satellites do need tech support now and then, but who are you gonna call when a large metal-and-glass object hurtling through space needs a repairman?

One group of scientists believes it could deploy a robot to fix a broken antenna or a weakened panel. Ou Ma, a professor at the University of Cincinnati, believes his group could develop robots – basically robotic satellites – that can be deployed to dock with a satellite and perform the necessary tasks. The details are here.

I found the story interesting because sending robots into space isn’t something new. But sending robots on ‘work’-related missions, rather than for mere exploration, might be an area that attracts funding. Robotics is often seen as dangerous, unnecessary, or too expensive.

In a related development, speaking of work, researchers at ASU are looking at how robots could augment, rather than replace, workers in certain jobs. This story, in this month’s Thrive Magazine, looks at the human impact of robotics. There’s obviously an AI component to this. “What we can do instead is design our AI systems, our robots, in a way that will help people to come on board,” says Siddharth Srivastava of the School for the Future of Innovation in Society.

This is the topic I brought up this week at my robotics club meeting at Benjamin Franklin High School.

Facial recognition, a weapon?

File this under “Sigh! We knew this was coming.”

The story is breaking that protesters are being tracked down by facial recognition software in several cities. But more alarming is what’s happening in Hong Kong, which is erupting right now: police are seeking out protesters, grabbing their phones, and attempting to use the phones’ own facial recognition to unlock them.

Hong Kong was a colony of Britain until 1997, but is now a ‘special administrative region’ of China.

“Oh, how neat!” some people thought when Hong Kong announced that it has facial recognition software in the airport so that passengers can pass through immigration and security smoothly. Likewise, so many now use doorbell cameras (such as the Nest Hello) that have facial recognition, not realizing the vulnerabilities they could bring.

Facial recognition is a short stop from racial and social profiling. Why is it that few people seem to care?

Eavesdropping is a nice way of saying ‘spying’.

It comes as no surprise that the Amazon Echo speaker is listening more closely than people think. Let’s be clear: It’s not listening in, it’s eavesdropping. The word has been around for more than 300 years! It describes the act of someone secretly “listening under the eaves” to another.

Alexa is supposed to be in ‘listening mode’ only when the speaker is addressed. Last week, however, Amazon confirmed that some of its employees did listen to recorded conversations. Employees! Not Amazon’s software. Are you comfortable with that? Some folks secretly listening in under the Artificial Intelligence eaves? Oh sure, for ‘quality and training purposes’ only. All in the interest of Big Data. The Atlantic reports that millions of people are using a smart speaker, and many have more than one close by. (Read it: “Alexa, Should We Trust You?”)

In May last year, the speaker recorded a conversation of a husband and wife and sent it to a friend. I wrote about a related matter a few weeks back. I’ll never be comfortable with a piece of hardware sitting in a room just to listen to me. A Bloomberg article reports that some employees at Amazon listen to 1,000 clips of recordings per shift. Like some privatized surveillance company laughing at all the conversations going on behind closed doors. Beyond eavesdropping, it is audio voyeurism! Aren’t you troubled by that?

We were once alarmed by having too many cameras aimed at us. Now it’s listening devices. Does the convenience factor numb people to the privacy they give up?

Things I get to hear about Alexa and Google Home!

Sure, you often hear of fancy ‘life hacks’ about people who program their smart speaker to turn on a coffee maker or help with math homework. But the stories I get to hear from young people on the experimental edge of the home-based Internet-of-Things (IOT) phenomenon range from the hilarious to the unsettling.

I’ve been writing about IOT for some time now. What gets me is how quickly people appear to want to hand off simple tasks like opening the window blinds, or turning on an appliance.

“Alexa, turn on the bedroom fan!”

And then there’s the not-so-funny side to having an app for everything. Just take a look at the recent lawsuits and missteps by tech companies.

The baby monitor story is scary. A mother discovered to her horror that the baby monitor “was slowly panning over across the room to where our bed was and stopped.” That’s just one of the ‘things’ we want our smart homes connected to.

How about door locks? You can’t make this stuff up: a man wearing a Batman T-shirt was locked out of his home in September last year when his Yale lock, combined with his Nest security system, decided he was an intruder. The ‘smart’ doorbell identified the cartoon character on his shirt and tried to be too smart for the man’s liking. Sounds a lot like the command, “Open the pod bay doors, HAL” in 2001: A Space Odyssey. Poor Dave was locked out with, “I’m sorry, Dave. I’m afraid I can’t do that!”

A side note on Facebook’s sneaky habit. As explained at Engadget, a “Privacy International study has determined that ‘at least’ 20 out of 34 popular Android apps are transmitting sensitive information to Facebook without asking permission, including Kayak, MyFitnessPal, Skyscanner and TripAdvisor.” I don’t trust Mark Zuckerberg anymore. Neither his recent statement, nor his numerous other apologies. (Check last year’s apology!) Which is another reason why I quit FB earlier this month.

Robots that could run farms? Should bots do that?

Have you seen this concept video? Robots that farm. It’s disturbing, to say the least, to think that robotics is being applied to areas we never used to anticipate. No longer ‘programmed’ robots, these are machines that learn and apply what we now call machine learning to the environment they are placed in. For instance, could a robot learn about — and work in concert with — other devices on the so-called farm? (It’s actually a greenhouse.)

To put it in context: if robots can shuttle between products on shelves in an Amazon warehouse, this is just an extension of that – an industrial application. We are at the starting blocks of the Fourth Industrial Revolution, so these upheavals – technological, economic, environmental, social etc. – are just beginning to show up. I’ve been critical of the rush to apply AI to everything, while holding out some optimism that these players and industries might still need some humans, even as they replace others.

The concept has been featured in WIRED and on CNBC.

Also there’s another video worth watching.