For now, AI is more hype than substance.

There’s Human Intelligence, and then there’s the Artificial kind. I wasn’t taken in by the recent bluster about AI, which arrived in 2022 all dressed up but wearing flip-flops. Somehow, there was a mismatch between its promise and what it delivers.

I did give it a try, however, just as I once wandered into ‘Second Life’ slightly skeptical. Is this real, I wondered. Are we there yet?

1. AI ART – THE LOW-HANGING FRUIT WITH WEIRD, FUZZY SKIN

I had checked out the app Starryai (which I wrote about in a Substack newsletter). So, for my second attempt, I called up the algorithms of DALL·E to see if this fancy-pants tool could design a magazine cover. Like WIRED.

The prompt I typed into DALL·E was: “WIRED magazine cover with DALL·E.”

Could it ‘design’ a cover for a tech magazine, using itself (DALL·E) in the title? Was it capable of reflecting on itself?
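For readers who would rather script the experiment than click through the web app, here’s a minimal sketch of issuing the same prompt through OpenAI’s image API. The model name, image size and client version are my assumptions; I typed the prompt into the browser interface.

```python
# A minimal sketch of issuing the same prompt programmatically with the
# OpenAI Python client. Model name and size are assumptions on my part.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-3",          # assumed model name
    prompt="WIRED magazine cover with DALL·E",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)    # link to the generated cover image
```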

I was marginally impressed. Marginally. In other words, not terribly. Sure, the graphics were overly arty, as WIRED occasionally tries to be. DALL·E gets the look right, but the details are so bloody amateurish, even clumsy. It doesn’t seem to handle white space, or understand how to mimic a masthead. The fonts are a joke!

2. AI WRITING – NOTHING TO WRITE HOME ABOUT

I teach creative writing in all my classes. Naturally, I’ve been intrigued, and even alarmed, by the talk about how AI could write like a human. Many people are hailing this as the death knell for flesh-and-blood writers, journalists and the like. Some tear their hair out about plagiarism in schools.

 The Nieman Lab is a bit more circumspect:

“While ChatGPT won’t win any journalism awards (at least for now), it can certainly automate much of the long tail of content on the internet.” — Nieman Lab, Predictions for Journalism in 2023

I checked out ChatGPT, an application on OpenAI’s platform that some people had told me could write fairly convincing content. I was suspicious. I had read a piece about this by the marketing writer Mitch Joel. To check how smart this AI could be, I typed in this snarky prompt: “Is Mitch Joel right about AI platforms?”

I wanted to see if this ghost in the machine was savvy enough to pick up his argument and reference it. As I guessed, it didn’t live up to my expectations. In fact, the software apologized for its inability to do more than explain what Mitch does for a living, and went on to explain that these are still early days! (Brownie points, OpenAI, for admitting you don’t know what you don’t know.) It got the paragraphs and punctuation right, but the second ‘graph was a doozy, like a lazy copywriter churning out garbage just to fill a layout and impress a client.
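If you want to rerun the experiment outside the chat window, here is a rough sketch of sending the same prompt through OpenAI’s chat API. The model name is an assumption; I used the ChatGPT web interface.

```python
# A rough sketch of sending the same snarky prompt through OpenAI's chat API.
# The model name is assumed; the original test was done in the web interface.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[
        {"role": "user", "content": "Is Mitch Joel right about AI platforms?"}
    ],
)

print(response.choices[0].message.content)
```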

The website sets our expectations, in fact, saying things like, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Hmm!

Having said that, others are raving about AI content generators like Jasper, which is supposed to be a boon for copywriters churning out social media posts and SEO content.

HERE’S MY TAKE ON AI. Content creators of the world (authors, journalists, copywriters, podcasters) shouldn’t feel threatened. For now. Good copywriters don’t sit at a desk stringing clichés to adjectives. They walk the factory floor, sit through plans-board meetings, and argue with brand managers before the concept emerges. Translated: they produce content rather than regurgitate it. Translated again: the fruits of AI are tempting but aren’t ready to pluck. Even for students. Low-hanging fruit: tempting but bland. Sometimes filled with bugs.

ChatGPT says it is addressing this. That’s a bit like Sam Bankman-Fried declaring he’ll make sure there are no more crypto scams.

Are we concerned? As teachers, yes. Plagiarism is something no school takes lightly, if only because we want students to discover the value of originality, and creativity. It’s what will benefit them in any career. How about you?

Could AI have us for lunch?

I spoke to someone who uses two phones but has uninstalled Twitter on both. He considers himself a ‘voracious’ consumer of podcasts, yet is careful about staying too long on the grid. Oh, and he recently co-founded an AI company, a software-as-a-service outfit.

Isura Silva is certainly no technophobe, nor is he a cheerleader for everything that Silicon Valley burps up. His insights into why technology could do our bidding, and not control our lives, are refreshing. But I wanted not just to pick his brain on how he got to this place, into AI, but to understand his entrepreneurial mindset: why he is so optimistic when everything seems to be crumbling around him.

Isura considers social media a potential force for good. He and I disagree on this quite a bit, but he knows the downsides first hand, so he aggressively filters the noise. He says he could slide back into letting technology control his time if he doesn’t take an aggressive stance. There’s another area where Isura and I don’t see eye to eye: he believes AI could actually be beneficial to humanity, which is why he co-founded an AI company in Sri Lanka. Sure, AI might free us of mundane tasks, I argued, but what about the dark side of algorithms and machines replacing what makes us human?

“AI will eat the world,” Isura declared, understanding the irony.

Well, that’s exactly why I often talk to people like him. That’s why he’s featured on my latest podcast, and why a longer version of our discussion is here, on Medium.

“One Nation Under CCTV,” waiting for the lame ducks to get back to work

Banksy, in 2008, made this simple, provocative four-word statement in Westminster, London. The words “One nation under CCTV” were painted on the side of a building. But what’s most interesting are the details.

By Banksy – One Nation Under CCTV, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=3890275

Take a closer look at this picture. The two people are painted in as part of the graffiti. (Including the dog next to the policeman.)

Odd question: Why is the cop photographing this act of ‘vandalism’? He looks as if he’s carefully framing it to post it on social media.

Another odd question: Isn’t it funny that the policeman is also being ‘watched’ by the closed circuit camera on the wall of the building?

Cameras are so ubiquitous now that we seldom notice they are there. We almost expect them to be there. Have we become desensitized to being watched? Recently the Los Angeles Police Department banned its officers from using the facial recognition platform known as Clearview AI. The US Congress has been slow to enact a law that puts some guardrails around facial recognition. The bill, the “National Biometric Information Privacy Act of 2020,” stipulates that “A private entity may not collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier” unless certain conditions are met. It was introduced on 3 August this year, but there seems to be no traction on it.*

Clearview AI has been investigated by the media and by lawmakers, and found to be engaging in some dark data-mining practices connected to facial recognition. The company declares on its website that it is “not a surveillance system.” Privacy commissioners in Australia and the UK opened investigations into this in July.

Banksy, have you been asleep recently?

______________________________________________________________________________________________

* Interesting sidebar: The way to follow the progress of a bill in Congress is through the website www.govtrack.us. (Yes, it sounds like ‘government track us’!) In reality, we can track them, so that, in this instance, they pass a law that doesn’t track us.

Might robots fix satellites (and not replace us)?

Satellites do need tech support now and then, but whom are you gonna call when a large metal and glass object hurtling through space needs a repairman?

One group of scientists believes it could deploy a robot to fix a broken antenna or a weakened panel. Ou Ma, a professor at the University of Cincinnati, believes his group could develop robots, basically robotic satellites, that can be deployed to dock with a satellite and perform the necessary tasks. The details are here.

I found the story interesting because sending robots into space isn’t new. But sending robots on ‘work’-related missions, rather than on mere exploration, might be an area that attracts funding. Robotics is often seen as dangerous, unnecessary, or too expensive.

In a related development, speaking of work, researchers at ASU are looking at how robots could augment, rather than replace, workers in certain jobs. This story, in this month’s Thrive Magazine, looks at the human impact of robotics. There’s obviously an AI component to this. “What we can do instead is design our AI systems, our robots, in a way that will help people to come on board,” says Siddharth Srivastava of the School for the Future of Innovation in Society.

This is the topic I brought up this week at my robotics club meeting at Benjamin Franklin High School.

Facial recognition, a weapon?

File this under “Sigh! We knew this was coming.”

The story breaking now is that protesters are being tracked down by facial recognition software in several cities. More alarming is what’s happening in Hong Kong, which is erupting right now: police are seeking out protesters, grabbing their phones, and attempting to use the phones’ facial recognition features to unlock them.

Hong Kong was a colony of Britain until 1997, but is now a ‘special administrative region’ of China.

“Oh, how neat!” some people thought when Hong Kong announced that it had installed facial recognition software at the airport so that passengers could pass through immigration and security smoothly. Likewise, so many now use doorbell cameras (such as the Nest Hello) with facial recognition, not realizing the vulnerabilities they could bring.

Facial recognition is a short step away from racial and social profiling. Why is it that so few people seem to care?

Eavesdropping is a nice way of saying ‘spying’.

It comes as no surprise that the Amazon Echo speaker is listening more closely than people think. Let’s be clear: it’s not merely listening; it’s eavesdropping. The word has been around for more than 300 years! It describes the act of someone secretly “listening under the eaves” to another.

Alexa is supposed to be in ‘listening mode’ only when the speaker is addressed. Last week, however, Amazon confirmed that some of its employees did listen to recorded conversations. Employees! Not Amazon’s software. Are you comfortable with that? Some folks secretly listening in under the Artificial Intelligence eaves? Oh sure, for ‘quality and training purposes’ only. All in the interest of Big Data. The Atlantic reports that millions of people are using a smart speaker, and many have more than one close by. (Read it: “Alexa, Should We Trust You?”)

In May last year, the speaker recorded a conversation between a husband and wife and sent it to a friend. I wrote about a related matter a few weeks back. I’ll never be comfortable with a piece of hardware sitting in a room just to listen to me. The Bloomberg article reports that some Amazon employees listen to 1,000 clips of recordings per shift, like some privatized surveillance company laughing at all the conversations going on behind closed doors. Beyond eavesdropping, it is audio voyeurism! Aren’t you troubled by that?

We were once alarmed by having too many cameras aimed at us. Now it’s listening devices. Does the convenience factor blind people to the privacy they give up?

Things I get to hear about Alexa and Google Home!

Sure, you often hear of fancy ‘life hacks’ from people who program their smart speaker to turn on a coffee maker or help with math homework. But the stories I hear from young people on the experimental edge of the home-based Internet of Things (IoT) phenomenon range from the hilarious to the unsettling.

I’ve been writing about IoT for some time now. What gets me is how quickly people appear to want to hand off simple tasks, like opening the window blinds or turning on an appliance:

“Alexa, turn on the bedroom fan!”
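Behind that one-liner, in many DIY setups, is nothing more exotic than a message published to a home broker. Here is a minimal sketch of what the hand-off can look like under the hood, assuming an MQTT-connected smart plug; the broker address and topic name are hypothetical, and many commercial systems use proprietary cloud APIs instead.

```python
# A minimal sketch of the kind of message a voice command is translated into
# in a DIY smart home. Broker address and topic are hypothetical examples.
import paho.mqtt.publish as publish

publish.single(
    topic="home/bedroom/fan",   # hypothetical topic a smart plug subscribes to
    payload="ON",
    hostname="192.168.1.50",    # hypothetical local MQTT broker
)
```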

And then there’s the not-so-funny side to having an app for everything. Just take a look at the recent lawsuits and missteps by tech companies.

The baby monitor story is scary. A mother discovered to her horror that the baby monitor “was slowly panning over across the room to where our bed was and stopped.” That’s just one of the ‘things’ we want our smart homes connected to.

How about door locks? You can’t make this stuff up: a man was locked out of his home in September last year when his Yale lock, combined with his Nest security system, decided he was an intruder. The man was wearing a Batman T-shirt, and the ‘smart’ doorbell identified the cartoon character and tried to be too smart for the man’s liking. It sounds a lot like the command “Open the pod bay doors, HAL” in 2001: A Space Odyssey. Poor Dave was locked out with, “I’m sorry, Dave. I’m afraid I can’t do that!”

A side note on Facebook’s sneaky habits. As explained at Engadget, a “Privacy International study has determined that ‘at least’ 20 out of 34 popular Android apps are transmitting sensitive information to Facebook without asking permission, including Kayak, MyFitnessPal, Skyscanner and TripAdvisor.” I don’t trust Mark Zuckerberg anymore; not his recent statement, nor his other numerous apologies. (Check last year’s apology!) Which is another reason why I quit FB earlier this month.

Robots that could run farms? Should bots do that?

Have you seen this concept video? Robots that do the farming. It’s disturbing, to say the least, to think that the field of robotics is being applied to areas we never used to anticipate. No longer ‘programmed’ robots, these are machines that learn, applying what we now call machine learning to the environment they are placed in. For instance, could a robot learn about, and work in concert with, other devices on the so-called farm? (It’s actually a greenhouse.)

To put it in context, if robots can already shuttle between products on the shelves of an Amazon warehouse, this is just an extension of that, an industrial application. We are at the starting blocks of the Fourth Industrial Revolution, so these upheavals (technological, economic, environmental, social and so on) are just beginning to show up. I’ve been critical of the rush to apply AI to everything, while holding out some optimism that these players and industries might still need some humans, even as they replace others.

It has been featured in WIRED and on CNBC.

Also there’s another video worth watching.

Could MIT reinvent itself with an ‘ethical’ approach to AI?

Just in time, as the field of AI ramps up. (Also by some coincidence, a week after the cover story in LMD.)

MIT has just announced it will add a new college, the Stephen A. Schwarzman College of Computing, dedicated to world-changing breakthroughs in AI and their ethical application. The college will “reorient MIT,” adding 50 new faculty positions and giving students in every discipline an opportunity to develop and apply AI and computing technologies.

The term ‘ethical’ keeps popping up these days in relation to Artificial Intelligence. MIT expands on this, saying it will “examine the anticipated outcomes of advances in AI and machine learning, and to shape policies around the ethics of AI.” As I have mentioned elsewhere, most experts (Elon Musk, Bill Gates and Tim Berners-Lee aside) agree that we are just at the tadpole stage of the AI life cycle.

However, some, such as the sci-fi writer Isaac Asimov and even Stephen Hawking, have had concerns. Hawking, for instance, remarked that “we all have a role to play in ensuring that we and the next generation have the determination to engage with science … and create a better world for the whole human race.” MIT seems to be the first large institution to take up this mantle, and in the process redefine and reinvent its role in education.

AI is here – should we prepare or panic? – LMD cover story

Linked from the Future of Life Institute

A few weeks back I featured an ominous exercise conducted seven years ago by the Navy Research Lab. Today, Artificial Intelligence is taking us into a new machine age, with devices, and not just robots, able to grow ‘intelligent’ with data they glean from other machines we use.

Big players are developing capabilities in AI, from PwC and IBM to Tesla and Alibaba!

For the October issue of LMD I was commissioned to write the cover story on AI. You can access it here.