In 2008, Banksy made this simple, provocative four-word statement in Westminster, London. The words, “One nation under CCTV,” were painted on the side of a building. But what’s most interesting are the details.
Take a closer look at this picture. The two people are painted in as part of the graffiti, as is the dog next to the policeman.
Odd question: Why is the cop photographing this act of ‘vandalism’? He looks as if he’s carefully framing it to post it on social media.
Another odd question: Isn’t it funny that the policeman is also being ‘watched’ by the closed-circuit camera on the wall of the building?
Cameras are so ubiquitous now that we seldom notice they are there. We almost expect them to be there. Have we become desensitized to being watched? Recently the Los Angeles Police Department banned the use of Clearview, an AI-powered facial recognition platform. The US Congress has been slow in enacting a law that puts some guardrails around facial recognition. The bill, called the “National Biometric Information Privacy Act of 2020,” stipulates that “A private entity may not collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier” unless some conditions are met. It was introduced on 3rd August this year, but there seems to be no traction on it.*
Clearview AI has been investigated by the media and by lawmakers, and found to be engaging in some dark data-mining practices connected to facial recognition. The company declares on its website that it is “not a surveillance system.” Privacy commissions in Australia and the UK opened investigations into the company in July.
* Interesting sidebar: The way to track the progress of a bill in Congress is through the website www.govtrack.us. (Yes, it sounds like ‘government track us’!) In reality, we can track them – so that, in this instance, they pass a law that doesn’t track us.
Satellites do need tech support now and then, but who are you gonna call when a large metal-and-glass object hurtling through space needs a repairman?
One group of scientists believes it could deploy a robot to fix a broken antenna or a weakened panel. Ou Ma, a professor at the University of Cincinnati, believes his group could develop robots – basically robotic satellites – that can be deployed to dock with a satellite and perform the necessary tasks. The details are here.
I found the story interesting because sending robots into space isn’t new. But sending robots on ‘work’-related missions, rather than for mere exploration, might be an area that attracts funding. Robotics is often seen as dangerous, unnecessary, or too expensive.
In a related development, speaking of work, researchers at ASU are looking at how robots could augment, rather than replace, workers in certain jobs. This story, in this month’s Thrive Magazine, looks at the human impact of robotics. There’s obviously an AI component to this. “What we can do instead is design our AI systems, our robots, in a way that will help people to come on board,” says Siddharth Srivastava of the School for the Future of Innovation in Society.
The story is breaking that protesters are being tracked down by facial recognition software in several cities. But more alarming is what is happening in Hong Kong, which is erupting right now: police are seeking out protesters, grabbing their phones, and attempting to use the phones’ own facial recognition software to unlock them.
Hong Kong was a colony of Britain until 1997, but is now a ‘special administrative region’ of China.
“Oh, how neat!” some people thought, when Hong Kong announced that it has facial recognition software in the airport so that passengers could pass through immigration and security smoothly. Likewise, so many now use doorbell cameras with facial recognition (such as the Nest Hello), not realizing the vulnerabilities they could bring.
Facial recognition is only a short step from racial and social profiling. Why is it that so few people seem to care?
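Part of why profiling creeps in is that, under the hood, facial recognition is just distance math: a face is reduced to a numeric ‘embedding,’ and two faces are declared the same person when their embeddings are similar enough. A minimal, purely illustrative sketch – the three-number vectors and the threshold here are made up, and no vendor’s actual system works this simply:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_a, embedding_b, threshold=0.9):
    # Declare a 'match' when similarity clears a tuned threshold.
    # Set the threshold too low and you get false matches -- the root
    # of the misidentification worries around such systems.
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy embeddings (real systems use hundreds of dimensions).
enrolled = [0.12, 0.80, 0.55]   # face on file
probe_same = [0.10, 0.82, 0.54] # same face, new photo
probe_other = [0.90, 0.10, 0.05]

print(same_person(enrolled, probe_same))   # True
print(same_person(enrolled, probe_other))  # False
```

The whole system’s fairness hangs on that one threshold and on how well the embedding model was trained across different faces – which is exactly where profiling risks enter.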
It comes as no surprise that the Amazon Echo speaker is listening more closely than people think. Let’s be clear: It’s not listening in, it’s eavesdropping. The word has been around for more than 300 years! It describes the act of someone secretly “listening under the eaves” to another.
Alexa is supposed to be in ‘listening mode’ only when the speaker is addressed. Last week, however, Amazon confirmed that some of its employees did listen to recorded conversations. Employees! Not Amazon’s software. Are you comfortable with that? Some folks secretly listening in under the Artificial Intelligence eaves? Oh sure, for ‘quality and training purposes’ only. All in the interest of Big Data. The Atlantic reports that millions of people are using a smart speaker, and many have more than one close by. (Read it: “Alexa, Should We Trust You?”)
In May last year, the speaker recorded a conversation between a husband and wife and sent it to a friend. I wrote about a related matter a few weeks back. I’ll never be comfortable with a piece of hardware sitting in a room just there to listen to me. The Bloomberg article reports that some employees at Amazon listen to 1,000 clips of recordings per shift. Like some privatized surveillance company, laughing at all the conversations going on behind closed doors. Beyond eavesdropping, it is audio voyeurism! Aren’t you troubled by that?
We were once alarmed by having too many cameras aimed at us. Now it’s listening devices. Does the convenience factor numb people to the privacy they give up?
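That ‘listening mode’ claim boils down to what engineers call wake-word gating: the device keeps only a short, constantly overwritten audio buffer until it hears its name, and only then starts capturing (and, in a real device, uploading) what follows. A simplified sketch of the idea, using words in place of audio frames – the wake word and buffer size here are arbitrary assumptions, not Amazon’s actual design:

```python
from collections import deque

WAKE_WORD = "alexa"   # assumed wake word
BUFFER_FRAMES = 3     # arbitrary rolling-buffer size

def process_stream(frames):
    """Simulate wake-word gating on a stream of 'audio frames' (words here).

    Frames heard before the wake word live only in a short rolling buffer
    and are continuously discarded; frames after it are 'recorded' -- the
    part that would leave the device in a real system.
    """
    rolling = deque(maxlen=BUFFER_FRAMES)  # old frames fall off the end
    recorded = []
    awake = False
    for frame in frames:
        if awake:
            recorded.append(frame)
        elif frame.lower() == WAKE_WORD:
            awake = True            # start capturing from here on
        else:
            rolling.append(frame)   # never leaves the device
    return recorded

chatter = ["private", "dinner", "talk", "alexa", "turn", "on", "the", "fan"]
print(process_stream(chatter))  # ['turn', 'on', 'the', 'fan']
```

The privacy question is whether that gate works as advertised: a false trigger on the wake word, and everything after it gets captured anyway.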
Sure, you often hear of fancy ‘life hacks’ about people who program their smart speaker to turn on a coffee maker or help with math homework. But the stories I get to hear from young people on the experimental edge of the home-based Internet of Things (IoT) phenomenon range from the hilarious to the unsettling.
I’ve been writing about IoT for some time now. What gets me is how quickly people appear to want to hand off simple tasks like opening one’s window blinds or turning on an appliance:
“Alexa, turn on the bedroom fan!”
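Before any hardware moves, a command like that goes through intent parsing: the utterance is mapped to a device and an action. A toy sketch of the idea – the device names and action vocabulary are invented for illustration, and real assistants use trained language models, not string matching:

```python
# Toy intent parser: maps an utterance to a (device, action) pair.
# Device and action vocabularies here are invented for illustration.
DEVICES = {"bedroom fan", "window blinds", "coffee maker"}
ACTIONS = {"turn on": "on", "turn off": "off", "open": "open", "close": "close"}

def parse_command(utterance):
    text = utterance.lower().rstrip("!. ")
    for phrase, action in ACTIONS.items():
        if text.startswith(phrase):
            rest = text[len(phrase):].strip()
            if rest.startswith("the "):  # drop a leading article
                rest = rest[4:]
            if rest in DEVICES:
                return rest, action
    return None  # utterance not understood

print(parse_command("Turn on the bedroom fan!"))  # ('bedroom fan', 'on')
print(parse_command("Open the window blinds"))    # ('window blinds', 'open')
```

Anything outside the hard-coded vocabulary simply fails – which is why real assistants lean on cloud-side models, and why the utterance has to leave your house at all.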
And then there’s the not-so-funny side to having an app for everything. Just take a look at the recent lawsuits and missteps by tech companies.
The baby monitor story is scary. A mother discovered to her horror that the baby monitor “was slowly panning over across the room to where our bed was and stopped.” That’s just one of the ‘things’ we want our smart homes connected to.
How about door locks? You can’t make this stuff up: A man wearing a Batman T-shirt was locked out of his home in September last year when his Yale lock, combined with his Nest security system, thought he was an intruder. The ‘smart’ doorbell identified the cartoon character and tried to be too smart for the man’s liking. Sounds a lot like the command, “Open the pod bay doors, HAL” in 2001: A Space Odyssey. Poor Dave was locked out with, “I’m sorry, Dave. I’m afraid I can’t do that!”
Have you seen this concept video? Robots that farm. It’s disturbing, to say the least, to think that the field of robotics is being applied to areas we never used to anticipate. No longer ‘programmed’ robots, these are machines that learn, applying what we now call machine learning to the environment they are placed in. For instance, could a robot learn about, and work in concert with, other devices on the so-called farm? (It’s actually a greenhouse.)
To put it in context, if robots could shuttle between products on shelves in an Amazon warehouse, this is just an extension of that – an industrial application. We are at the starting blocks of the Fourth Industrial Revolution, so these upheavals – technological, economic, environmental, social, etc. – are just beginning to show up. I’ve been critical of the rush to apply AI to everything, while holding out some optimism that these players and industries might still need some humans, even as they replace others.
MIT has just announced it will add a new college, the Stephen A. Schwarzman College of Computing, dedicated to world-changing breakthroughs in AI, and their ethical application. The college will “reorient MIT” to add 50 new faculty positions, and give students in every discipline an opportunity to develop and apply AI and computing technologies.
The term ‘ethical’ keeps popping up these days in relation to Artificial Intelligence. MIT expands on this, saying it will “examine the anticipated outcomes of advances in AI and machine learning, and to shape policies around the ethics of AI.” As I have mentioned elsewhere, most experts (the warnings of Elon Musk, Bill Gates, and Tim Berners-Lee aside) agree that we are just at the tadpole stage of the life-cycle of AI.
However, some, such as sci-fi writer Isaac Asimov and even Stephen Hawking, have had concerns. Hawking, for instance, remarked that “we all have a role to play in ensuring that we and the next generation have the determination to engage with science … and create a better world for the whole human race.” MIT seems to be the first large institution to take up this mantle and, in the process, redefine and re-invent its role in education.
A few weeks back I featured an ominous exercise, conducted seven years ago by the Naval Research Laboratory. Today Artificial Intelligence is taking us into a new machine age, with devices, and not just robots, able to grow ‘intelligent’ with data they glean from other machines we use.
Big players are developing capabilities in AI – from PwC and IBM to Tesla and Alibaba!
I’ve had some fun with Alexa. The matter was settled over the Christmas break: We can do without AI in our home.
I had previously written about it here, and featured voice assistants in my last tech column, “I spy with my little AI.” I reference how creepy it could get should an AI-enabled device such as Alexa, Google Assistant, or even Siri eavesdrop on our private conversations. AI devices, after all, are supposed to do our bidding, not spy on us. But there’s a fine line between passively listening and spying.
So when we discovered that an Airbnb we rented over the break provided an Amazon Echo speaker, it got to the point where (after a few rounds of asking Alexa random questions and finding ‘her’ quite annoying) I unplugged it and put the darn thing away.
It was no surprise, then, to hear that at the Consumer Electronics Show (CES) in Vegas, several new breeds of AI devices were unveiled, designed to respond to the human inclination to suddenly want to talk to hardware – such as the smart refrigerator by LG that ‘talks’ to a smart oven.
Which makes me wonder: Just at the time when we have plenty of research pointing to the correlation between being too plugged in and being extremely socially disconnected, we have the tech sector pushing products that seem to exacerbate the issue. I don’t need a smart fridge, thank you very much – I just need a painless way to talk to an LG service rep (25 minutes on hold seems customary) when my fridge behaves badly.
And speaking of snooping devices, here’s something that is advertised as being able to monitor a home. A clothes hook with a hidden camera. Creepy? Or is it the sign of (the Internet of) things to come?
What might you get if you affix an android head onto a metal-and-plastic life-size body? More than a bobble-head, for sure, especially if there’s a whole bunch of robotics, plus artificial intelligence, under the hood.
The android known as Sophia debuted at the Future Investment Initiative, an event with speakers as varied as Richard Branson, Nicolas Sarkozy, and Maria Bartiromo. Indeed, Sophia made recent headlines because Saudi Arabia granted it ‘citizenship’ – whatever that means. Let that sink in for a moment: giving civic status to a machine.
Hanson Robotics, the workshop where Sophia was built, has several models: a bald-headed Han, a 17-inch-tall boy robot called Zeno, and a full-sized animatronic Albert Einstein. These bots use facial tracking and natural language processing, and their creators plan on developing emotional intelligence for the Einstein model.
Robotics is a double-edged sword. I cover robotics, help train students, and often talk of being alert to where all this could be headed. Governments, labs, schools, policy-makers and ethicists should be joining the debate. (Recall Elon Musk and others sounded a warning that AI could threaten human civilization.) It shouldn’t be a conversation dominated by those in technology alone.