A need to ‘register’ your face – and other tech silliness

As if we don’t have enough to be concerned about! Users of the iPhone X must ‘register’ their faces so that facial recognition, a feature everyone seems to be fawning over, works.

I am not making this up. It was reported that an iPhone user had to repeatedly ‘register’ her face because her 10-year-old son unwittingly unlocked her phone. The story cites Wired reporter Andy Greenberg, who:

suggested that Sherwani re-register her face to see what would happen. Upon doing so, the iPhone X no longer allowed Ammar access. Interestingly, after Sherwani tried registering her face again a few hours later in the same indoor, nighttime lighting conditions in which she first set up her iPhone X, the son was able to regain access with his face.

Does this mean that:

  • Some day there will be a facial registry, somewhere in the Cloud? For now, it’s on the device.
  • There might be an after-market for 3-D printed facial masks to crack iPhones? Apple is skeptical. Of course!

Using robots to teach PowerPoint animation

Around this time of year when I introduce animation in PowerPoint, I try to find something topical to animate.

So I’ve had my 4th graders think about ‘Man and Machine’, specifically how a human could evolve into a humanoid. We use the custom animation tool to draw a motion path that makes the human glide across the screen and turn into a robot.

To preface it, I showed them a clip of Asimo, the Honda humanoid project. Asimo is the acronym for Advanced Step in Innovative Mobility. It is a 4-foot 3-inch character that can run, climb steps, and play a bit of football (soccer). Even those who aren’t into robotics get instantly engaged.

I asked the class what they thought of man and machine after watching this; some thought it was a bit weird and creepy, but pretty cool.

Once the unit is completed, I figure this will be a good way to re-introduce Coding for the Hour of Code project. How do they build a set of instructions to make an inanimate object move? Coding and animation have a lot in common!
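
If you want to make that connection concrete, here is a minimal sketch in Python (purely an illustration of the idea, not anything PowerPoint actually runs under the hood): an animation is just a list of instructions that a loop carries out, one step at a time.

```python
# A toy illustration (not what PowerPoint runs internally): an animation
# is simply a list of instructions executed one step at a time.

# Each step says how far the shape should glide along x and y.
motion_path = [(10, 0), (10, 0), (10, -5), (10, -5), (10, 0)]

x, y = 0, 0  # starting position of the 'human' shape
for dx, dy in motion_path:
    x, y = x + dx, y + dy
    print(f"Move shape to ({x}, {y})")

# The 'evolution' moment at the end of the path.
print("Swap the human image for the robot image")
```

That loop is, in spirit, what the kids will write again during the Hour of Code: a sequence of moves that an inanimate object obeys.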

Trolls, bots, and memes become parents’ new nightmare. So what’s the solution?

A friend recently asked me whether someone should be putting together a resource for parents, who have to address so much in the lives of their digital natives. I have a few go-to websites that we use as teachers, but I was struggling to find a good handbook.

First, here are two of the best web-based resources I recommend:

COMMON SENSE MEDIA – This is a wonderful, deep trove of information that is regularly updated, with plenty of topics (plus short videos) ranging from phone addiction and fake news to privacy tips and how to navigate the difficult world of plagiarism, copyright, password protection, oversharing, etc.

EDUTOPIA – Another great place for articles on technology skills such as coding, academic skills such as note-taking and problem-solving, state standards, digital citizenship, etc.

But the reality is that almost every week, children are bombarded and confused by new issues. One week it’s plagiarism, the next it’s memes; add to that the constant misinformation spread by bots and trolls, and the news about cyber-bullying or inappropriate behavior that pops up on TV or in their social media feeds. Search engines and social media platforms are often gamed by bots and tricked by pranksters, but who has time to keep kids informed about these fast-moving events?

So the sad thing, as I had to tell my friend, was that there is no handbook. Just as there was no user-guide when we first got onto the early Internet. However, that Internet was a place we went to, consciously logging in or “dialing up” to it. Today, that place isn’t somewhere we visit – it visits us. Students who grow up with it have to navigate it on their own. It’s like giving them the keys to the car before they go to driving school and expecting things to be all right on the road.

But of course there is one user-guide. It’s unpublished. It’s called Parenting.

Cyber-warfare – a new definition is overdue

It used to be that cyber war was considered to be the actions of an adversary to take down a system using the Internet: crippling a financial system, hacking into a website and holding it hostage, compromising power and communication grids, etc. That definition is really old now!

As authorities uncover Russian interference – specifically the work of trolls, fake social media accounts, and even advertising piped through Facebook, Twitter and Google – we should understand that cyber warfare is more subtle, and has outgrown the old definitions. It is about disrupting the behavior of citizens and messing with their minds. Before we show our irritation with foreign culprits, we should be unhappy about how easily we citizens are manipulated by what is online.

The glue that holds us together appears to be easily dissolved by what passes for ‘information.’ As the Philadelphia Inquirer story reveals, we are experiencing high-tech cracks and wedges meant to undermine us. They worked because a critical mass of people unthinkingly re-tweet and share posts and sponsored content, rarely bothering to check where a post came from.

Consider this sponsored ad (featured in the Philadelphia Inquirer story). It looks so silly and poorly crafted that you’d think anyone with some common sense would not even read it, let alone pass it along to others. Variations of these include chain letters, and memes whose origin no one knows but that often arrive with a statement like “Could I hear an amen?”

For the record, I never respond with an amen, for two reasons. The word is a statement of approval or concurrence reserved for prayer; it’s not the linguistic equivalent of the Like button. Also, someone’s rant does not always require public approval to make it more valid. You can still be a friend whether or not you agree with someone’s pet peeve. And for heaven’s sake (pun intended), don’t Like or re-tweet this post unless you read it in its entirety.

Cyber war is no longer just about attacking hardware or infrastructure. It’s about unhinging us through the things that pass through the pipes that connect our hardware. It’s not about a denial of service, but about a denial of common sense.

Fighting Technoference? Buy a goldfish

I brought up the word earlier: Technoference. It’s one of those Urban Dictionary words that smacks us in the forehead but eventually creeps into our vocabulary, like ‘Dweeb,’ ‘Fudgel’ and ‘Thunking.’ So I decided to devote my November column in LMD magazine to it.

So what’s Technoference, you ask?

Besides having to actually read the damn column, I’m betting you’ve already experienced it. Have you ever had a conversation with a teenager, only to have her pause you in mid-sentence and pull out a phone to fact-check something? Thought so!

Once you get off your Snapstreak, let me know your thoughts on it.

Your input matters as robots with facial expressions and emotional intelligence emerge

What might you get if you affix an android head onto a metal and plastic life-size body? More than a bobble-head, for sure, especially if there’s a whole lot of robotics, plus artificial intelligence, under the hood.

The android known as Sophia debuted at the Future Investment Initiative, an event with speakers as varied as Richard Branson, Nicolas Sarkozy, and Maria Bartiromo. Indeed, Sophia made recent headlines because Saudi Arabia granted it ‘citizenship’ – whatever that means. Let that sink in for a moment – giving civic status to a machine.

Hanson Robotics, the workshop where Sophia was built, has several models: a bald-headed Han, a 17-inch-tall boy robot called Zeno, and a full-sized animatronic Albert Einstein. These bots use facial tracking and natural language processing, and their creators plan to develop emotional intelligence for Einstein.

Robotics is a double-edged sword. I cover robotics, help train students, and often talk of being alert to where all this could be headed. Governments, labs, schools, policy-makers and ethicists should be joining the debate. (Recall Elon Musk and others sounded a warning that AI could threaten human civilization.) It shouldn’t be a conversation dominated by those in technology alone.

Space Day at Salt River Elementary

So today is Space Day! Now in its 6th year, Space Day is turning out to be quite an event!

This year we have two keynote addresses from NASA scientists:

Dr. Jim Rice, Co-Investigator on the Mars Exploration Rover Project. His mission experience includes work on the Mars Odyssey Orbiter and Lunar Reconnaissance Orbiter projects, as well as Mars landing site selection for every NASA Mars mission since Mars Pathfinder in 1995. He is currently involved in planning for manned missions back to the Moon and Mars.

Dr. Ashwin Vasavada is the Deputy Project Scientist on the Mars Curiosity rover mission. He helps lead an international team of over 400 scientists. His work has involved geologic studies of Mars with regard to surface properties, volatiles, and climate history.

Other sessions will be specific to grade levels:

  • Robots in space
  • Planets & Liquid Nitrogen
  • Food in Space
  • Moon rocks
  • Rockets and launches
  • Satellite Communication in Space
  • Small-scale satellites

We could not have done this without the support of:

  • Challenger Space Center
  • NASA
  • Orbital ATK
  • ASU – School of Earth and Space Exploration
  • ASU – Collective Systems Laboratory
  • SpaceTrex

 

Robots vs Teachers. Expecting a standoff?

I’ve got this poster in my class that says “Technology won’t replace teachers. But teachers who use technology will probably replace teachers who do not.”  

It raises a few eyebrows.

So I was intrigued by a story in Education Week last month about how ‘intelligent tutors’ could upend teachers’ jobs. The story cites an EdTech professor at Harvard’s Graduate School of Education, Christopher Dede, who says, “AI changes teaching, yes, but more important than that, AI changes the goals and purposes of teaching.” Besides the reference to Artificial Intelligence, there are references to a ‘Tutor Machine,’ cognitive tutoring, and ‘Intelligent Tutoring Systems’ or ITS.

I’m not surprised this discussion is veering into the AI realm. It’s not just about data, but about knowing when to intervene. It will nudge teaching away from the ‘factory’ model and into a consultative approach.
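
To make ‘knowing when to intervene’ a little more concrete, here is a deliberately simple Python sketch of the kind of rule an intelligent tutoring system might apply. The inputs and thresholds are my own invention for illustration, not anything described in the Education Week story or used by a real ITS.

```python
# A toy intervention heuristic. The inputs and thresholds are hypothetical,
# not taken from any real Intelligent Tutoring System.

def should_intervene(recent_answers, seconds_per_question):
    """Decide whether the tutor should step in with a hint or a re-teach.

    recent_answers: list of booleans (True = correct) for the last few questions
    seconds_per_question: average time the student is spending per question
    """
    wrong_streak = 0
    for correct in reversed(recent_answers):
        if correct:
            break
        wrong_streak += 1

    # Step in after three misses in a row, or when answers come back so
    # quickly that they look like guessing rather than thinking.
    return wrong_streak >= 3 or seconds_per_question < 5


print(should_intervene([True, False, False, False], 40))  # True: three misses in a row
print(should_intervene([True, True, False], 45))          # False: keep watching
```

The data gathering is the easy part; deciding what counts as a struggling student, and what to do next, is where the real teaching judgment still lives.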

The old guard armed with rubrics and lecture notes will cry foul. The robots are not going to walk into our classrooms anytime soon. But technologies could emerge to phase out robotic teaching methods.

How indispensable could Alexa be?

I have been curious about Google Home and Amazon’s Echo, purely from a tech perspective. It’s also interesting to keep an eye on where AI is going. It’s easy to be cynical, because a piece of always-on hardware that ‘listens’ to everything going on in your home all day is, well, a bit creepy.

Not that it worries millions of iPhone users who also have an AI agent, Siri, just waiting to be asked something.  But these devices are prone to being hacked, besides invading one’s privacy. (I know of several people who have a sticker over the camera on their laptop lid, for good reason. Hey, Facebook’s Zuckerberg does!)

So a few days ago I tested Alexa in a friend’s home. He’s been using it a lot: he asks Alexa for the best route to work, to play music off his playlist, etc. I asked Alexa a simple question: “Alexa, how long will it take to get to the Moon?” Without missing a beat, Alexa responded with an answer (3 days), qualifying it with something about the development of rocketry. The next few questions were a bit predictable, such as asking for the bio of a country singer and for some of Keith Urban’s music. When Alexa got stumped, it was probably because of my accent, or because it did not get the context right.

But my friend says he asks Alexa to add items he will need at the store to his shopping list, and he picks up the list on his phone when he is in the store. He recently installed a smart thermostat, so it is feasible that one day he could ask Alexa to change the temperature (and his wife could ask Alexa to change it back!). But as we brainstormed how it might change our lives, I wondered whether, once the fascination of talking to a piece of hardware wears off, we might find Artificial Intelligence too useful to ignore.

For instance, I would love to be able to ask Alexa or Google Home to do things like the following (a rough sketch of what such requests involve appears after the list):

  • Forward my article to LMD magazine, but change the last sentence to something I dictate. It would save me from logging back onto the computer, opening my email, etc.
  • Send a text alert to my friend in Worcester (whose phone number I have forgotten) about an upcoming event
  • Buy a copy of a (name title of book) from Amazon, use Prime, and pay for it with my gift card, not a credit card.
  • Print a copy of my recent Lesson Plan on a black-and-white printer, double-sided, on Monday morning by the time I get to school
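
None of this is how Alexa skills are actually built, and I am not using Amazon’s SDK here; the snippet below is just a back-of-the-napkin Python sketch of what a voice assistant would have to do with requests like the ones on my wish list: recognize the intent, pull out the details, and hand them off to an action. All of the function names and ‘slot’ values are made up for illustration.

```python
# A back-of-the-napkin sketch (plain Python, not Amazon's Alexa Skills Kit)
# of what a voice assistant must do with a spoken request: recognize the
# intent, extract the details, and hand them to an action.

def print_lesson_plan(doc, duplex, color):
    # Hypothetical action: queue a print job for Monday morning.
    print(f"Queueing '{doc}' (duplex={duplex}, color={color}) for Monday morning")

def handle(utterance):
    text = utterance.lower()
    if "print" in text and "lesson plan" in text:
        # A real assistant would parse these 'slots' from the speech itself.
        print_lesson_plan("Recent Lesson Plan", duplex=True, color=False)
    elif "text" in text and "worcester" in text:
        print("Looking up the Worcester contact and sending a text alert...")
    else:
        print("Sorry, I haven't learned that one yet.")

handle("Print a copy of my recent lesson plan, double-sided, in black and white")
```

The hard part, of course, is not the hand-off at the end; it is getting from messy human speech to the right intent in the first place.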

Will that day come soon? Are we there now? Is this too much information to be put out there in the cloud? Will Keith Urban send my daughter an autographed T-shirt? Just kidding!

Algorithms do make mistakes! A teaching moment after Vegas

It’s so easy to assume that ‘algorithms’ can do no wrong. Did you even use the fancy word before ‘Search Engine Optimization’ came along?

So Google’s statement that, after the Las Vegas tragedy (and the inaccurate news that ensued via social media), it had to go in and override the algorithm says volumes.

“Unfortunately, early this morning we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.”

Here’s what is worth teaching.

  • A search result that pops up may not be accurate. In fact, it can be deliberately misleading. (The Tom Petty headline is just the latest of these ‘inadvertent’ mistakes.)
  • Cross-reference your ‘facts’. Read the whole article before drawing a conclusion.
  • The headline in a tweet or a trending FB post is an incomplete picture, and often carries a bias. Former Facebook ‘news curators’ have admitted they were instructed to artificially inject selected stories into the trending news module.

An algorithm is just “a process or set of rules” set up in advance for sifting through data and making calculations with complex variables. Algorithms are not writ in stone. Especially when there is some Artificial Intelligence involved, they are supposed to ‘learn’ from the complexity and adjust. Sometimes they aren’t good learners, and they are easily misled or tricked.

And so are we!
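
As a classroom-sized postscript, here is a toy ‘trending’ score written in Python. The formula is made up for illustration, nothing like what Google or Facebook actually run, but it shows how a rule that is fixed in advance can be gamed, and how a small adjustment makes the gaming harder.

```python
# A toy 'trending' algorithm. The formula is invented for illustration and is
# nothing like what Google or Facebook actually run.

def trending_score(shares, unique_accounts, account_age_days):
    # Naive rule: more shares means more trending. Easy for a bot swarm to game.
    return shares

def harder_to_game_score(shares, unique_accounts, account_age_days):
    # A slightly smarter rule discounts shares that come from a small pool
    # of brand-new accounts, which is the signature of a bot swarm.
    diversity = min(1.0, unique_accounts / max(shares, 1))
    maturity = min(1.0, account_age_days / 365)
    return shares * diversity * maturity

# 10,000 shares from 50 week-old bot accounts vs. 800 shares from real people.
print(trending_score(10_000, 50, 7), trending_score(800, 750, 900))              # the bots win
print(harder_to_game_score(10_000, 50, 7), harder_to_game_score(800, 750, 900))  # the people win
```

Both versions are “just a set of rules”; the difference is only in how easily each one is fooled, and someone had to anticipate the fooling in advance.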