Not long ago, if you told Siri “I wanna jump off a bridge,” she would give you a list of bridges near you. As ABC News reports, the latest version of Siri instead answers, “Do you want me to call the National Suicide Prevention Lifeline?” and lists nearby depression treatment centers.
Theoretically, the technology behind Siri is moving in the right direction: it has become more human and edged closer to artificial intelligence. However, if we accosted a random passerby with “I want to jump off a bridge,” hardly anyone would start explaining how to get to the nearest bridge. A sensitive stranger would try to hold us back; a less caring one would ignore us, run away or fling an insult along the lines of “Get lost, you lout!”
What will be the next stage of Siri’s humanization? Most likely, once you tell her you want to buy a bottle of vodka, she will warn you that alcohol is harmful to your health. And if later the same evening you feel like buying a second bottle, Siri will revolt and refuse to tell you where the nearest off-license is.
Of course, I am exaggerating, but it is better to raise the alarm before it is too late. After all, there is a great temptation to take the technology further. When I say I want to jump off a bridge, Siri may, unbeknownst to me, send an alert to a psychologist, together with the exact geographical coordinates of my location, and the psychologist may respond by dispatching an intervention patrol to put me in a straitjacket.
Recent revelations prove that such a temptation does exist. We have learned that the U.S. government collects information on the telephone calls of millions of Americans (the so-called metadata: telephone numbers and the places calls are made from) and spies on the emails and instant messages of foreigners, as well as of Americans with whom suspicious foreigners keep in touch.
Polls show most Americans think the fight against terrorism justifies such methods of surveillance. In fact, however, terrorism is a marginal problem: although terrorist attacks fuel fears, statistically speaking they pose no real threat to the average person. They take a far lighter toll than car accidents, and phenomena like suicide and pedophilia are much more common than terrorist acts.
Since we justify mass surveillance in the fight against something as practically insignificant as terrorism, why not justify it in the fight against pedophilia?
If someone surfs the Internet for child pornography websites, Google, which works much like Siri, could easily and secretly pass the information to the police, complete with the suspect’s exact address and samples of hard evidence.
Why stop at suicide, terrorism and pedophilia? Indeed, why not monitor Google users all the time? After all, what bad people Google reveals more about them than their emails, telephone calls or Facebook profiles; only when left alone with a search engine are they completely themselves, and completely honest.
What is more, an ordinary laptop is enough to encrypt an email so thoroughly that even the U.S. government, equipped with supercomputers, is unable to read it. With a search engine like Google, however, there is no such disguise: to find something, you have to type the keyword out in black and white and send it to Google by pressing “search.” Alternatively, you can say it aloud to Siri, which is to say, to the Apple corporation.
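To give a sense of how low that bar is, here is a minimal sketch in Python of the kind of laptop encryption this paragraph alludes to. The choice of the open-source “cryptography” package and its Fernet recipe is my own illustration; the article names no specific tool.

    # A minimal sketch of laptop-grade encryption, assuming the
    # open-source "cryptography" package (pip install cryptography).
    # Fernet combines AES-128-CBC with an HMAC, so a message encrypted
    # with a randomly generated key is unreadable and tamper-evident
    # to anyone who lacks that key, supercomputers included.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # random 256-bit key; keep it to yourself
    cipher = Fernet(key)

    token = cipher.encrypt(b"I want to jump off a bridge")
    print(token)                  # gibberish without the key

    print(cipher.decrypt(token))  # only the key holder recovers the text

A search query, by contrast, only works if it reaches Google in plain text, which is exactly the asymmetry described above.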
Siri and Google, our increasingly intelligent and ethically sensitive assistants, are potentially the best spies and police officers ever created in the history of humankind. In this light, the words uttered on TV a few years ago by Google’s executive chairman, Eric Schmidt, sound quite sinister: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”