Emerging risks or old risks simply renamed

Mar 23rd, 2017

Today we will discuss whether intelligent personal assistants and connected toys using speech recognition technology represent emerging risks, or whether they are old risks simply renamed. We are referring to Amazon Alexa, the My Friend Cayla doll and various connected teddy bears.

The environment

First, let’s keep in mind that any new technology or development comes with upward and downward risks. Oftentimes the upward side opens fantastic new horizons, well worth “tolerating” the downward aspects. The point of this post is to understand how deep the downward side runs. We also want to discuss possible mitigations, ideally simple and efficient ones.

Emerging risks or old risks simply renamed

We are not referring to any specific make or supplier. Indeed, we believe all these contraptions can, sooner or later, depending on the opportunities, be abused in one way or another. Recent news about smart Samsung TVs, or home speakers spreading fake news, has shown this.

Furthermore, we have to place this discussion in its general context: a context in which a great majority of users voluntarily surrender all sorts of “private” information to social media, and “tolerate” the downward aspect until they discover, in some cases, the extent of the damage.

Emerging risks or old risks, simply renamed

What are the risks behind these marvelous contraptions?

Well, there is a likelihood that the flow of information is diverted and that “who-knows-who” may use it nefariously. That would certainly generate some downward consequences.

Likelihood and consequences are the basic ingredients of risk.
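To make that concrete, here is a minimal, purely illustrative sketch of how the two ingredients combine in the simplest first-cut formulation. The numbers are our own hypothetical assumptions, not measurements from any device:

```python
def risk_score(likelihood, consequence):
    """First-cut risk metric: probability of the adverse event
    multiplied by the magnitude of its consequences."""
    assert 0.0 <= likelihood <= 1.0, "likelihood must be a probability"
    return likelihood * consequence

# Hypothetical scenario: a 2% yearly chance that a connected toy
# leaks family recordings, with an estimated impact of 50,000
# (in arbitrary cost units).
print(risk_score(0.02, 50_000))  # 1000.0
```

A full risk analysis would of course consider ranges of scenarios and distributions of consequences rather than a single product, but the multiplicative structure is the common starting point.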

When installing one of these devices, or giving a new toy to your children, you are letting the eyes and ears of “who-knows-who” into your house. And you let that “who-knows-who” talk to the most vulnerable members of your family.

Are these risks new or emerging? Actually, neither, or only partially. The big difference is that today the interface is apparently “not human”, although there may be humans behind the scenes at some point. This contrasts with the past, when the “interface” was first and foremost human: typically an unscrupulous servant, a spy, a traitor, etc.
The consequences end up being the same: brainwashed people, mistrust, jealousy, etc. Open any literary drama and you will find some neat examples.

So, these apparently emerging risks are actually “classic”, old risks against which humanity has fought since time immemorial. Nevertheless, those who could afford such staff have always tolerated the risk generated by unscrupulous personnel.

Thus we can answer whether Cayla, connected teddy bears, Alexa and the like represent emerging risks or old risks simply renamed. The answer is that it is predominantly an old risk, simply renamed.

Historic Mitigations

Rulers all over the world, and the clergy of almost every creed, found a simple way to somewhat protect themselves from indiscreet ears and the potential consequences of leaks: they used “special languages” (not necessarily purposely encrypted ones) which “belonged to them” to reduce the risks.

For example, not so long ago French was the language of diplomacy, and many royal courts of Europe embraced it. Those included the English, Russian, German and Italian courts, and even some overseas ones, like the Egyptian royal court in the 19th century. Servants and other personnel certainly did not understand a word and “voilà, les jeux sont faits” (“there, the bets are placed”)!

The Catholic Church used Latin, and refused at the Council of Trent (16th century) to adopt national languages, “to preserve its unity”. The Catholic Church is certainly not alone: Hinduism, Buddhism, Orthodox Christianity and Judaism all use “sacred languages”. Interestingly, Islam uses classical Arabic, making it something of an exception.

What are the capabilities of the modern devices and what is happening?

Alexa, at this time, can interact and communicate only in English and German. However, it is only a matter of time before its designers expand the language availability.
Most devices with Alexa allow users to activate the device using a wake word (such as “Echo”). Other devices require the user to push a button to activate Alexa’s listening mode.

Many toys can be accessed remotely and used to listen to your family (CloudPets). This to the point that many voices are being raised advising the public not to buy these devices.

Those same voices invoke regulators’ intervention in what seems to be a totally unregulated market.

Cases of collapsing “connected toy” companies doing nothing to protect their clients, while hackers steal millions of voice recordings of kids and parents, have already emerged.
But there are not only toys. Parents may want to install a cute teddy bear equipped with a concealed camera to check on a babysitter, or on children left alone. Then they may forget that any of these devices can be hacked, following the rule that “anything is hackable, sooner or later” (possible hacks in the UK).

We could write a book, adding many pages every day, as things are evolving at lightning speed.

As a matter of fact, while writing this blog post we learned that a vibrator maker that secretly tracked use of its “toy” ended up reaching a $3.75 million class action settlement with users following allegations of “sex-habits spying”.

Modern mitigations

First, let’s note that today’s spoken dialogue systems can already recognize the language we speak and partially understand it. Automatic language identification is not a problem anymore. Additionally, Automatic Speech Recognition, that is, transcription from speech to text, has reached high levels of accuracy. Natural Language Understanding systems can determine the language of a written or spoken text, analyze it and extract information.
Can the system teach itself a new language from scratch? Until today the answer is no. The system needs a new corpus (i.e. a set of training data) in order to generate new models through machine learning. However, as the availability of training data is growing faster than ever before, we can imagine that the learning speed will increase (technical information courtesy of CELI Language Technology).
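As an illustration of why language identification is “not a problem anymore”, here is a minimal sketch of the classic character n-gram approach (Cavnar–Trenkle ranked profiles). The two tiny training corpora below are our own toy assumptions; a real system trains on millions of words per language:

```python
from collections import Counter

def trigram_profile(text, top=50):
    """Ranked list of the most frequent character trigrams in a text."""
    text = "".join(ch.lower() for ch in text if ch.isalpha() or ch == " ")
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    return [g for g, _ in grams.most_common(top)]

def out_of_place(profile, reference):
    """Sum of rank differences; unseen trigrams get a maximum penalty."""
    penalty = len(reference)
    return sum(reference.index(g) if g in reference else penalty
               for g in profile)

# Toy training corpora -- far too small for real use.
TRAINING = {
    "english": ("the quick brown fox jumps over the lazy dog and the cat "
                "sat on the mat while the children played in the garden"),
    "german":  ("der schnelle braune fuchs springt ueber den faulen hund "
                "und die katze sitzt waehrend die kinder im garten spielen"),
}
REFERENCES = {lang: trigram_profile(txt) for lang, txt in TRAINING.items()}

def identify(text):
    """Pick the language whose reference profile is closest to the text."""
    profile = trigram_profile(text)
    return min(REFERENCES, key=lambda lang: out_of_place(profile, REFERENCES[lang]))

print(identify("the dog and the cat played in the garden"))
print(identify("die kinder spielen mit dem hund im garten"))
```

The same idea, scaled up and combined with acoustic models, is what lets a modern assistant decide which language it is hearing before it even attempts transcription.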

Thus it may well be that Alexa’s NLP will soon be able to recognize the language you speak and adapt. But if and when that occurs, it will be “market driven”. Thus dialects and unusual languages may stay safer for a bit longer.

Some suggestions

  • The cave-man solution. Pull the plug of the device, kill the battery, use an axe.
  • The spy movie trick. Put a loud source of music next to the device to mask your voice.
  • The black hood. Blind your device with an “executioner-style” black hood. Remember: no holes for the eyes, and the device does not need to breathe either.
  • Talk a language that is “secret” to the machine. For example, not German or English, which Alexa currently understands.
    • It is a good exercise for the whole family to learn another language.
    • It does not need to be Navajo, the language the US military used to code messages during WWII. But it could be a dialect.
    • Dialects are great as we trust no NLP provider will invest in a dialect.
    • If none of the above works, use your own freshly developed argot (slang). After all, French outlaws, and thugs all over the world, used to speak slang to avoid police interference.

Conclusions

We had fun writing this piece on emerging risks or old risks simply renamed.
Do not be mistaken: although we used a “light tone”, the risks are very serious. None of us wants to discover our children entangled with remote confidants who have their own very peculiar set of values.

On the other hand, these marvelous developments all have an upside!

Call us if you wish to understand how to evaluate the balance between upside and downside risks in your life, company, organization.

Category: Risk analysis
