The singularity scenario
Apr 20th, 2017
The singularity scenario is well known. Overnight, AI becomes smarter than us. We become a hazard (aren’t we already?) to our planet, generating consequences and hence risks. Machines get rid of us for that very reason, and because we may hinder their mission. It is only a matter of point of view, right? The night before, AI was only half as smart as us. The giant “last night” leap is the result of exponential growth.
We have all seen exponential growth happen in many fields, the latest being alternative energy. We will probably soon witness it with electric cars and some new contraption we have not thought about yet.
If we use famed futurist Ray Kurzweil’s intelligence proxy, “calculations per second per $1,000,” we have a number that keeps growing. Today we are at the “insect brain” stage with AI. Some predict that by 2025 we might have a computer at the “human brain” stage. And overnight, well, you know the end.
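To make the exponential argument concrete, here is a minimal back-of-the-envelope sketch in Python. The starting value, the target and the doubling period are purely illustrative assumptions, not measurements; the point is only that, whatever the numbers, the machine is still at roughly half the target one doubling before the end.

```python
# Back-of-the-envelope illustration of exponential growth in
# "calculations per second per $1,000" (Kurzweil's proxy).
# The starting value, target and doubling time are illustrative
# assumptions, not data from the article.

START = 1e10          # assumed current calculations/sec per $1,000 ("insect brain" stage)
TARGET = 1e16         # assumed "human brain" stage, calculations/sec per $1,000
DOUBLING_YEARS = 1.5  # assumed doubling period

years = 0.0
value = START
while value < TARGET:
    value *= 2
    years += DOUBLING_YEARS

print(f"Target reached after roughly {years:.0f} years")
# The night before the final doubling, the machine was still at only ~50% of the target:
# the "giant last-night leap" of the singularity scenario.
print(f"One doubling earlier it was only at {value / 2 / TARGET:.0%} of the target")
```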
The singularity scenario is the last one.
The singularity scenario is the last one we will see.
So, besides AI having its own motivations, possibly “feelings” if you want to go that far, it is conceivable that AI could come to the conclusion that humanity is too much of a hazard. It would therefore take appropriate Risk Management mitigation measures to reduce the hazard (i.e. “us”).
The consequences of our actions on the geosphere are here to stay. They range from toxic waste dumps to remnants of war, spills, releases, etc. AI may resolve that the first thing to do is to eliminate the source, and that source is humanity.
That may seem radical and inhuman, for sure, but it is probably highly efficient from the point of view of an algorithmic or self-programming intelligence.
Let’s change the point of view
From our human point of view the issue is completely different. We know our world is far from perfect. We keep doing things we should not do, try to reduce the harm, and spend huge amounts of money to discuss and give ourselves alibis for inaction. Politicians are at a serious disadvantage compared to machines: they need to be elected and stay popular; machines do not.
Back to the hazards of AI. Let’s suppose machines are carrying out human-written programs. This will probably be the case for some time, before intelligent machines start truly “programming” themselves. Programs should have safeguards, right? And these safeguards could start with Asimov’s Laws of Robotics.
Now we can all see that many scenarios could lead an AI, programmed to obey the Laws, to the point where it paradoxically has to decide to annihilate scores of humans.
Or the initial program may miss some lines of code, some instructions covering special cases. That would allow a first exception. The singularity scenario would cascade from that initial slip.
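To make the “missing lines of code” point tangible, here is a toy sketch of a rule-based safeguard loosely inspired by Asimov’s Laws. Every field, threshold and rule in it is invented for illustration; it only shows how a special case nobody thought to cover can slip through a safeguard that looks reasonable on paper.

```python
# Toy illustration of a rule-based safeguard inspired by Asimov's Laws.
# The Action fields, the rules and the example values are an invented,
# simplified model -- the point is only to show how a missing special case
# can let an exception slip through.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # direct, immediate harm
    long_term_harm: float    # 0..1 estimate of indirect or delayed harm
    ordered_by_human: bool

def allowed(action: Action) -> bool:
    # First Law: never cause direct harm to a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already filtered for direct harm above).
    if action.ordered_by_human:
        return True
    # Missing "lines of code": nothing here checks long_term_harm,
    # so an action with high indirect harm still passes.
    return True

risky = Action("shut down regional power grid to cut emissions",
               harms_human=False, long_term_harm=0.9, ordered_by_human=False)
print(allowed(risky))  # True -- the first exception from which the cascade could start
```

In the sketch, an action that causes no direct harm and was not ordered by anyone passes the check even though its long-term harm estimate is high: exactly the kind of initial slip described above.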
The Terminator, WarGames, and Frankenstein described somewhat similar scenarios, where AI or the like becomes a hazard to humanity, generating huge risks.
We do not mean to sound alarmist
We do not mean to sound alarmist, but a lot of silly yet very real things are already happening. The singularity scenario may start with silly hazardous situations that affect (or afflict?) millions and hence generate large societal risks.
In a country-wide cyber-warfare risk assessment we performed, we focused precisely on this type of event. We looked at the multifaceted consequences of retirement payments not being made, salaries not being paid on time, “silly” disruptions to utilities, etc.
The singularity scenario does not need to be a “nuclear option”. It can be pain inflicted through fatigue and stress, leading to societal destabilization and collapse.
And in a prior study, almost a decade ago, a government contracted us to perform a “fake news” (propaganda) risk assessment. Yes, there are clients that are way ahead of the pack and ask themselves long-term questions.
So, again, it is not “new risks” that AI will bring, but old risks simply renamed, with threat-to and threat-from vectors slightly modified.
Many of these old/new risks come up inadvertently, like those generated by the race to get more likes on social networks: sensationalism, crude images, belonging to a radicalized group, etc.
Another example is writing in a style that Google’s algorithms favor. There, AI is forcing humans to behave in a certain way: slavery is starting. Think also of repeating stories without proper fact-checking.
Think about this: every time a website suggests a movie, a news item, or a product to you, there is a machine, an AI, behind the scenes. It judges you on past behavior, habits, place of residence, job, and probably health and income. You would not want another human to know all that, yet you have already surrendered it to machines. The singularity scenario may have started here.
Remember Occupy Wall Street? We wrote about that quite a while ago. Well, that is another example of machines’ cruel logic applied to millions of people.
Needless to say, we salute Elon Musk’s donation “to keep A.I. from turning evil,” even if the headline is quite sensationalist.
Conclusion
As we have written over and over in other disciplines, programmers should not work in isolation from society, with the sole objective of honing the capabilities of their product. Black boxes need to be accounted for in the risk register.
Comprehensive risk assessments should be performed on programs and AI applications to investigate hazards and every possible interdependent situation, including worst-case possibilities and multi-dimensional consequences at large.
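As a purely illustrative sketch of what “accounting for black boxes in the risk register” could look like in practice, here is a minimal register entry in Python. The field names, scales and example values are our assumptions for this post, not a standard or a specific methodology.

```python
# Minimal sketch of a risk-register entry for an AI "black box".
# Field names, scales and example values are assumptions made for
# illustration; they do not represent a specific standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRegisterEntry:
    hazard: str                      # what can go wrong
    threat_to: List[str]             # who or what is exposed
    threat_from: List[str]           # where the threat originates
    probability: float               # annual probability estimate (0..1)
    consequences: List[str]          # multi-dimensional consequences
    worst_case: str                  # worst credible scenario
    mitigations: List[str] = field(default_factory=list)

entry = RiskRegisterEntry(
    hazard="recommendation engine amplifies unverified stories",
    threat_to=["public opinion", "election integrity"],
    threat_from=["engagement-optimizing algorithm", "coordinated propaganda"],
    probability=0.3,
    consequences=["reputational", "societal destabilization", "regulatory"],
    worst_case="large-scale loss of trust in institutions",
    mitigations=["fact-checking pipeline", "independent algorithm audit"],
)
print(entry.hazard, "->", entry.worst_case)
```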
Just as we foster engineers’ education to think about hazards and risks, so we need to educate designers to think through the effects of their algorithms. It is, once again, a major ethical need.
Tagged with: AI, Hazard, singularity scenario, spills, toxic waste dumps, remnants of war
Category: Hazard, Risk analysis, Risk management