The Ominous Implications of Artificial Intelligence
The optimistic future depicted in The Madhouse Projects by Rick Badman is one our society could reach in a relatively short time, given the rapid pace of technological breakthroughs since the 2010s. In the book, an automotive engineer, the genius behind multiple innovations in his field, rolls out revolutionary, highly specialized technologies that would greatly benefit humankind, though not without criticism and roadblocks along the way.
As worldwide technological advancement continues to accelerate, artificial intelligence remains among the most sensitive prospects, occupying a moral and ethical grey area in the scientific community. Discussing how close his teams were to perfecting cutting-edge artificial intelligence, Elon Musk once said that AI is far more dangerous than nuclear weapons. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential,” he claimed.
Just a year before Musk gave that talk, Stephen Hawking warned of the cataclysmic impact artificial intelligence could have on the world if it is not strictly monitored in a morally responsible and ethical way. Without a way to anticipate, understand, and avoid the risks that come with AI advancing ever closer to sentience, the technology could bring about the worst event in the history of human civilization. Considering the countless unthinkable events that have already unfolded throughout our species’ history, Hawking’s warning carries dire importance.
Thanks to speculation and mainstream literature, the public tends to associate the hazards of artificial intelligence (AI) with sci-fi horror scenarios: killer machines and highly advanced computer systems that see humans as a threat to be destroyed for the world’s sake. While such cautionary tales often play out in comic books and on screens as exaggerated and far-fetched, the threats posed by artificial intelligence cannot be so easily dismissed, and there are real AI dangers to worry about.
Much as in The Terminator series, AI could theoretically reach a dangerously unstable point at which machines reprogram and improve themselves successively and independently, leading to a so-called ‘technological singularity’ or ‘intelligence explosion’. The risk of machinery capable of outcompeting human effort in the struggle over resources simply cannot be ignored. If the machines become sentient and decide that humanity is inferior, what happens next? The classic Frankenstein-esque trope of the creation turning on its creator? We may soon find out if current advances in artificial intelligence go unchecked or unaccounted for.
Job automation is commonly regarded as the most immediate concern among the dangerous implications of artificial intelligence. It is no longer a question of whether AI will substitute for particular forms of work, but to what extent. Disruption is well underway in many sectors, mainly, though not exclusively, those where employees perform routine and repetitive tasks. A 2019 Brookings Institution report found that 36 million workers face “high” exposure to automation, meaning that at least 70% of their tasks, from supermarket work to industry research to housing and warehouse jobs, could soon be carried out by AI. An even more recent Brookings study suggests that white-collar workers are even more vulnerable.
Because AI algorithms are human-made, bias can be inserted into them either deliberately or unintentionally. If an AI system is built on biased assumptions, or if the data it is trained and evaluated on is biased, it will yield biased results. This can lead to unforeseen effects, such as those seen with biased algorithms and Microsoft’s discriminatory Twitter chatbot. Companies must create and work responsibly as they construct AI algorithms.
On the other hand, when AI takes over simple chores and allows people to dramatically reduce the time they spend working, the resulting freedom might at first glance seem like a wonderland. To feel that their lives have meaning, however, people need to channel their newfound freedom into new activities that provide the same social and psychological benefits as their jobs. Some people and societies will find this easier than others.
Financial concerns would also emerge as robots take over work that people used to be paid for. The economic advantage of greater efficiency is plainly visible in firms’ profit-and-loss statements, but the net benefit to society and civilization is far less clear.