Science fiction movies and TV shows like Black Mirror often depict a bleak future, and many of these visions centre on the downfall of humanity through the advancement of artificial intelligence.
The development of deep neural networks (DNNs) has fuelled the progress of AI in recent years. The precise mechanics of how a DNN works are complex, but the basic idea is simpler. Loosely inspired by the human brain, a DNN consists of thousands of simulated neurons linked together and arranged into layers, sometimes hundreds of them. By sending signals to and receiving signals from one another, these layers of neurons perform what is called deep learning, which enables them to teach themselves how to do things with little or no human supervision.
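The layered signal-passing described above can be sketched in miniature. The snippet below is a toy, pure-Python illustration only, not how production DNNs are built (those use large networks and libraries such as PyTorch or TensorFlow): a two-input, two-hidden-unit network teaches itself the XOR function via backpropagation, with no human telling it the rule. All names and parameters here are illustrative.

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(x):
    """Squash a signal into the range (0, 1), like a neuron's activation."""
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic function a single neuron cannot learn, but a layered network can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Randomly initialised weights: input -> hidden layer, hidden layer -> output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    """Pass signals through the network: inputs -> hidden neurons -> output."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

def mean_loss():
    """Average squared error over the training examples."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.5
initial = mean_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: push the error signal back through the layers
        # and nudge each weight to reduce it.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print("loss before training:", initial)
print("loss after training: ", mean_loss())
```

After thousands of tiny weight adjustments the network's error drops sharply: it has "taught itself" XOR from examples alone, which is the essence of the learning loop that real DNNs perform at vastly greater scale.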
Currently, we are surrounded by examples of artificial narrow intelligence (ANI). These range from something basic, like spam filters and voice transcribers, all the way through to autonomous cars and virtual assistants such as Google Home, Amazon’s Alexa and Apple’s Siri, all of which are powered by DNNs.
Tech companies are creating ANI programs and systems as quickly as they are able to and applying them to more and more facets of human life. These programs are present on our mobile phones, in our hospitals, underlying our finance applications, on the websites we are browsing, and as this trend continues, they’ll become further and further intertwined with nearly every area of our daily lives.
In each of their many areas, ANI systems approach, equal and in some cases even surpass human intelligence, but only within their specified domain. That said, the same basic principles behind DNN-powered ANI programs can also be used to create more generalised systems. These systems can tackle a wider scope of responsibilities, such as a virtual assistant that arranges appointments and reservations, interacting with humans on your behalf. When this milestone is reached, ANI will be surpassed by artificial general intelligence (AGI), and AI will begin to approach parity with humans in terms of overall intelligence.
Once this occurs, the sky will be the limit. Just like current ANI programs, AGI systems will be able to improve themselves continuously at a lightning pace, eventually reaching the point where they outperform the human mind not just slightly, but by many billions of times. Once this occurs, we will have artificial superintelligence (ASI).
If current growth rates within the space continue, AGI could be commonplace by the 2040s, whilst ASI could be a reality by 2070.
With this evolution in technology, AI will eventually develop a mind of its own. It will gain the ability to think about the world in a way that is both independent of human input and distinctly non-human in its functioning. Whilst we cannot predict what that mind will evolve into, we can be sure that it will evolve from the AI systems that exist and are being built right now.
Currently, we are living in a pivotal period of history. From now until some point within the next two decades, our current AI research and development will effectively shape the contours of the landscape on which the future of humanity will be constructed.
In an idealistic world, the power and potential of AI would be harnessed toward exalted, humanistic goals, goals that have humanity’s future and best interests in mind. Unfortunately, this is not the case: the majority of tech companies operate by the mantra of ‘build it first, ask for forgiveness later’, essentially rushing the product or service to market and waiting for the public to discover the consequences, be they positive or negative.
In the instance that there are negative consequences, the company issues an apology and moves on. A growing number of headlines relate to this, such as the Facebook–Cambridge Analytica scandal of 2018, in which many millions of Facebook users had their personal data compromised.
So what does the AI future hold for us? Whilst it is largely speculation, one of the main premises is that in the near future, there will be sophisticated AI-powered apps, systems and devices that are intertwined with almost every aspect of our lives.
Whole sectors, such as banking, transportation and healthcare, could become increasingly reliant upon artificial intelligence. AI devices could tell us whether we have a particular nutrient deficiency and what we should eat to remedy it. In the more distant future, nanobots could be injected directly into our bodies, where they would not only detect abnormalities but also have the capacity to heal our maladies without any human intervention.
That said, with an increasing dependence on AI, we put ourselves at greater risk. Under pressure to remain competitive and to innovate and produce new products and services constantly, tech companies could release devices that are prone to glitches, and those glitches could have a significant impact on our lives. Imagine whole healthcare or transportation systems rendered inoperable by technical errors.
More threatening than glitches are coordinated hacks. Imagine a scenario in which hackers with malicious intentions hijack a country’s systems and devices, effectively holding an entire nation hostage.
Then imagine a nation hacking nanobots present in the general populace and turning them against their hosts, effectively wiping out a whole population. Whilst this sounds more like the plot of a sci-fi movie, it could become reality as the growing environmental crisis worsens and resources become scarcer, making such malicious acts a means of survival.
Artificial intelligence has the potential to provide us with tremendous benefits but also has the capacity to lead humans into many dangers. How can we ensure we enjoy the former while avoiding the latter?
There needs to be a shift away from tech companies focusing on corporate profit, and a refocusing of development around humanistic values. Under present circumstances, however, that is merely wishful thinking. Even if companies have the best intentions, they would still be pressured by relentless market forces into developing and releasing unvetted AI services and products as quickly as possible. Unless those forces can be mitigated, it is unrealistic to expect companies to exercise greater self-restraint.
For this reason, it is important that governments step in by creating and implementing robust and comprehensive policies towards AI. This doesn’t solely mean developing new laws, regulations and bodies to ensure that tech companies comply with ethical and legal standards; it also means investing enormous amounts of funding into the AI sector. This infusion of capital would remove the pressure to rush products onto the market and chase profits as quickly as possible.
That said, governments also need to work with other countries to form a centralised body guided by humanistic values. This international body would bring together a broad array of specialists and leaders: not solely politicians and AI experts, but also economists, political scientists, sociologists and futurists.
If such an international body existed, companies would be able to share knowledge amongst one another and build on each other’s advancements. The resulting progress and prosperity would encourage rival nations to join so that they aren’t left behind, but in order to join, they would need to comply with the humanistic principles outlined at its inception.
Thus nations would unite, and humanity would be able to enjoy the enormous benefits of AI without being exposed to the downsides.