Technology is rapidly bringing us into a new epoch in human life. With this change, however, come difficult societal, moral and economic questions.
Let’s consider the difference between two majorly successful businesses: Kodak and Instagram. When Kodak was in its prime, it provided jobs for 145,000 people. In contrast, when Facebook acquired Instagram in 2012 for $1 billion, Instagram had just 13 employees in total. Instagram is not alone either: in 2014, when Facebook acquired WhatsApp for $19 billion, the company had 400 million users but only 55 employees.
As a result, it is possible that the lessons of the past will no longer apply in the future. The current economic revolution we are experiencing could lead to far more Instagrams and far fewer Kodaks, resulting in a growing gulf between a tiny but super-rich elite and an increasingly jobless middle class.
To avoid such issues, we need to be creative with our ideas. One notion is a universal basic income, whereby the state would provide all citizens with enough money on which to live.
As it stands, artificial intelligence is good at performing individual tasks. Apple’s Siri, for example, can learn to understand your voice and follow commands, and can even translate one language into another.
Yet it is still unable to replicate wider human intelligence effectively. Things that come naturally to us like creativity or intuition remain a difficult prospect for machines.
If you take a piece of hair from your head and look at its width, it seems tiny. But in nano-terms it is massive: a single nanometre is roughly one hundred-thousandth of its width. This gives some context to the sort of scale we are dealing with when looking at nanotechnology.
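To make the scale concrete, here is a quick back-of-the-envelope calculation. Hair width varies from person to person, so the figures below are typical values rather than precise measurements:

```python
# How many nanometres wide is a human hair?
# Hair width varies widely; 20-180 micrometres is a commonly quoted range.
NM_PER_M = 1e9   # nanometres in a metre
M_PER_UM = 1e-6  # metres in a micrometre

for hair_width_um in (20, 100, 180):
    hair_width_nm = hair_width_um * M_PER_UM * NM_PER_M
    print(f"A {hair_width_um} um hair is about {hair_width_nm:,.0f} nm wide")
```

Even the thinnest hair spans tens of thousands of nanometres, which is why single-nanometre structures feel so alien in scale.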
Nanotechnology operates at the scale of molecules or even single atoms. Yet despite this tiny scale, its impact on our health and lives in general will be nothing short of huge.
This is partly because nanoscience exploits the fact that, at the level of single atoms and molecules, materials take on different properties. Amongst other things, this means they can have less weight and greater strength. Carbon nanotubes, microscopic tubes formed of carbon atoms, can be used to generate incredibly strong materials.
A stack of around 100 sheets of these carbon nanotubes — whilst still thinner than a millimetre — has the strength to take a bullet, enabling the creation of an ultra-thin and lightweight bulletproofing material.
Whilst nanotechnology has the potential to protect the outside of our bodies, it shows even greater promise for what it can do inside them. Developments in sensors and computing at the nanoscale mean we will soon be able to fight disease and keep ourselves healthy with the help of nanorobotics.
In the near future, your body could house nanorobots constantly patrolling your circulatory system. Travelling through your bloodstream, they will be able to attack viruses, bacteria and other disease-carrying bodies.
Nanotechnology will also help us to manage long-term conditions. Patients with diabetes, for example, might soon have nanorobots in their bloodstreams, constantly measuring their blood nutrient levels and giving them a boost of the right chemicals at the right time.
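The "measure, then dose" loop described above is a classic feedback controller. A minimal sketch of the idea, with illustrative thresholds loosely based on typical blood glucose reference ranges (this is a toy example, in no way a medical algorithm):

```python
# Toy feedback loop: act on a blood glucose reading.
# Thresholds are illustrative only, roughly based on common
# reference ranges in mmol/L; not medical guidance.
LOW_MMOL_L = 4.0
HIGH_MMOL_L = 7.8

def nanorobot_action(glucose_mmol_l: float) -> str:
    """Decide what a hypothetical nanorobot should do for one reading."""
    if glucose_mmol_l < LOW_MMOL_L:
        return "release glucose"
    if glucose_mmol_l > HIGH_MMOL_L:
        return "release insulin"
    return "keep monitoring"

print(nanorobot_action(3.2))  # release glucose
print(nanorobot_action(5.5))  # keep monitoring
print(nanorobot_action(9.1))  # release insulin
```

Run continuously, such a loop gives "the right chemicals at the right time": each reading is compared against the safe band and triggers a corrective dose only when needed.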
In Indian mythology, the gods sailed the oceans searching for an elixir that granted immortality; the Greeks, too, spoke of an elusive elixir of life. Perhaps advances in nanotechnology will give us the power to control our health. Combined with genetic coding, it offers us the opportunity to almost play God.
Our DNA acts as the programming of who we are, and much as with computers, we can now read, analyse and even manipulate it. Many companies now offer DNA analysis covering every one of the roughly 22,000 genes that make us who we are. Such testing can tell you anything from what percentage Neanderthal you are to your genetic predisposition to Alzheimer’s.
In 2013, Angelina Jolie made the decision to have a double mastectomy. Her decision was supported by a genetic analysis suggesting an 87% probability that she would develop breast cancer within the next 14 years, which allowed her to take preventative action.
The consequences are profound. To date, our evolution has been driven by natural selection. Now, however, ‘human selection’, the ability to dictate our own evolution, lies before us. With it come profound ethical questions with no clear answers. Could a bereaved parent clone their lost child? Should we all evolve to have the running ability of Eliud Kipchoge?
The surveillance around us is increasing daily: we are constantly being recorded by sensors, cameras and connected devices. Scattered across the cyber world is a digital avatar of you. What you like to watch on Netflix, and which shows you never finished. What you like to eat, what you usually buy and at what time. Your political views and whom you converse with.
But no one has the full picture. This is itself an issue: our data is no longer ours alone, and it lies fragmented across different organisations around the world.
Our fragmented selves are beneficial to companies like Facebook, which can make money on the incomplete data they hold on you by selling it to advertisers. However, that incompleteness can also cause problems. After you spend a prolonged period in one place, the adverts that appear are for services in that place; whether you are still there is not accounted for, so the adverts can be irrelevant to you and waste the advertiser’s budget.
A suggested alternative to this is called the ‘Me Model’. This would be a true and complete digital profile of you, with all your data pulled together in one place. The establishments you frequent. The number of steps and exercise you do daily. Your hospital records. Your Google and Netflix histories. All stored in a single system that utilises artificial intelligence to maintain the best possible, most up-to-date version of yourself.
This model offers a number of advantages. Firstly, it belongs to you, giving you complete control over what you share and with whom you share it. Secondly, whilst today many private companies utilise your data for their own financial gain, this model enables you to monetise your own data. Why would an advertiser pay Facebook for an incomplete and fragmented dataset when you could sell them the complete picture?
We are now living in a data-driven economy, and whilst it is important to be conscious of the data we produce, there is already more data out there about you than you realise. It is becoming a greater definition of who you are as a person, so would it not be better to have control of it?
In March 2018, a self-driving Uber struck and killed a woman, making her the first pedestrian fatality caused by an autonomous car. This immediately raised difficult questions: who is to blame in this scenario? The owner, for owning the car and using it for its intended purpose? The manufacturer of the car? The company that programmed the software operating it?
As the ‘smart’ machines around us grow more and more integral to our daily lives, the questions surrounding how we control and regulate them become greater.
How do we prevent robots from being hacked and misused? What about morals? Do we encode them into robots? If so, whose morals?
Isaac Asimov, the renowned writer and professor, proposed three laws for robots in 1942. First, a robot must not injure a human or, through inaction, allow one to come to harm. Second, a robot must obey its orders, except where they would conflict with the first law. Third, a robot must protect itself, as long as that protection does not conflict with either of the first two laws. Whilst these three laws give a solid foundation, they provide no clear guidance for some of the more complex situations a machine could face.
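Asimov’s laws amount to a strict priority ordering: a lower law only applies once every higher one is satisfied. That ordering can be sketched as a toy rule check (purely illustrative; real machine ethics is nothing like this simple, and the boolean flags are hypothetical inputs):

```python
# Toy sketch of Asimov's laws as a strict priority ordering.
# Each flag is a hypothetical judgement the machine has already made.

def action_permitted(harms_human: bool, disobeys_order: bool) -> bool:
    """Allow an action only if no higher-priority law forbids it."""
    if harms_human:       # First Law always wins
        return False
    if disobeys_order:    # Second Law, subordinate to the First
        return False
    return True           # Third Law (self-protection) only constrains
                          # actions that already satisfy the first two

# An ordinary, harmless, obedient action is fine:
print(action_permitted(harms_human=False, disobeys_order=False))  # True

# But when every available action either harms someone or disobeys
# an order, the laws reject all of them and give no way to choose:
print(action_permitted(harms_human=True, disobeys_order=False))   # False
print(action_permitted(harms_human=False, disobeys_order=True))   # False
```

The dead end in the last two lines is exactly the gap the article points to: the laws can veto options, but they cannot rank two bad options against each other.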
Consider again a self-driving car that sees someone step unexpectedly into the street. It is presented with a moral dilemma: does it swerve dangerously to protect the pedestrian but risk its owner’s life, or does it prioritise its owner’s safety at the expense of the pedestrian’s? Does a self-driving vehicle have a duty of loyalty to protect its owner? And if so, should taxis and public transport vehicles behave differently from privately owned cars? Should a vehicle’s behaviour change if the pedestrian is a child or an elderly person?
It will take time to find solutions to these questions. In the meantime, it is worth being nice to machines; who knows, in the long run it may pay off to stay in their good books!