NikolaNews

AI Systems Evolving Rapidly Across Nations, Governments, Industries, Organizations and Academia (NGIOA)

March 5, 2019
in Artificial Intelligence

Credit: AI Trends



By Jayshree Pandya, Founder and CEO of Risk Group LLC

This is an age of artificial intelligence (AI)-driven automation and autonomous machines. The increasing ubiquity and rapidly expanding potential of self-improving, self-replicating, autonomous intelligent machines have spurred a massive automation-driven transformation of human ecosystems in cyberspace, geospace and space (CGS). Across nations, there is already a growing trend toward entrusting complex decision processes to these rapidly evolving AI systems. From granting parole to diagnosing diseases, college admissions to job interviews, managing trades to granting credit, autonomous vehicles to autonomous weapons, rapidly evolving AI systems are increasingly being adopted by individuals and entities across nations: their governments, industries, organizations and academia (NGIOA).

Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC

Individually and collectively, the promise and perils of these evolving AI systems raise serious concerns about accuracy, fairness, transparency, trust, ethics, privacy and security for the future of humanity, prompting calls for regulation of artificial intelligence design, development and deployment.

Fear of disruptive technology, and the resulting calls for governments to regulate new technologies responsibly, are nothing new. Regulating a technology like artificial intelligence, however, is an entirely different kind of challenge: while AI can be transparent, transformative, democratized and easily distributed, it also touches every sector of the global economy and can even put the security of the entire future of humanity at risk. There is no doubt that artificial intelligence has the potential to be misused, or that it can behave in unpredictable and harmful ways toward humanity, so much so that entire human civilization could be at risk.

While there has been some much-needed focus on the role of ethics, privacy and morals in this debate, security, which is equally significant, is often completely ignored. That brings us to an important question: are ethics and privacy guidelines enough to regulate AI? We need not only to make AI transparent, accountable and fair, but also to focus on its security risks.

Security Risks

As seen across nations, security risks are largely ignored in the AI regulation debate. It needs to be understood that any AI system, be it a robot, a program running on a single computer, a program running on networked computers, or any other set of components that hosts an AI, carries security risks with it.

So, what are these security risks and vulnerabilities? It starts with the initial design and development. If the initial design allows or encourages the AI to alter its objectives based on its exposure and learning, those alterations will at first occur in accordance with the dictates of that design. But an AI that becomes self-improving will also start changing its own code, may at some point change its hardware as well, and could self-replicate. When we evaluate all these possible scenarios, at some point humans will likely lose control of the code and of any instructions embedded in it. That brings us to an important question: how will we regulate AI when humans will likely lose control of its development and deployment cycle?
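
The core of this concern can be shown with a toy illustration (not a sketch of any real AI system): a program whose behavior is ordinary source code that the running program itself can rewrite. Once rewritten, nothing in the original source constrains what the replacement does.

```python
# Toy illustration only: a program that rewrites its own "policy" code
# at runtime. The original design's guarantees end at the first rewrite.

ORIGINAL_SOURCE = "def policy(x):\n    return x + 1\n"

def load_policy(src: str):
    """Compile and execute policy source, returning the policy function."""
    namespace = {}
    exec(src, namespace)
    return namespace["policy"]

def self_modify(src: str) -> str:
    """The program edits its own behavior (here, a trivial substitution)."""
    return src.replace("x + 1", "x * 2")

policy = load_policy(ORIGINAL_SOURCE)          # behaves as designed
mutated_source = self_modify(ORIGINAL_SOURCE)  # the design no longer applies
policy = load_policy(mutated_source)           # new, unreviewed behavior
```

The point of the sketch is regulatory: any rule written against `ORIGINAL_SOURCE` says nothing about what the system does after the rewrite.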

As we evaluate the security risks that have originated from disruptive and dangerous technologies over the years, each such technology required substantial infrastructure investment. That made the regulatory process fairly simple: follow the large investments to learn who is building what. The information age and technologies like artificial intelligence, however, have fundamentally shaken that foundation of regulatory principles and control. This is mainly because determining the who, where and what of artificial intelligence security risks is impossible: anyone, anywhere, with a reasonably current personal computer (or even a smartphone or any smart device) and an internet connection can now contribute to artificial intelligence projects and initiatives. In addition, the security vulnerabilities of cyberspace translate directly to any AI system, as both its software and its hardware are vulnerable to breaches.

Moreover, the sheer number of individuals and entities across nations that may participate in the design, development and deployment of any AI system's components will make it difficult to assign responsibility and accountability for the entire system if anything goes wrong.

Now, with many artificial intelligence development projects going open source and with the rise in the number of open-source machine learning libraries, anyone from anywhere can modify such libraries or their code, and there is no way to know in a timely manner who made those changes or what their security impact would be. So the question is: when individuals and entities can participate in an AI collaborative project from anywhere in the world, how can security risks be identified and proactively managed from a regulatory perspective?
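
One concrete, if partial, mitigation at the library level is supply-chain integrity checking: pin a cryptographic digest of each dependency when it is reviewed, then refuse any artifact that no longer matches. A minimal Python sketch (the file paths and digests here are hypothetical, and this catches tampering only after a trusted digest exists):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Compare a downloaded artifact against a digest pinned at review time."""
    return sha256_of(path) == expected_digest
```

Package tooling offers the same idea natively; pip, for example, supports a hash-checking mode in which each requirement carries a pinned `--hash=sha256:...` value. None of this answers who made a change, only whether the artifact still matches what was reviewed.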

Jayshree Pandya is Founder of Risk Group, host of the Risk Roundup podcast, author of the book The Global Age, and a strategic security advisor.

Read the source article in Forbes.

Credit: AI Trends. By: John Desmond.

© 2019 NikolaNews.com - Global Tech Updates
