Cognitive computing — will computers ever amaze us again in any way?

February 27, 2020
in Neural Networks

As users, we have grown to take technology for granted. Hardly anything these days is as commonplace and unremarkable as a personal computer that crunches numbers and enables us to read files and access the Internet. Will computers ever amaze us again in any way? Some potential for amazement may lie in cognitive computing — a set of capabilities widely considered to be the most vital manifestation of artificial intelligence.

Back during my university days, and later at the outset of my professional career, I wrote software. I earned my first paycheck as a programmer. I often stayed up late and even pulled all-nighters correcting endless code errors. There were times when the code I wrote finally began to do just what I wanted it to, serving its intended purpose. In time, such moments became more and more frequent. I often wondered whether programmers would ever be replaced. But how, and with what? The science fiction literature I was into abounded with stories about robots, artificial intelligence and self-learning technologies that overstepped their boundaries and began to act against their rules, procedures and algorithms. Such technologies managed to learn from their mistakes and accumulate experience. It was all science fiction then. A computer program that did anything other than the tasks assigned to it by its programmer? What a fantasy. But then I came across other concepts, such as self-learning machines and neural networks.


As it turns out, a computer program may amass experience and apply it to modify its behavior. In effect, machines learn from experience that is either gained directly by themselves or implanted into their memories. I learned about algorithms that emulate the human brain and self-modify in search of optimal solutions to given problems. I learned about cognitive computing, and it is my reflections on this topic that I would like to share in this article.

All the existing definitions of cognitive computing share a few common features. Generally speaking, the term refers to a collection of technologies that result largely from studies of how the human brain works. It describes a marriage of sorts between artificial intelligence and signal processing, both of which are key to the development of machine consciousness. Together they enable advanced capabilities such as self-learning, reasoning by machines that draw their own conclusions, natural language processing, speech production, interaction with humans and much more. All of these are aspects of collaboration between man and machine. Briefly put, cognitive computing refers to technology that mimics the way the human brain processes information and enhances human decision-making.

Cognitive computing emulates human thinking. It augments the devices that use it while empowering the users themselves. Cognitive machines can actively understand natural language and respond to information extracted from natural language interactions. They can also recognize objects, including human faces. Their sophistication is unmatched by any product ever made in the history of mankind.

In essence, cognitive computing is a set of features and properties that make machines ever more intelligent and, by the same token, more people-friendly. Cognitive computing can be viewed as a technological game changer and a new, subtle way to connect people and the machines they operate. While it is neither emotional nor spiritual, the connection is certainly more than a mere relationship between subject and object.

Owing to this quality, computer assistants such as Siri (from Apple) are bound to gradually become more human-like. The effort to develop such features will focus on the biggest challenge of all faced by computer technology developers. This is to make machines understand humans accurately, i.e. comprehend not only the questions people ask but also their underlying intentions and the meaningful hints coming from users who are dealing with given problems. In other words, machines should account for the conceptual and social context of human actions. An example? A simple question about the time of day put to a computer assistant may soon be met with a matter-of-fact response followed up by a genuine suggestion: “It is 1:30pm. How about a break and a snack? What do you say, Norbert?”
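
To make the idea concrete, here is a minimal, purely hypothetical sketch of such context-aware behavior: the assistant answers the literal question (the time) and then uses one simple piece of context — how long the user has gone without a break — to add a suggestion. The function name, the three-hour rule and the wording are illustrative assumptions, not any real assistant's API.

```python
from datetime import datetime, timedelta

def answer_time_query(user_name: str, last_break: datetime) -> str:
    """Answer the literal question, then add a context-aware suggestion."""
    now = datetime.now()
    reply = f"It is {now.strftime('%H:%M')}."
    # Hypothetical context rule: suggest a snack if the last break was over 3 hours ago.
    if now - last_break > timedelta(hours=3):
        reply += f" How about a break and a snack? What do you say, {user_name}?"
    return reply

# Example: a user who has been working for four hours straight.
print(answer_time_query("Norbert", last_break=datetime.now() - timedelta(hours=4)))
```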

I’d like to stop here for a moment and refer the reader to my previous machine learning article. In it, I said that machine learning technology enables computers to learn, and therefore analyze data more effectively. Machine learning adds to a computer’s overall “experience”, which it accumulates by performing tasks. For instance, IBM’s Watson, the computer I have mentioned on numerous occasions, understands natural language questions. To answer them, it searches through huge databases of various kinds, be they business, mathematical or medical. With every successive question (task), the computer hones its skills. The more data it absorbs and the more tasks it is given, the greater its analytical and cognitive abilities become.
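
As a small illustration of that last point (a generic model, not Watson), the sketch below trains a classifier incrementally and prints its test accuracy after each batch; as a rule, accuracy climbs as the model "sees" more data. It assumes scikit-learn is installed and uses a toy handwritten-digits dataset.

```python
# A minimal sketch of "experience accumulating with data", using a generic
# scikit-learn classifier trained incrementally on a toy digits dataset.
# This is illustrative only; it is not how Watson works internally.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)
classes = np.unique(y_train)

# Feed the data in small batches: each batch is another "task" the model
# learns from, and test accuracy tends to climb as experience accumulates.
batch_size = 200
for start in range(0, len(X_train), batch_size):
    model.partial_fit(X_train[start:start + batch_size],
                      y_train[start:start + batch_size],
                      classes=classes)
    seen = min(start + batch_size, len(X_train))
    print(f"after {seen} samples, test accuracy = {model.score(X_test, y_test):.2f}")
```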

Machine learning is already a sophisticated, albeit still quite basic, machine skill with parallels to the human brain. It allows self-improvement of sorts based on experience. However, it is not until cognitive computing enters the picture that users can truly enjoy interactions with a technology that is practically intelligent. The machine not only provides access to structured information but also autonomously writes algorithms and suggests solutions to problems. A doctor, for instance, may expect IBM’s Watson not only to sift through billions of pieces of information (Big Data) and draw correct conclusions from it, but also to offer ideas for resolving the problem at hand.

At this point, I would like to provide an example from daily experience. An onboard automobile navigation system relies on massive amounts of topographic data, which it analyzes to generate a map. The map is then displayed, complete with a route from the requested point A to point B, with proper account taken of the user’s travel preferences and prior route selections. This relies on machine learning. However, it is not until the onboard system suggests a specific route that avoids heavy traffic and incorporates our habits that it begins to approximate cognitive computing.
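
A toy sketch of that final step might look like the snippet below: candidate routes are scored not only by distance but also by live traffic and the driver's habits. Every name, weight and number here is an illustrative assumption, not a real navigation API.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    distance_km: float
    traffic_delay_min: float  # estimated delay from live traffic data
    habit_score: float        # 0..1, how often the driver chose similar routes

def pick_route(candidates: list[Route]) -> Route:
    """Return the route with the lowest weighted cost.

    The weights below are illustrative assumptions: congestion is penalised
    heavily, while familiarity (a high habit_score) lowers the cost a little.
    """
    def cost(route: Route) -> float:
        return route.distance_km + 2.0 * route.traffic_delay_min - 5.0 * route.habit_score

    return min(candidates, key=cost)

routes = [
    Route("motorway", distance_km=42.0, traffic_delay_min=25.0, habit_score=0.9),
    Route("back roads", distance_km=48.0, traffic_delay_min=2.0, habit_score=0.3),
]
print(pick_route(routes).name)  # prints "back roads": it avoids the congested motorway
```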

All this is fine, but where did today’s engineers get the idea that computers should do more than crunch numbers at a rapid pace? The head of IBM’s Almaden Research Center, Jeffrey Welser, who has spent close to five decades developing artificial intelligence, offered this simple answer: “The human mind cannot crunch numbers very well, but it does other things well, like playing games, strategy, understanding riddles and natural language, and recognizing faces. So we looked at how we could get computers to do that”.

Efforts to use algorithms and self-learning to develop a machine that would help humans make decisions have produced a spectacular effect. In designing Watson, IBM significantly raised the bar for the world of technology.

The study of the human brain, which has become a springboard for advancing information technology, will — without a doubt — have broader implications in our lives, affecting the realms of business, safety, security, marketing, science, medicine and industry. “Seeing” computers that understand natural language and recognize objects can help everyone, from regular school teachers to scientists searching for a cure for cancer. In the world of business, the technology should — in time — help use human resources more efficiently, find better ways to acquire new competencies and ultimately loosen the rigid corporate rules that result from adhering to traditional management models. In medicine, much has already been written on doctors’ hopes associated with the excellent analytical tool — IBM’s Watson. In health care, Watson will go through a patient’s medical history in an instant, help diagnose health conditions and enable doctors to instantly access information that could previously not be retrieved within the required time horizon. This may become a major breakthrough in diagnosing and treating diseases that cannot yet be cured.

Watson has attracted considerable interest from the oncology community, whose members have high hopes for the computer’s ability to rapidly search through giant cancer databases (which is crucial in cancer treatment) and provide important hints to doctors.

Combined with quantum computing, such systems will become robust tools for solving complex technological problems. Even today, marketing experts recognize the value of cognitive computing systems, which are playing an increasingly central role in automation, customer relationships and service personalization. Every area of human activity in which data processing, strategic planning and modeling are of importance will eventually benefit from these technological breakthroughs.

Some people go as far as to claim that cognitive computing will usher in the third age of IT. Early in the 20th century, computers were seen as mere counting machines. Starting in the 1950s, they began to rely on huge databases. In the 21st century, computers have learned to see, hear and think. Since human thinking is a complex process whose results are often unpredictable, perhaps we could presume that a cognitive union of man and machine will soon lead to developments that are now difficult to foresee.

Machines of the future must change the way people acquire and broaden their knowledge in order to achieve “cognitive” acceleration. However, regardless of what the future may bring, the present day, with its ever more efficient thinking computers, is becoming more and more exciting.

Credit: BecomingHuman. By Norbert Biedrzycki.
