Credit: Google News


Artificial intelligence (AI) is trending globally in commerce, science, health care, geopolitics, and more. Deep learning, a subset of machine learning, is the lever that launched the worldwide rush—an area of strategic interest for researchers, scientists, visionary CEOs, academics, geopolitical think tanks, pioneering entrepreneurs, astute venture capitalists, strategy consultants, and management executives at companies of all sizes. Yet in the midst of this AI renaissance lies a fundamental unsolvable problem with machine learning that is not widely known, nor frequently discussed outside a small cadre of philosophers and artificial intelligence experts.

An international team of researchers recently demonstrated that machine learning has an unsolvable problem, publishing their findings in *Nature Machine Intelligence* in January 2019. Researchers from Princeton University, the University of Waterloo, Technion-IIT, Tel Aviv University, and the Institute of Mathematics of the Academy of Sciences of the Czech Republic proved that AI learnability can neither be proved nor refuted using the standard axioms of mathematics. An axiom, or postulate, is a mathematical statement accepted as true without proof.

To understand why and how the researchers arrived at this conclusion requires a look back to well before the term “artificial intelligence” was even coined, into a field of study entirely different from computer science—the realm of mathematics—and specifically the continuum hypothesis.

In mathematics, the continuum hypothesis is a statement about the possible sizes of infinite sets. A set in mathematics is a collection of objects. Whether sets are infinite (without any limits or bounds) or finite, you do not have to count their individual elements in order to compare them.

For example, to figure out whether a football or soccer team has more jerseys than players, or vice versa, the coach only has to take a brief look to see whether there are leftover jerseys or players missing uniforms. In 1874, German mathematician Georg Cantor applied a similar matching approach to show that the set of real numbers (positive or negative values that represent a quantity along a number line) is larger than the set of natural numbers (positive whole numbers that may or may not include zero, depending on the convention used).
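The coach's trick can be sketched in a few lines of code. This is a minimal illustration, not anything from the study: it pairs elements from two collections one by one and reports which side has leftovers, without ever counting either side. The function name and collections are hypothetical.

```python
_SENTINEL = object()  # marker for "this collection ran out"

def compare_by_pairing(first, second):
    """Walk both collections in lockstep, pairing one element from each.
    Whichever runs out of elements first is the smaller collection --
    no counting required, just matching."""
    it_first, it_second = iter(first), iter(second)
    while True:
        x = next(it_first, _SENTINEL)
        y = next(it_second, _SENTINEL)
        if x is _SENTINEL and y is _SENTINEL:
            return "same size"          # every element found a partner
        if x is _SENTINEL:
            return "second is larger"   # leftover elements in `second`
        if y is _SENTINEL:
            return "first is larger"    # leftover elements in `first`

# Three jerseys, two players: leftover jerseys reveal the answer.
print(compare_by_pairing(["J1", "J2", "J3"], ["Ana", "Ben"]))
# first is larger
```

Cantor's insight was that this same pairing idea still works when the collections are infinite: two sets have the same size exactly when their elements can be matched one-to-one.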

Around 1878, Cantor became the first to posit that there is no infinite set whose cardinal number (a number that represents quantity rather than position in a list) lies strictly between those of the integers and the real numbers (the continuum). In effect, Cantor had shown that the continuum is not countable—the real numbers form a larger infinity than the counting numbers. This discovery initiated the field of set theory in mathematics.
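The core of Cantor's uncountability result is the diagonal argument, which can be illustrated on a finite scale. The sketch below (an illustration only; the digit strings are arbitrary) takes any list of decimal expansions and builds a new expansion that differs from the k-th listed number in its k-th digit, so it cannot equal any entry in the list—no matter how the list was chosen.

```python
def diagonal_escape(listed):
    """Given a list of digit strings (each at least len(listed) digits long),
    return a digit string that differs from the k-th entry in its k-th digit,
    and therefore appears nowhere in the list."""
    new_digits = []
    for k, digits in enumerate(listed):
        d = int(digits[k])
        new_digits.append(str((d + 1) % 10))  # change the diagonal digit
    return "".join(new_digits)

listing = ["141592", "718281", "414213", "302585", "577215", "693147"]
escaped = diagonal_escape(listing)
# The new expansion disagrees with every listed entry on the diagonal.
assert all(escaped[k] != listing[k][k] for k in range(len(listing)))
print(escaped)
```

Applied to an imagined complete listing of all real numbers, the same construction produces a real number missing from the listing, proving no such complete listing can exist.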

In 1900, German mathematician David Hilbert (1862–1943) presented a list of unsolved math problems at the International Congress of Mathematicians in Paris, of which “Cantor’s problem of the cardinal number of the continuum” was first on the list.

The problem remained unsolved for nearly four decades, until mathematician Kurt Gödel demonstrated that the negation of the continuum hypothesis could not be proved in standard set theory. Gödel was born in 1906 in Brünn, Austria-Hungary (now Brno, Czech Republic). He was a proponent of mathematical Platonism and viewed math as a descriptive science. Gödel and Albert Einstein were friends and took daily walks while both were at the Institute for Advanced Study, an independent postdoctoral research center in Princeton, New Jersey—one of the world’s leading centers for the curiosity-driven pursuit of knowledge, counting over 33 Nobel Laureates, 42 Fields Medalists, 17 Abel Prize Laureates, and many MacArthur Fellows and Wolf Prize recipients among its faculty and members.

“Gödel was the first man to demonstrate that certain mathematical theorems can neither be proved nor disproved with the accepted, rigorous method of mathematics … Gödel actually proved this theorem, not with respect to mathematics only, but for all systems which permit a formalization, that is a rigorous and exhaustive description, in terms of modern logic: For no such system can its freedom from inner contradictions be demonstrated with the means of the system itself.” – John von Neumann, Mathematician, Physicist, Computer Scientist

Gödel demonstrated that if the continuum hypothesis was added to the axiomatic system of Zermelo-Fraenkel set theory with the axiom of choice (ZFC), no contradiction would result. It was not until the early 1960s that Gödel’s work on the continuum hypothesis was completed, when American mathematician Paul Cohen demonstrated that the nonexistence of an intermediate-sized set is also unprovable. Cohen (1934–2007) was the recipient of the 1967 National Medal of Science, the 1966 Fields Medal for logic, and the 1964 American Mathematical Society Bôcher Prize for analysis. Using the set-theoretic technique of forcing, Cohen showed that if the negation of the continuum hypothesis was added to ZFC, no contradiction would result either.

Thus, together the work of Gödel and Cohen established that the continuum hypothesis is undecidable: it can be neither proved nor refuted from the standard axioms, so its validity depends on which extension of set theory is used.

Fast forward to the present day, where the researchers constructed a proof based on “the fact the continuum hypothesis cannot be proved nor refuted,” and demonstrated, at least in certain cases, that “a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.”

A computer algorithm—a well-defined set of instructions that enables a computer to solve a problem—is based on logic, a form of reasoning. Artificial intelligence algorithms use principles from mathematics and statistics to enable machines to perform without explicit programming, also known as “hard coding.” For the study, the researchers focused on a learning problem called “estimating the maximum” (EMX). Using the EMX model, the team discovered that regardless of the mathematical method used, there is no guarantee either way as to whether artificial intelligence can manage the task. The team posits that a machine’s ability to learn (learnability) is limited by mathematics that cannot be proved or disproved.
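To give a concrete feel for the flavor of the EMX task, here is a heavily simplified, finite sketch: from samples drawn from an unknown distribution, pick the candidate set in a given family whose probability mass appears largest. The family, the distribution, and the function name are illustrative assumptions for this article only—the paper's actual construction involves infinite families of sets, which is where learnability becomes undecidable.

```python
import random

def emx_pick(family, samples):
    """Return the candidate set that covers the most sampled points,
    i.e., the set whose probability mass looks largest empirically."""
    return max(family, key=lambda s: sum(x in s for x in samples))

# Samples from a distribution the learner does not know in advance.
random.seed(0)
samples = [random.randint(1, 10) for _ in range(1000)]

# A (finite, toy) family of candidate sets to choose among.
family = [set(range(1, 4)),   # {1, 2, 3}
          set(range(1, 6)),   # {1, 2, 3, 4, 5}
          set(range(4, 8))]   # {4, 5, 6, 7}

print(sorted(emx_pick(family, samples)))
```

With a finite family like this, picking the empirically best set works fine; the paper's result is that for certain infinite families, whether such a learner exists at all is equivalent to the continuum hypothesis—and hence unprovable either way.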

That is how an erudite concept of infinite sets and mathematical conjectures from the 1800s and 1900s has modern-day relevance, and may impact the future of machine learning in this century and beyond.

*Copyright © 2019 Cami Rosso All rights reserved.*
