From the Editor’s Desk
One of the key concepts in data science (and indeed in physics) is the nature of noise and randomness. The two are connected, though they are not the same thing. Randomness is an abstract concept that can be described in any number of ways, but a useful way of thinking about it is this: when two normalized variables are completely uncorrelated with one another, a phase space diagram of the pair will ultimately fill the unit square.
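To make that concrete, here is a minimal sketch in Python with NumPy (the grid size and sample count are arbitrary choices for illustration, not anything prescribed here) that draws two independent, normalized variables and checks how much of the unit square their phase space diagram covers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent (hence uncorrelated) variables, normalized to [0, 1].
x = rng.random(100_000)
y = rng.random(100_000)

# Divide the unit square into a grid and count how many cells the
# (x, y) points eventually occupy; independence means essentially all of them.
bins = 50
hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
occupied = np.count_nonzero(hist)

print(f"Occupied {occupied} of {bins * bins} cells "
      f"({occupied / bins**2:.1%} of the unit square)")
print(f"Sample correlation: {np.corrcoef(x, y)[0, 1]:+.4f}")
```

With enough samples, effectively every cell is hit and the measured correlation hovers near zero, which is the sense in which uncorrelated variables "fill" the square.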
True randomness is surprisingly rare. Noise, on the other hand, is not. Noise is seldom truly random; instead, it can be thought of as the signals of the universe that are not of interest to you. Sometimes that noise can be treated as random (television static is a good example, as that signal may be anything from the cosmic background radiation to interactions with the Earth’s electromagnetic fields to … who knows, ghosts in the machine, so to speak). A surprising amount of such noise in fact comes from the recursive fractals that seem to fill nature, and may be artifacts of any sufficiently complex system. There is a growing body of evidence to suggest that contexts are themselves fractal, which is perhaps why AI keeps bumping up against fractal walls.
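As one concrete instance of such fractal-like noise, the sketch below generates 1/f ("pink") noise, a self-similar signal that turns up throughout nature; the spectral-shaping approach and every parameter here are illustrative assumptions on my part, not a method from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# Start from white noise, then shape its spectrum so that power
# falls off as 1/f -- the self-similar "pink" noise found in nature.
spectrum = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n)

scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])  # skip f=0 to avoid division by zero

pink = np.fft.irfft(spectrum * scale, n)

# Verify the 1/f power law: the log-log slope of the power spectrum
# should sit near -1, the signature of pink noise at every scale.
power = np.abs(np.fft.rfft(pink)) ** 2
slope = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)[0]
print(f"Spectral slope: {slope:.2f} (pink noise is close to -1)")
```

Because the power law holds across the whole frequency range, zooming in on any stretch of the signal looks statistically like the whole, which is the self-similarity that makes such noise fractal.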
One of the roles of most data scientists and data analysts is to extract the signal from the noise, to separate what is relevant from what is not. Neural networks, when you get right down to it, exist primarily to boost signals: to find and enhance the signals within the noise, and then to use them to predict or verify behavior based upon known signals.
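That signal-boosting idea can be shown in miniature without a neural network at all. The following hedged sketch (pure NumPy, with an arbitrary sine wave standing in for the "signal of interest") averages repeated noisy trials so that uncorrelated noise cancels while the repeated signal survives:

```python
import numpy as np

rng = np.random.default_rng(7)

# A known "signal of interest" buried in noise across repeated trials.
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)

trials = 200
noisy = signal + rng.normal(scale=2.0, size=(trials, len(t)))

# Averaging across trials boosts the signal relative to the noise:
# uncorrelated noise cancels while the repeated signal does not
# (SNR improves roughly with the square root of the trial count).
recovered = noisy.mean(axis=0)

def snr_db(estimate):
    """Signal-to-noise ratio of an estimate, in decibels."""
    noise_power = np.mean((estimate - signal) ** 2)
    return 10 * np.log10(np.mean(signal ** 2) / noise_power)

print(f"Single-trial SNR: {snr_db(noisy[0]):+.1f} dB")
print(f"Averaged SNR:     {snr_db(recovered):+.1f} dB")
```

Neural networks go far beyond simple averaging, of course, but the goal is the same: amplify what repeats, suppress what does not.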
Announcements
DSC Featured Articles
Credit: Data Science Central. By: Kurt Cagle