In 1967, Marshall McLuhan said: “We shape our tools and, thereafter, our tools shape us.” Except that it wasn’t really McLuhan who said it. It was his friend Father John Culkin, a professor of communication at Fordham University. The misattribution is easy to make because the line fits so well with McLuhan’s idea that tools (media, technology) are extensions of ourselves. His go-to example was a blind person’s cane, which extends the person’s sense of touch.
This way of thinking about tools was surprising at the time—less so now that McLuhan has turned out to be our own cane for sensing what was ahead of us—because we thought of tools as things we used, not as parts of who we are.
When tools work well
Martin Heidegger, the German philosopher, had come to a similar conclusion 40 years earlier. In his book Being and Time, he argued that the things of the world primarily show themselves to us as available for our use in whatever project we have in mind. If we’re about to cut a tomato on our plate, we don’t see our knife as a simple metal object of a particular shape; the knife shows itself to us as something good for cutting tomatoes. In fact, said Heidegger, when a tool is working well, it becomes invisible to us: We don’t notice the knife unless its edge is too dull or it balances badly in our hands. In that invisibility, a tool operates as an extension of us.
His observation that we become aware of tools only when they break is astute, but it overlooks the fact that they also become visible to us when they work exceptionally well. When a knife is so sharp that it cuts a tomato into translucent slices, you notice and admire the knife. There is a noticeable joy in using a good tool.
I have at times heard that joy break out among developers of machine learning systems.
Sometimes, in my experience, they’re laughing because a system under development is producing such obviously wrong results, the way we all laugh at the foibles of computers. They may also laugh when they realize why the AI arrived at those wrong results, often a consequence of the machine’s insane literalism as it operates on the data it’s been given.