A brief interview with leading AI expert Joanna Bryson on consciousness, diversity, and inequality.
“Many people have learned the value of diversity, and it is really important. You need a lot of ideas to solve a lot of problems.”
Joanna is an associate professor of AI at the University of Bath and an affiliate at Princeton University. She recently joined the new Google AI ethics advisory council. Her background spans natural intelligence, culture, religion, and the design of intelligent systems.
I was very lucky to talk to Joanna for an hour to explore more of her in-depth experiences and viewpoints.
A lot of your research is on culture and religion, which are representations of systems. A culture, a country, or even the world could be seen as a collective intelligence. However, there is no global consciousness, and that may be why many things are going wrong: there is no connecting element putting all the bits and pieces together in a way that is positive for all of us. This also relates to the problem of AI safety. Could you please elaborate on that?
I would agree with the first half of your characterization. I don’t think I ever said that the problem is that we lack a global consciousness, although I do say that as we increase our global capacity to perceive, we are more likely to handle global problems like climate change, which we literally couldn’t detect before. Now anybody can see those maps showing how much warmer the world is, and so on.
I would not say that a single global consciousness is necessarily the right kind of metaphor. Even within people, we know that you can have strokes affecting quite a lot of the brain and a compromised consciousness, yet you do not entirely lose all awareness or capacity to communicate, depending on what kind of stroke you have.
I’ve actually been working a lot recently on economic inequality. What I am about to say is speculation; it is not published yet, in fact it is not even written up. We do have a paper we are trying to get published about how inequality correlates with political polarization. I used to try to understand World War I when I was a kid. For all the other wars we wrote about in our history lessons, you could say exactly what happened, e.g. with the American Civil War or the Second World War…
In America, we did get taught a lot about wars. In that set of wars, the one outlier was World War I. It made no sense, and the books even said that nobody knows; it seems like maybe something about a big chain of events. But now, as I’m studying inequality, I think I’m understanding it better, because that was also a period of immense inequality, like we have now.
What we’re seeing is that individuals are able to make too much difference. For example, one funder with a particular set of interests can have so much money that they can fund politicians; and even if it is not one funder, it could be a very small number of funders who are able to get together in a coalition. They can give a huge amount of money to a politician and push them toward extremist positions.
I think having a small number of actors endangers you. When some individuals are able to go that far out, they can make random, arbitrary decisions that turn out to be a bad idea… You can imagine who in the current headlines is making me think about this: someone who maybe has not had enough sleep or something and did some erratic things. One person shouldn’t have that much power, and that is almost a definition of the case for government redistribution.
Though you do not want it to be totally equal; Communism and Socialism were not actually the best thing. It turns out there is something called the Gini coefficient, which ranges between zero and one, and just empirically, we do not have a good theory for this, the best level seems to be around 0.27.
The Gini index or Gini coefficient is a statistical measure of distribution developed by the Italian statistician Corrado Gini in 1912. It is often used as a gauge of economic inequality, measuring income distribution or, less commonly, wealth distribution among a population.
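To make the measure concrete, here is a minimal sketch of computing the Gini coefficient from a list of incomes, using the mean-absolute-difference form of the definition. The `gini` helper is illustrative, not from the interview; it assumes non-negative incomes.

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference.

    0.0 means perfect equality; values approaching 1.0 mean
    extreme inequality. Assumes incomes are non-negative.
    """
    n = len(incomes)
    total = sum(incomes)
    if n == 0 or total == 0:
        return 0.0
    # Sum of absolute differences over all ordered pairs,
    # normalized by 2 * n * (total income).
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * total)

print(gini([1, 1, 1, 1]))    # perfectly equal -> 0.0
print(gini([0, 0, 0, 100]))  # one person holds everything -> 0.75
```

A perfectly equal distribution scores 0.0, while concentrating all income in one person pushes the score toward 1.0, which is what makes an empirical "sweet spot" like 0.27 meaningful to talk about.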
You want enough money going to the people who are making innovations, who are working harder, or who are just lucky enough to be in the right place at the right time to do things that benefit other people. You want them to be able to get more money, but not too much money. Getting that balance right is more or less the history of politics and governance, and we did not totally understand it before.
Coming back to that, it is not clear to me that having one consciousness at the top of the whole planet is the right metaphor. You might have a whole lot of well-networked parties or some other way of maintaining the planet, and then someone from outside the planet might describe that as a single consciousness, but from inside, from our perspective, just like from the inside of the brain, it’s not going to look like one thing.
I do not think that we need one superintelligent machine or something like that. Why would that be any different from one powerful individual? I think it’s more important to have an architecture. I think it’s actually really important that there are humans at the core who are accountable, but humans enhanced with the AI we are able to use, so that our decision making is improved.
Let us talk about consciousness. You say it is a tool for intelligence; consciousness and intelligence are often confused to a certain degree. In one of your papers, you say it can be a useful tool for intelligence…
The paper is called “A Role for Consciousness in Action Selection.”
There are so many different meanings of the word consciousness, but let us just look at that feeling we have when we report being conscious. At what point do we report remembering that things happened? That seems to be when we are building new memories of a particular kind. Again, it has to do with planning, navigation, and structuring things that can’t just be remembered in a flash. You cannot report how you recognize a picture or something; you have no idea, because that was done by a parallel, concurrent part of your brain. But the part that deals with “how do I get from here to there” or “how do I take steps”, that seems to be the part that, when we are learning new models for it, makes us say that we are conscious.
And yeah, that could be useful. If you had a robot system that also needed to make those kinds of plans and it had the resources, of course with constraints so it could not just be doing that about everything all the time, which none of us can, then it would make sense to focus that learning on the kinds of things it is doing right now.
We tend to learn about stuff that is in front of us but that we are not sure how to do. If we are sure how to do it, we do not remember it; we are not conscious of it. It is just a skill at that point, just a reflex. But where we are uncertain, or something unusual happens that suddenly grabs our attention, we are conscious of it. You could have such a component in an AI system, and it might be useful depending on whether you want the system to learn for itself.
To make it more accountable, we would train up the components separately, and they would only update things in the name of the person they are working for. You do not have to have it constructing radical new plans.
What are the areas that amaze you the most about AI?
I remember the thing I was most excited about recently, and this is now a few years ago: real-time translation. I just thought it would be so amazing if we could both be talking in our native languages. For a lot of people, it would be so much better if they were not forced to re-translate their ideas into a language in which they cannot be quite as poetic. Of course, the translation would never reach the same level of poetry, but at least it would let everybody be fluent and talk and conceptualize in their own language. And then, great, if they learn the other language, they can do even better. But that’s just at an emotional level, I guess…
At the same time, it would be interesting to see if you will also learn about the culture because when learning a new language, you automatically grasp a lot about the culture. Language shapes how you think, and if you can facilitate this with technology, this could reduce conflicts between people and cultures.
That is an interesting question. Let us go back to the heterogeneous architecture idea from before. Maybe you do not want everything to be the same, and there is another possibility, which is the opposite of what you just said: you might get a veneer of understanding that actually leaves you capable of more heterogeneity, because you are able to do the important negotiations and get those done, but then you allow people to keep their relatively different views about the world.
Many people have learned the value of diversity, and it is really important. You need a lot of ideas to solve a lot of problems.
At the same time, nature maintains diversity. In fact, it increases diversity in some ways, but it also selects away the stuff that is not doing work for you, and some people are resistant to selecting anything away; they want to keep revisiting the same questions over and over. I think I got slightly lost from your question now.
I think that’s totally fine. Those are some very good final words. Thank you very much for your time.
Found this article useful?
- Find an in-depth interview with Joanna on AI safety and ethics here.