On Sunday, we invited Meredith Broussard @MerBroussard, author of “Artificial Unintelligence: How Computers Misunderstand the World,” to our monthly AI Ethics Twitter Book Chat to discuss the inner workings and outer limits of technology.
Mia Shah-Dand: Welcome and thank you for joining us. Let’s start with the basic premise of your book. Why do humans find it so hard to understand the outer limits of what tech can do?
Meredith Broussard: Thanks for having me! One of the reasons I wrote the book was that I found myself in lots of conversations about imaginary things that computers would someday be able to do. Then I got older and realized I’d been hearing the same empty promises for decades. I want the conversation to move away from imaginary futures and toward realistic futures. Computational literacy will move us toward more productive conversations about technology.
MSD: I am a big fan of the term ‘Technochauvinism’ you’ve used to describe our blind faith in tech. How have you seen this manifest during the pandemic?
MB: Technochauvinism is the belief that technological solutions are superior to others. It is common to assume that tech is the first and best solution — but this is wrong! Tech is one of many possible solutions. During the COVID-19 pandemic, I’ve seen a lot of technochauvinism around contact tracing apps. People put an awful lot of faith in the idea that there will be a magic app that everyone will use, and it will help to keep us safe. Contact tracing is good and important, but an app is not the complete solution to the crisis. We need human contact tracers working with good, stable technology. Think of it as a human-in-the-loop system. It’s also important to remember that the humans making the contact tracing apps are working under the same conditions as the rest of us: stressed, at home, with inadequate childcare. If you’re not doing your best work right now, neither are they. One more thing: compliance matters for COVID-19 contact tracing apps. Singapore only got up to 4% of the population using their contact tracing app. It’s hard to imagine millions of Americans being perfectly compliant about using one app.
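The compliance point has a sharp quantitative edge worth spelling out: an app-based exposure notification only fires when both people in a contact are running the app, so coverage matters roughly quadratically, not linearly. A back-of-envelope sketch (assuming independent adoption, a simplification):

```python
def exposure_detection_rate(adoption: float) -> float:
    """Probability that a random contact between two people is detectable,
    assuming each person independently has the app at the given rate."""
    return adoption ** 2

# Singapore's early ~4% adoption, cited in the conversation above:
low = exposure_detection_rate(0.04)   # only ~0.16% of contacts covered

# Even an optimistic 60% adoption still misses most contacts:
high = exposure_detection_rate(0.60)  # ~36% of contacts covered

print(f"4% adoption covers {low:.2%} of contacts")
print(f"60% adoption covers {high:.2%} of contacts")
```

This is why low adoption is not just a shortfall but a near-total failure mode for app-only contact tracing, and why human tracers remain essential.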
MSD: Data is (as you said) both unreasonably and seductively effective. What would you say to folks who pride themselves on being “data-driven” about the flaws or blind spots in their approach?
MB: I love data! Data-driven decision-making can be extremely effective. It’s important not to give data supreme importance, however. Data is not everything.
MSD: In that same vein, what are the dangers of optimized pricing algorithms in an unequal world?
MB: Optimized pricing algorithms benefit the company that creates the algorithms. Rarely does optimized pricing benefit the consumer. Remember the Wall Street Journal story from 2012 about pricing algorithms? It is one of the important investigative pieces that kicked off the current golden era of algorithmic accountability reporting. Algorithmic accountability reporting is a kind of investigative journalism in which we investigate algorithms and the people who make them. Sometimes, we make our own algorithms to commit these acts of reporting.
Some pioneers in algorithmic accountability reporting are @themarkup @propublica. We also teach algorithmic accountability reporting methods at @nyu_journalism The short answer: computer systems that optimize pricing will inevitably charge poor people more and charge rich people less, because the systems mirror the existing inequality of the world. We can and should change this.
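The mechanism behind “optimized pricing mirrors inequality” can be shown with a toy sketch (hypothetical demand curves and segments, not any real company’s system): a revenue-maximizing pricer that segments customers by a proxy feature, such as ZIP code, charges the captive segment more, because buyers with fewer alternatives are less price-sensitive.

```python
def optimal_price(demand_curve, candidate_prices):
    """Pick the price that maximizes expected revenue for one segment."""
    return max(candidate_prices, key=lambda p: p * demand_curve(p))

candidate_prices = [8, 10, 12, 14]

# Segment A: many nearby competitors, so demand falls quickly as price rises.
demand_competitive = lambda p: max(0.0, 1.5 - 0.1 * p)

# Segment B: few alternatives (often lower-income areas), so demand is
# relatively inelastic.
demand_captive = lambda p: max(0.0, 1.2 - 0.05 * p)

price_a = optimal_price(demand_competitive, candidate_prices)  # lower price
price_b = optimal_price(demand_captive, candidate_prices)      # higher price
print(price_a, price_b)
```

No one told this optimizer to discriminate; charging captive segments more simply maximizes revenue, which is the pattern the 2012 WSJ investigation documented with real retail prices.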
MSD: Why is there such an obsession with human replacement systems (ex: self-driving cars) vs. human assistance systems?
MB: I’m continually surprised by how persistent the fantasy of the self-driving car is. Flying cars, self-driving cars, personal jet packs, dirigibles — these are really old tropes. And: they’ve never worked! In the book, I tell a story of a terrifying, nearly-fatal ride I took in a self-driving car. You rarely hear these kinds of stories — which should make you suspicious. BTW, 2020 was the year that AV folks predicted AVs would be on the road. People often say that computers are better, faster, cheaper. But over the long term, we can see now that technology is just as expensive as humans — sometimes even more so. I think about the farmers in the Right to Repair movement. If you are a farmer who doesn’t have access to the software when your computerized tractor breaks, you may lose your entire crop. This is not better than before!
MSD: Why does the myth that math and computation are “more objective” or “fairer” still persist even though it’s been disproven many times over by experts like yourself and others?
MB: I am delighted that there is now a robust body of work addressing the false belief that math/computation is “fairer” or “more objective.” It’s important to discuss the ways that technochauvinism and supremacist thinking play into this false belief. Some folks I recommend following to continue the conversation: @safiyanoble @ruha9 @cmcilwain @DocDre @histoftech @margaretomara @centerforcrds @mathbabedotorg @JuliaAngwin @katecrawford @alexhanna @mer__edith @ubiquity75 @sjjphd @schock @marylgray @sivavaid @zeynep @dfreelon @sara_ann_marie @whkchun @RaceNYU @PopTechWorks @onekade Plus: consider following @zephoria @BostonJoan @firstdraftnews and all the amazing people on this list: https://lighthouse3.com/our-blog/100-brilliant-women-in-ai-ethics-you-should-follow-in-2019-and-beyond/
Thank you so much for joining us today and sharing your insights.
We’ll be back next month with our June AI Ethics Book Chat, where we’ve invited author and scholar Dr. Safiya Noble @SafiyaNoble to discuss her powerful book “Algorithms of Oppression.”