June 23, 2021



Microsoft’s Kate Crawford: ‘AI Is Neither Artificial Nor Intelligent’


An anonymous reader shares an excerpt from an interview The Guardian conducted with Microsoft’s Kate Crawford. “Kate Crawford studies the social and political implications of artificial intelligence,” writes Zoe Corbyn via The Guardian. “She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what’s at stake as it reshapes our world.” Here’s an excerpt from the interview: What should people know about how AI products are made?
We aren’t used to thinking about these systems in terms of the environmental costs. But saying, “Hey, Alexa, order me some toilet rolls,” invokes into being this chain of extraction, which goes all around the planet… We’ve got a long way to go before this is green technology. Also, systems might seem automated, but when we pull away the curtain we see large amounts of low-paid labour, everything from crowd work categorizing data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources, and it is people who are performing the tasks to make the systems appear autonomous.

Problems of bias have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors — women offered less credit by credit-worthiness algorithms, black faces mislabelled — and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification, and you start to see forms of discrimination not just when systems are applied, but in how they are built and trained to see the world. Consider the training datasets used for machine learning software: they casually categorize people into just one of two genders; they label people according to their skin color into one of five racial categories; and they attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has become baked into the substrates of AI.

What do you mean when you say we need to focus less on the ethics of AI and more on power?
Ethics are necessary, but not sufficient. More helpful are questions such as: who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? What we see time and again, from facial recognition to tracking and surveillance in workplaces, is that these systems are empowering already powerful institutions — corporations, militaries and police.

What’s needed to make things better?
Much stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed. We also need different voices in these debates — including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.

Read more of this story at Slashdot.