Pace Live

Trevor Paglen Talks to Kate Crawford

April 19, 2020
Conversation recorded on April 9, 2020

In the second episode of our Instagram Live conversation series, artist Trevor Paglen spoke with Kate Crawford, a leading academic on the social and political implications of artificial intelligence, about Art, Politics, and AI in the Time of COVID-19.

Kate Crawford: Hello, Trevor. Are you here?

Trevor Paglen: I am here. How are you doing, Kate?

KC: I'm doing great. I can't believe this is working. This is quite the technical marvel that we're engaged in right now. I'm pasting a little comment in here so everybody will know the name of this conversation. There we go, done. We have a title now.

TP: Aww, beautiful! It's nice to see so many friends showing up here, too.

KC: I know, it's lovely. Oh, hi, friends! We can see you all. We can see into your homes. Well, hi, Trevor. How are you doing?

TP: Hi there. I’m okay. I have a little bit of stage fright, which is super weird because I give talks all the time and all we're doing is sort of talking on the phone, but I am strangely nervous about this.

KC: Yeah. I have to say, it's an extremely odd format. It's lovely to see you, though. It feels like such an incredibly heartbreaking time and this is such a weird way to be touching base with friends. But hi, lovely to see you.

TP: Hello! Hello. We're all on our like, astronaut international space station communication platforms here. It is a different place, you know. It feels like a couple of weeks ago we were in Paris doing our big show about facial recognition and its history. The year was going crazy and then it just got all turned off.

KC: Yeah. I have to say, January feels like it was about five years ago now. I can't imagine how time is feeling for you, but I do have wonderful memories of our show in Paris. And now, of course, France is in a recession and you wouldn't be able to fly there if you wanted to. The other thing I was thinking about recently was the show that we did at Fondazione Prada in the second half of last year, because, of course, that's in Milan and so many of the people that we worked with, from the curators to the art handlers to the installers, have just been going through hell. Of course, here in New York we're just six weeks behind them. I think you're in New York, too, now, right?

TP: Yeah, I'm at my studio in Brooklyn. I have a small studio here in Brooklyn where I work a lot. And then the big studio is in Berlin. But yeah, northern Italy, it's really messed up. You know, I was going to have a big show open in Turin, I think last month. It was in this giant, hangar-sized place. It was going to be all the full-sized models of the different kinds of satellites that we built over the years and we were supposed to go and install it, and I was like, ‘you know what, probably this feels sketchy.’ And then ‘no, it's okay, it's okay.’ And then, you know, a week later, it's like, ‘no, no way.’ And the space there, OGR, has actually recently been repurposed as a hospital.

KC: You're kidding. So that museum has become a hospital? That huge one in Torino?

TP: Yeah. They turned that into a hospital and just basically every show has been either delayed or canceled. It's sort of like a whole different world. And it happened really dramatically, too, you know? It almost felt like overnight. It's like, woah.

KC: Yeah. I'm in Greenwich Village right now and it's completely silent apart from the occasional ambulance siren. And then of course at seven o'clock we all go outside, and we cheer and scream and bash on pots and pans to say thank you to all of the care workers who are doing the really hard jobs right now. But other than that, it's the strangest feeling being in New York and obviously it's a really, really hard time for the city. We're all pretty heartbroken. A thing that's really on my mind is what's happening in the art world. You’ve had shows canceled—the whole year is in doubt now—but you also employ a lot of people. I mean, you've got two studios, the one here in New York and you've got the one in Berlin. How are you dealing with that? How are you guys coping?

TP: I'm not going to lie, it's really stressful. But the good news, if there is any...I mean, there's two things I guess—one is that I have a bunch of people, you know...it's not a bunch, there's a small handful of people that work for me and they all work in the studio in Berlin, and fortunately the German government has a program called Kurzarbeit, where if your hours are reduced the government will pay sixty percent of your salary. So, everybody in the studio has gone down to half-time, but they are still getting about seventy-five or eighty percent of their normal salaries. So that's good and that's sustainable for longer than it would have been otherwise. But I certainly have friends with much bigger studios that just had to furlough everybody, and I think that that's pretty common. It's just that, overnight, everybody is losing their jobs in a lot of cases, because nobody knows when projects are going to start coming up again or whether there's going to be income. And I think there is that sense of not knowing what's going to happen. Well, I guess the good news is that the people who work for me are employed in Germany, which has a much stronger social safety net. You don't get that same sense of fear that you're starting to feel here now, or certainly that I'm starting to feel here now. So, it's a little bit less intense, you know?

KC: Absolutely. I mean, here it's extraordinary. We just found out today that the Trump administration decided to stop subsidizing Covid testing (this was later reversed). Again, it's that sense of having an x-ray put on the societies in which we live and being able to see right into this terrifying degree of inequality, of precarity, of the difference between people who are extraordinarily at risk right now—people who are delivery workers, people who are in hospitals, and everybody who's at home sheltering and terrified. In particular in New York, but really across the US right now, people are feeling that precarity and that lack of a unified vision for how a society is going to change. And it is going to change. That’s really clear now.

TP: Yeah. I think the one other thing that ameliorates it a little bit is the sense of a lot of people being in this together, you know what I mean? Everybody I know who's in the art/artist side of things or on the gallery side of things, everybody's kind of screwed and nobody knows what's going to happen. But at least there's not this like "oh, ha-ha," like “so-and-so screwed something up” or this or that. You know, there is just a kind of sense of solidarity, which weirdly feels good. And it is one of these extraordinary things that can happen in times of crises, and someone like Rebecca Solnit talks about this a lot. There are spaces where you can try new things and there are different forms of care that can emerge from times like these if we want to be optimistic.

KC: I think that's right. I mean, I'm thinking of the work of people like Amy Kapczynski, who really has been doing extraordinary work in the field of public health. And she's asking “What does an ethics of care look like in a moment like this?” How do we care for each other? How do we show up for each other? And to see the real resurgence in mutual aid networks has been one of the things that's been keeping me going. I mean, there's so much of that happening in New York right now, but also around the world. To see that rent strikes are working is really fantastic. But I'm going to take a pause here, because we now have a very full channel and I'm going to introduce us and tell you why we're all here. Hi, everybody! Welcome to the Pace Instagram Live channel. My name is Kate Crawford. I'm a Distinguished Research Professor at NYU, where I am co-founder of the AI Now Institute. And I'm delighted to be having this conversation with my friend and collaborator, Trevor Paglen.

TP: Hi there, I'm Trevor Paglen and I'm an artist. And that's about all I have to say.

KC: Tell us a little bit about the shows that you had coming up that are now maybe not happening.

TP: Well, I mentioned the show in Italy that I'm super excited about. Again, it’s this giant, almost hangar-like space. And the show was all the full-size models of these different reflective satellites that I've built over the years. It's just giant sculptures of these mirrored structures with panels and stuff like that. It's going to look almost like an aircraft hangar, but then with these really bizarre and—I sheepishly say—quite beautiful objects. And then the other big project that was coming out this month was one I've been working on for quite some time, which is a project looking at computer vision and AI and looking at some of the stuff that you and I have done a lot of work around and looking at that in relation to 19th-century American Western photography. So, trying to think about: Can you locate something like artificial intelligence or computer vision within a history of photography that would begin with figures like Timothy O'Sullivan doing survey photographs for the Department of War in the 19th century, and then moving through someone like Muybridge, who starts as a landscape photographer, also working for the Department of War, photographing the Indian Wars, and then, of course, doing the famous motion studies? And the motion studies kind of set off a history of technology and a history of technologically enhanced vision that I think you can make the argument goes up until the present in things like computer vision, AI, what have you. The project's been going back to a lot of these sites of classical, kind of Western frontier photography. In the studio, we've built a lot of software to visualize what different computer vision algorithms are doing. You're seeing a lot of these landscapes that are showing you the landscape, but then also the landscape as it is seen through different computer vision systems. So, we can say ‘let's look at, you know, a tree, and see it through the eyes of a guided missile,’ or something like that, and make it look like that. It's a fun project and it's tricky because in this moment you're like ‘okay, who knows when this thing is going to open?’ It's all done. I like the project, so I’m thinking about...how do you share that with people?

KC: Can I make a crazy suggestion? I'm not sure. We haven't really planned this, Trevor, so forgive me if this is a bit out of order. Do you have anything that you could show us? I mean, we'd love to see some of these images. Do you have any printed works that you could show on screen?

TP: Yeah, one second, here, one second...Well these aren't works from the show. One of the things that I'm doing with that body of work is going back and printing a lot of them in 19th-century styles. So, doing things like albumen prints and carbon prints and using these really old, classical styles. The process is like shooting with an 8x10 camera, taking that film, digitizing the film, running it through these computer vision systems, and then going back to a film output, and then making these contact prints using, you know, egg whites and silver nitrate and stuff like that. I had my own darkroom, where I was making these things for a while. This is kind of what an albumen print looks like. This is not from the series, this is from an earlier series, but you get that sense. And these are kind of what other ones look like. But yeah, I'm super happy about it. But again, we'll see what happens.

KC: So how do you make an albumen print? Where do you do that?

TP: Well, nowadays I work with this guy named Barret Oliver, who specializes in this. So, if you're the Guggenheim and you need some prints done in the style of the 19th century, this is the guy that you go to. And he's a really interesting guy. He lives on this kind of weird farm outside of LA and he raises his own chickens so that you can make the albumen, and you kind of go to his house and it feels like you're in the 19th century. But he's this really, really talented guy and can just tell you everything about all the different techniques for 19th-century printing. I've been hanging out with him doing that, but I really learned how to do it initially by building a darkroom in my bathroom and making a giant mess, like sitting there beating eggs, you know, for days at a time, and then mixing in all these chemicals. It's very alchemical and very material.

KC: It sounds like a perfect Covid-19 project, Trevor. You should basically just raise chickens and start making albumen prints in your backyard. You know, if any of us had backyards in New York, we could totally do that.

TP: I mean, I think that's really real. And I think one of the things that is certainly on my mind, and I think on a lot of artists’ minds, is related to that question—how many things that we think we understand the meaning of are just changing overnight? Like, the other day I was at the beach just filming some stuff for this other project that I was thinking about, and I looked up and was seeing planes fly out of JFK and was like, oh, woah, airplanes. They just mean something really different now, that image of the airplane in the sky. It's just really weird to be an artist right now, because with all of these images that you work with and the language that you have, the meaning is changing really quickly. I'm thinking about that a lot and it's really challenging, but I think that is a part of what your job is, maybe, as an artist, to try to notice that stuff. And it's funny that you mentioned the albumen prints being a Covid kind of project, because it does seem to me that working at that scale feels about right, right now. So I don't know. It's just a really sad time.

KC: I think there's something about having really personal projects that we can actually make at this time. I mean, obviously with those prints they're one-offs, so you're making these interventions that are inherently really personal.

TP: Yeah, that’s absolutely true. And what about you? I know you're finishing up a book and I imagine the meaning of that is changing as well.

KC: Oh, yeah. So, I co-lead the AI Now Institute at NYU and we are a team of almost thirty people, and it's been really hard to think about how we do it all now. Fortunately, we decided to send everyone to work from home two weeks before the work-from-home order in New York, so we're really proud of getting people home safe. And we have a lot of projects that are really relevant to this right now. We have one of our post-docs, Liz Kaziunas, looking at issues of AI and healthcare, and so much of this is happening right now through contact tracing. You know, many governments, including Israel and Hungary, are running these concerning projects, which force people to give up their data, including who they're in contact with, to build databases centralized by the government. There are a whole lot of projects that we've been investigating at AI Now. Erin McElroy, another post-doc with us, specifically works on issues around rent strikes and around what's been happening with gentrification in cities like Oakland and New York. Actually, their projects could not be more relevant right now, when we all feel really precarious about our housing, and we're starting to see the cleavages between who has so much right now, who is in positions of such wealth and privilege, and those of us who are in much more unstable situations. They've been researching that for years and creating maps of eviction patterns. Essentially, if you look at the Anti-Eviction Mapping Project online, you'll get to see these maps that they have been building for so many years now. And of course, those eviction maps are going to be looking really scary a couple of months from now in the US.

TP: Absolutely.

KC: So, I couldn't have been more excited to be working with the people that I'm working with. They're also creating a lot of mutual aid projects right now, as well as stress-testing the surveillance systems and asking a lot of critical questions about how people are being surveilled.

TP: And I'm thinking about your work so much right now, too. It's funny, earlier today...so I teach a class at the University of Georgia and we were doing it remotely today, and we were doing Foucault’s panopticon, which is very standard. What I had totally forgotten until I re-read the section of the book today is that it starts out with a description of the Black Plague, and it's talking about what regulations were being put in place to quarantine people. And you know, you can't leave your house on pain of death. You have to show up at the window every day and show that you're healthy, and it was crazy to see the resonance between that and today. And the particular point being made, to use that kind of Foucauldian language, is that the plague is like a perfect environment in which to institute new kinds of surveillance and new kinds of discipline, because you can do it in the name of public health and things that are good and that are matters of life and death. But of course, once you've created those systems, they have a tendency to stick around. And I'm imagining this is something that you're thinking about a lot right now as we're seeing governments wanting to track everybody's cell phones and things like that, in the name of public health.

KC: Yeah. It's funny because I'm writing a book at the moment called Atlas of AI, and one of the chapters looks specifically at the close relationship between the AI industry and the state, specifically the military. And of course, that was where AI was really funded and supported by DARPA in the ‘50s and ‘60s. But even now we're starting to see this very close militarization of the AI industry and AI sector. What we've been seeing with the Trump administration is that they've been inviting companies like Palantir, who I think are doing deeply problematic projects, particularly at the border and particularly tracing and deporting immigrants in the US, and inviting them in to help design a system to deal with Covid-19. And they've also invited Clearview AI, would you believe? The company that has basically dragnet-harvested the entire internet for all of our images, without any consent or any permission, and is creating giant policing databases. And I don't know if you saw this, Trevor, but there was a story that came out this week that traced the people working at Clearview AI and found that they have these deep connections to the far right. I'm talking about the really extreme, neo-reactionary, dark-enlightenment version of the far right working at Clearview AI. And these are the people who now have access to the highest levels of government to think about how our devices are going to be tracking us. So, I'm extremely worried about what we're seeing there, and I think it's something that we should all be talking about, and thinking about how we're going to organize.

TP: I think that is so right, and that freaks me out so much. You and I gave a talk, I think, in New Zealand last year or two years ago, and it was all about different monsters in AI. And I remember one of the monsters was the monster that's in your room, like when you're calling on the phone...

KC: ‘The call is coming from inside the house!’

TP: Exactly! And there was a section about the rise of basically far-right culture within tech companies. And you're just seeing that more and more, like every time I go to San Francisco I see it more and more. And I'm like ‘what is this,’ you know? And weirdly, a friend of mine, a journalist named A.C. Thompson, who writes for ProPublica, has done a lot of reporting about the rise of fascism and tracking those kinds of groups. He's someone who's never done technology, but now he's getting interested in these technology sectors and these kinds of basically fascist ideologies within technology companies as well. So, it's really real, and it’s really intense. We have that on the state side, the just horrifying politics side, but also in terms of just the everyday tools that we use. I'm thinking a lot about the fact that I'm on endless Zoom meetings, and why did Zoom become this thing when Zoom's a piece of malware, you know? What is that encroachment, as well? How do you see that playing out?

KC: Well, it's interesting because we're living in this period where we're seeing the collapse of the social fabric and a shift of work culture to the home, as though that's something which could be immediately normalized. And of course, it can't. I mean, the things that support the practice of going to work every day are all these other systems that we rely on, from schools to childcare to extended friend networks to being able to get our food and to being able to cook for others that we care for. So many things have actually collapsed in this moment that this idea that we all just simply work from home with that same level of productivity that we always had, I think is a farce. I also think it's incredibly dangerous that we're assuming that people under enormous stress and duress are somehow able to be doing six hours of Zoom calls a day, which is personally what I'm doing now and it's driving me crazy. So, when we start imagining the new future that we want to come on the other side of this, can we please have fewer Zoom calls? I think that would be a really great start.

TP: But do you think that creates an even crazier golden age of data harvesting, in the sense that now you do have all these people using online platforms? I imagine that if I were Skype or Zoom, I'd want to start creating training sets from those calls. I'd want to, you know, build affect training sets based on, ‘oh, I know that somebody's distressed, what does the timbre of their voice look like? What do the movements of their face look like?’ And I'm wondering the extent to which the fact that we've been forced onto these online platforms represents not only a consolidation of power in terms of the economic sector, but in terms of the kinds of values that are built into those infrastructures and the ethics that are built into those. That's something that we've talked about a lot.

KC: Yeah, that's right. I mean, we should explain to people how affect recognition works. It's something that is often in the back end of a system. You won't see it, but these are systems that essentially track your facial movements as you speak on video, like what we're doing right now, Trevor. An affect recognition system would be looking at what are called the micro-movements of our faces and then mapping those against hundreds of thousands of videos to try and make assumptions about how your external expressions of emotion relate to your inner state. And it's interesting because, of course, this technology is highly questioned right now. It's something that I've done and something that I'm doing in this book—going back and tracing where affect detection came from. And you can really go all the way back into the 1960s and ‘70s to track the work of one psychologist called Paul Ekman, who believed that there are essentially six universal emotions and we show them on our faces, like when we're happy we have a smile like this.
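
To make the pipeline Crawford describes concrete, here is a minimal, purely illustrative sketch in Python: per-frame facial landmark movements are reduced to a feature vector and scored against Ekman's six categories. Every name and number here is a hypothetical stand-in, not any vendor's actual system.

```python
# Toy sketch of an affect-recognition pipeline: facial micro-movements
# across video frames -> feature vector -> one of Ekman's six labels.
# The features and weights are hypothetical stand-ins; real systems
# learn these mappings from large corpora of labeled video.
import numpy as np

EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def micro_movement_features(landmarks: np.ndarray) -> np.ndarray:
    """Average frame-to-frame displacement of each facial landmark.
    landmarks has shape (frames, points, 2): x/y positions per frame."""
    deltas = np.diff(landmarks, axis=0)         # movement between frames
    return np.abs(deltas).mean(axis=0).ravel()  # mean magnitude per point/axis

def classify_affect(landmarks: np.ndarray, weights: np.ndarray) -> str:
    """Linear stand-in for a learned movements-to-emotion mapping."""
    scores = weights @ micro_movement_features(landmarks)
    return EKMAN_LABELS[int(np.argmax(scores))]

# Toy usage: a random 30-frame "video" with 68 landmarks and made-up weights.
rng = np.random.default_rng(0)
video = rng.normal(size=(30, 68, 2))
weights = rng.normal(size=(len(EKMAN_LABELS), 68 * 2))
print(classify_affect(video, weights))  # prints one of the six labels
```

The point of the sketch is the shape of the claim, not the math: whatever the model, the output is forced into one of six boxes, which is exactly the Ekman assumption under dispute.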

TP: I have this, actually, right here.

KC: Yeah, let's show people what we're talking about.

TP: These are the Ekman-ish kind of training data that they use to make those things. The theory is that you make these different faces and it kind of betrays your inner emotional state.

KC: Right. So, we actually used this training set in our show in Milan at the Fondazione Prada. Do you want to talk a little bit about the history of these images, Trevor?

TP: So, you're talking about Paul Ekman, whose research kind of underlies this. And I think Ekman was making photographs and then he was showing the photographs...he'd go to Papua New Guinea, and say ‘here's a picture of somebody making a face, what is that face?’ And so that's part of the method that he used to try to get this theory of universal emotions, right?

KC: That's right. I mean, he was not trained as an anthropologist, but he was going up into the hinterlands of Papua New Guinea and taking these pictures of people making cartoonish expressions and then showing them to tribes who had lived remotely and hadn't been exposed to modern media, to see if he could get reactions. And his first attempt at doing this was a complete failure, but he kept developing it. Then this idea became so prevalent in psychology, even though it was deeply disputed. I mean, people like Margaret Mead were deeply critical of it. But these ideas have been adopted in machine vision because they give you a metric—because we can say there are six emotions and we can map them to faces—and they have been built into technical systems, even though they're disputed and, frankly, I think should be disputed further. But now people are using it in hiring contexts. So, you know, when you go for a job interview, someone's videoing you and then, by looking at your facial expressions, deciding if you are somebody who would be a good employee.

TP: It's like those old 19th-century drawings of the person with the big forehead who's supposed to be, you know, a good father. Like the old phrenology drawings. To me it seems like that, and that's the way we talked about it in some of the Paris stuff as well.

KC: Yeah, it's definitely phrenological. I love that somebody in the comments, by the way, just said that emojis are the latest kind of iteration of Ekman's six standard emotions.

TP: Totally. I think that's right. And one of the things that we have talked about, and wrote about a little bit in an article together called “Excavating AI,” was trying to look at the politics of the images that are used to train AI systems. You were talking about these photographs Ekman would make and show to people around the world. Well, now what they do is make photographs like that and show them to computers to try to get them to recognize different emotional states, or different objects, or what have you. And these are called training images. We wrote a whole essay on the politics of these images, and I think where we landed feels like a pretty relevant point right now, which had to do with the fundamental question of what images mean. Who gets to decide what images mean, and who gets to hardwire those decisions into technical systems, and therefore kind of juice those systems to see in very particular kinds of ways that have forms of power built into them? And I wonder if all of us sitting here at home and trying to work as remotely as possible are subjecting ourselves to those forms of power to a far greater extent than we might realize on one hand, and on the other hand, helping to create a situation in which those systems could expand greatly.

KC: I think that's right. And it's interesting because, of course...somebody has just said in the comments: who gets to decide what those images mean and are we seeing new forms of bias propagate? And I think bias is kind of the very top layer; you know, it's absolutely built into these systems. But I think what's even more interesting, when you go down through the layers of how technical systems are constructed, is that ultimate question of who decides. Who decides what an image means? And because images are so multi-chromatic and can mean so many things, it's an extraordinary kind of hubris that we have systems that claim to detect not just how we're feeling, but also our worth as employees, or as artists or, you know, you name it. I think bringing that sort of critical capacity to opening up technical systems is something that you and I both do a lot of, Trevor. Showing that to people, and showing it to people in an art context, I think is also really powerful. I mean, that was my experience of doing the Training Humans show.

TP: I think that's what I like about doing this stuff in the art context. If you're looking at these images from an art perspective, you actually look at them. Like, let's look at these images that we're using to teach computers how to see. And you look at an image and think, this image has nothing to do with whatever this computer is trying to...What are these decisions that are being made? It's often hilarious, but also often just terrifying when you think about the implications of it. And it does come back to that question that you mentioned about who gets to decide what images mean. And I think anybody who thinks that they can decide what images mean is incredibly hubristic, because I can tell you that an image like flowers blooming right now means something really different this spring than it meant last spring.

KC: They do, they really do. This is a perfect opportunity to talk a little bit about a project that we were both involved in and that you built, Trevor, called ImageNet Roulette, which I've been thinking of a lot recently. Of course, with ImageNet Roulette, we decided to keep it up on the internet for a brief period of time and then it went absolutely viral, and now that's a word none of us should ever be able to use again, because viral is truly a thing that we don't want in our lives. But do you want to talk a little bit about ImageNet Roulette and what the motivation was behind that project?

TP: Yeah. So, ImageNet Roulette was a project that...it needs a bit of background to explain. We were looking at the actual images that are used to train artificial intelligence systems, and these are datasets that can be anywhere from a couple of hundred images to millions of images. ImageNet is probably the most widely used, publicly available dataset. Google has ones that are bigger and better. Facebook has ones that are bigger and better, but we can't see those. But we can see ImageNet, because it was made at Stanford University and it's kind of a benchmark. It's fourteen million images organized into 22,000 categories. And everybody in computer science is like, ‘oh, it's so big, nobody can actually look at this.’ And I was like, ‘sure you can, 22,000 categories, that's, you know, a quarter the size of a book; you can look at that in a day, for example, if you really stick to it.’ And you had been doing some of this as well. We were talking about it a lot and we found there are about 2,800-something categories of people. So, it'd be like ‘cheerleader,’ and then a bunch of pictures of a cheerleader, or like ‘Boy Scout,’ and we look at some Boy Scout uniforms and what have you. And then we found out that it gets really gnarly really quickly. You know, because basically they just took a form of the dictionary and translated that into categories and then had Amazon Mechanical Turk workers put images in them. So they would have categories like ‘cheerleader’ or ‘scuba diver,’ and these are things that we may or may not think are particularly controversial, but there are definitely ones like ‘bad person,’ you know, ‘alcoholic,’ ‘kleptomaniac.’ And so what I did was take all of those ‘people categories’ and said, ‘okay, let's train a model on all of these categories of people,’ and then just build an application that will tell you who you are according to an image model trained on these. And it was really just meant to be a project that showed just how bad it is, how bad a lot of this artificial intelligence stuff is, on one hand. And I think on the other hand the corollary is: What are the norms that are used in universities and research environments to make these kinds of datasets, and just how quickly can the creation of any kind of dataset go in really, really horrible directions unless people are really looking at what's being put in there? And the thing is that the scale of these things is so large that it's a lot of labor to actually look at them. So, what you have is generations of people creating training sets and then using the previous generation’s material. You see things like really bad politics of classification built into training sets that just propagate through time and propagate to all kinds of other systems. So, really that's what that project was, a way to just try to call that out. And it was a funny way of doing that.
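
For readers who want the mechanics of what Paglen is describing, here is a minimal, hypothetical sketch of the inference step of an ImageNet-Roulette-style tool: an uploaded photo is scored against only the “person” categories and the top-scoring label is returned, slurs and all. This is not the project's actual code; the model callable and the label file are placeholders.

```python
# Sketch of ImageNet-Roulette-style inference: score an uploaded photo
# against only ImageNet's "person" categories and return the top label.
# Not the project's actual code: `model` stands in for any classifier
# trained on those categories, and the label file is a placeholder.
from typing import Callable, List
import numpy as np
from PIL import Image

def load_person_labels(path: str) -> List[str]:
    # One category per line: "cheerleader", "Boy Scout", but also
    # "bad person", "alcoholic", "kleptomaniac"...
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def classify_person(image_path: str,
                    model: Callable[[np.ndarray], np.ndarray],
                    labels: List[str]) -> str:
    """Preprocess the image and return the highest-scoring category name."""
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0  # normalize to [0, 1]
    scores = model(x)                              # one score per category
    return labels[int(np.argmax(scores))]
```

The politics live in the label list: the code will return ‘kleptomaniac’ with the same confidence machinery it uses for ‘cheerleader,’ because the categories were fixed before any model was trained.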

KC: I'm so glad we included it in the show in Milan, because I could see people interacting with it. And what I really liked about ImageNet Roulette is that we can talk at this abstract level about artificial intelligence and machine vision, but the reason ImageNet Roulette was so powerful was that it was an app and a website that you could play with. So, essentially what you could do is just upload a picture of yourself and then see how you would be categorized by ImageNet, the most-cited, most well-known training set in the world. And people were horrified because, you know, black people were being classified very differently to white people. Women were being classified very differently to men. People could see the normative assumptions, the stereotypes, the slurs, and the insults that were coming from a landmark training set. You didn't do anything to change the categories; the categories were already there. I think it was an interesting way of doing exactly that excavation of a technical system. It's funny, I've been thinking about that in relation to all these Zoom calls and Hangouts and all these technical systems that we're using right now to communicate with each other. They're recording us, and not only are they capturing the data, but they're making training sets based on these videos, from which they're going to be making a whole lot of assumptions about who we are and what we're saying and what we're feeling. And I think that's the thing that worries me most about this moment in terms of the technical back end. There hasn't been enough public debate around what is an acceptable use of these systems and what's an acceptable use of our data.

TP: We've talked a fair amount about the book that you're working on. One of the things that I think is really great about the way that you're articulating what an AI system is, is that you're articulating it as something far broader than just what's going on inside of a neural network. You're talking about infrastructure, you're talking about geography, you're talking about classification, you're talking about geology even. What are some of the aspects of AI, according to Kate Crawford?

KC: Well, I've always been interested—certainly for the last ten years—in moving away from this idea of artificial intelligence as being, essentially, abstract code...you know, a series of technical systems that are somehow in the cloud and de-materialized. In actual fact, the opposite is true. Through years of research, I've been tracking the very material substrates of AI, everything from where the systems are built and where we extract the minerals, through to what it takes to actually make them work in terms of labor and data extraction. And then where they end up, which is generally in massive e-waste tips in places like Ghana and Pakistan. Certainly, one of the projects that gives you a sense of where the book begins is a project that I did with Vladan Joler, which is called Anatomy of an AI System. If you're listening right now, you can look it up on the internet and you'll be able to see a big visualization and an essay. The visualization basically shows you the full life cycle, that sort of birth, life, and death, of a single Amazon Echo unit. To make one of these Echo units involves a vast logistical chain, so we traced that. Vladan and I went through the places where these things were actually being constructed, from mining, through smelting, through container shipping. And we also looked at the pay scales: how much Jeff Bezos gets paid per day versus how much somebody who is a click-worker gets paid for tracking all of the ways in which Alexa is speaking – is she getting the right words in the right order? So, my book really is trying to do that at a large scale, looking at how this pertains across the entire understanding of AI as a system that is extractive. It's extracting not just our data, but also vast amounts of natural resources and vast amounts of human labor that we don't see. It's interesting, because of course that book was going to be coming out this year in September and now it won't be, thanks to, of course, what’s happening to the publishing industry. Just like what you're talking about in the art world, the same is happening across publishing as well.

TP: I would imagine academia...well, I don't know. I wonder the extent to which academia is also insulated. I mean, we saw the San Francisco Art Institute close. Certainly, my classes in Georgia all kind of went online, but that doesn't really work, you know? I think that there are many, many layers of precarity, and the longer this goes on, the more of those are going to get peeled away. And I wonder. We all see those glimpses of different kinds of, maybe, solidarities that are emerging, or different forms of care that are able to happen in times of crisis, because things kind of get thrown up in the air and people have to invent new rules or new ways of living together. I'm wondering if you're seeing aspects of that happening on the technology side, particularly in the places that you work.

KC: Yeah, speaking of academia, it's really hard out there. Somebody just mentioned in the comments that in London, you know, the entire university system feels like it's collapsing. We saw this week that Princeton is going through a massive hiring freeze and that they're saying they're going to start possibly even laying off contractors and people who don't have tenure. There's a Google doc now that a group of academics is putting together, showing all the universities that are going through hiring freezes. When you look at this list, what is extraordinary about it is that it's hard to find any university that isn't. I mean, this is all universities right now. Universities in some ways feel as though they are somehow more removed from the engines of capital and from this sort of complete precarity that we're seeing in, say, the service industries and the travel industries. But, in actual fact, they're profoundly exposed to market collapse, and they also rely on foreign students. Certainly here in the US, but also in Australia and the UK, universities have been relying on foreign student income to really prop up their budgets, and we're starting to see a collapse of that. They’re not going to see those flows of capital in the way that they have in the past. And so what you're seeing now is universities having to shift immediately into online teaching without much by way of training their academics in how to do that, let alone, you know, the quality of education that students might be getting. And of course, many students are now saying, ‘we want refunds, this is not what we paid for.’ It is completely understandable. I think universities are in a really difficult place right now, and I think it's going to get even more difficult in the fall, because there's no guarantee that come September, this is going to be resolved.

TP: Well, if there's anything hopeful about that, maybe it is the reevaluation of social safety nets. You know what I mean? This has brought that into really stark relief. What are the communities and nations and places that have functioning social services in place, and which don't? Working between here and Germany, the difference is dramatic. You know, a lot of friends of mine are freelancers or artists, and Germany set up a relief program and within twenty-four hours people were getting checks. And here, I've been going out and just photographing, trying to get out of the house a little bit, and you're seeing the rising sense of desperation on the streets, and it's intense. I think this time certainly teaches us that a different world is not only possible, but probably necessary. And maybe that's a good thing.

KC: And I couldn't agree more. I think if there's one thing that a pandemic and a truly global crisis like this shows us, it's that all countries are impacted now. Everybody is impacted. However, it's not a great leveler in any way. In actual fact, this virus is absolutely hitting black and brown and low-income communities so much harder, and it hasn't even really taken off in Africa and India yet. To see how this is going to highlight the level of profound inequality is the thing that is most devastating. I think it also offers the most serious provocation for us to talk about how we design the world that we live in. You could not have a better example of why we need universal health care. This is it. We have, what, eighteen million people out of work in America right now, and most of those people had health insurance through their employers. As somebody who grew up in Australia, I'm used to having universal healthcare. You can get healthcare regardless of whether you're in work or not. And you cannot have a moment like this without admitting that it is essential that we completely reconstruct healthcare in the US. And I think across the board there are so many areas where this is the moment for a deep political reassessment. And the other one, of course, is climate change. I mean, with all these planes grounded, with all of industry on hold, have you seen the air quality lately? In New York you can actually see the skyline. It's like a different city, and that's across the board. One of the things I have on my phone is an app that tells me the air quality in cities all around the world. And the reason I have this, of course, is because I was going through the Australian fires back in December, and it was so horrifying, and we couldn't breathe, and we were being evacuated. So, we were checking these apps every day and we were seeing that we had air quality worse than had ever been seen in Beijing or Delhi because of the thick ash in the air from the fires. Right now, across the world, air quality has just gone way up and particulates have gone way down. You're looking at the fact that we can stop the engines of constant production. And we're going to need to talk about what comes next for so many reasons: because of healthcare, because of climate change, because of the way in which we structure our societies. And it has to be a profound moment for critiquing capital.

TP: Absolutely. Absolutely. Well, I think we've gone over our allotted amount of time, and I think that that's a great place to end the conversation. I think it's just seeing: What are the possibilities? What are the things that this moment is forcing us to urgently reassess and really underlining how serious it is?

KC: I think that's right. And take care, Trevor, I hope you're doing okay there, and I hope you can find a way to make those albumen prints, have a home project, and set up your own darkroom.

TP: I definitely have plenty of stuff to do here. Don't worry about me, always keeping busy.

KC: And thank you, everyone, for joining in. We can see you all on the comments. There have been far too many questions to answer all of them, but please stay in touch, reach out. And I hope you're all doing okay and getting through this, as we are all going through this together. And it's just been really lovely to see you, friends.

TP: Absolutely. It's great to see you. Okay, take care everyone. Bye. Bye-bye.
