AI’S UNSEEN BORDERS


ESSAY
Slide 1:

Hello! My name is Linda and I am an artist from Senegal. My background is in design, and before that, I briefly taught introductory college courses in data science. I started experimenting with AI, specifically GANs, in 2019. I was curious about what AI was and what it could do. The headlines at the time were scary: AI mistakes leading to wrongful incarcerations, for example. My own experience playing with GANs was softer, since I was using my own datasets. But violence was never too far behind, because it is so inextricably linked to how AI is developed and how it runs. This violence persists today.

My presentation focuses on my experience as someone on the periphery of this technology, how I negotiate it in order to create with it, and why I keep using it.




Slide 2: 


What is an image? This question is central to my practice because it underlies how meaning is constructed and by whom, how it is perceived and in what context, how it is experienced and to what ends. It has always been clear to me that an image is a mediated expression shaped by context and intention, because images have been the most powerful instrument for creating an idea of the place I come from. I also know the power that this idea has had in justifying my country's and continent's position in the geopolitical and cultural order. So when I, being who I am and coming from where I come from, examine what an image is, I am looking to understand the interplay between the viewer and the viewed, and the power relations that bind them to each other. I am always wondering whether, in this performance of power, I can see clearly.




Slide 3:

There is another instrument that has worked hand in hand with images to entrench any idea we hold of Senegal, or of Africa at large, today, and that is data. Unlike images, data is considered empirical. We find comfort in it because we believe data accurately describes what the world is. In the face of the unknown, we have braced ourselves with models of reality that lend authority to our ideas about it. Just like images, though, data is only a deputy for reality, not reality itself.

What we measure, and where and how we measure it, is affected by who we are and by our positionality relative to what we are measuring. The stories and the rules we create through these measurements are therefore always interpretive, speculative, incomplete, and fragmentary.

Just like an image, data also betrays context and intent, and reveals power structures.

So what happens when data and images become more than discrete instruments used together in the making of meaning? What happens when data makes images? What happens when we see with AI? 




Slide 4: 

When I interrogate even the latest diffusion models about my city, I am not surprised that they see mostly dirt and dilapidated buildings. Stop most people on the street and you'd probably get the same idea. It took clear intent and considerable effort for this idea of Africa to become the dominant narrative about it. I can confidently tell you this is not actually what my city looks like, but it would only be my word against the volume of images and data that confirm this story.

So where is the chasm? What is the truth? I have evidence to support what I know, but so do these diffusion models. When I search public domain archives about Senegal, it's clear that an overwhelming amount of the data about it comes from people who aren't from there and/or don't live there.




Slide 5:

AI is a groundbreaking technology to me because it tangibly confronts us with how our theories about humanity have affected the way we see and make sense of the world. AI models are memory machines, though the memory is not theirs but ours. They draw from how *we* have captured the world to synthesize their understanding. The images they produce reveal the patterns in our thinking and in our seeing, from the moment we started making images until now.




Slide 6: 

The character of the images AI synthesizes raises many questions about taxonomy and the political nature of knowing and recording: Who decides which things (objects/ideas/places/people) are worthy of seeing, then naming? Who names these things once they are captured? Who looks for the captured and named things to train models with? Who points the system to where the named things are? What is missing? And why? And how?

If I use AI today, it is to propose my own perspective, my own way of seeing, and how that counters its understanding of me. Unlike those who can take this technology at face value, I have to put in considerable effort to “empiricise” my perspective: that is, to speak the language of the models and provide context in the form of data.




Slide 7: 

To use AI in a way that feels somewhat fair to me, I have to engage in one or more of the following activities, each of which reveals flaws in the data used to train AI models. For instance, I have to:

  • Gather my own data for training, from scratch, using my own measurement tools (boots on the ground talking to people, scanning archives, taking pictures, digitizing tapes, etc.)
  • Spend considerable time negotiating red tape around datasets that do exist (loads of public data are collected and/or stored by private, non-governmental entities)
  • Patch up and/or bring together fragmented datasets (the entities that collect the data may change over time)
  • Correct fundamental flaws in how the data is labeled, which can mean re-contextualizing it altogether or even challenging the taxonomy used in the labeling process (basically, the data doesn't correspond to what it says it does); a rough sketch of this kind of patching and relabeling work follows this list
  • Have the time, resources, and expertise to engage in one or all of these activities.
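
To make the scale of this work a little more concrete, here is a minimal, hypothetical sketch in Python of what the patching and relabeling described above can look like in practice: merging fragmented label files for images I have collected myself, remapping an inherited taxonomy to my own terms, and writing a single training manifest. The directory names, labels, CSV columns, and file layout are all assumptions made for illustration, not a description of any particular archive or tool.

```python
# Hypothetical sketch: merge fragmented label files for self-collected images,
# remap an inherited taxonomy to my own, and write one training manifest.
# All paths, label names, and file layouts below are illustrative assumptions.

import csv
import json
from pathlib import Path

# My own taxonomy: remap labels inherited from an external archive
# to terms that actually correspond to what the images show.
RELABEL = {
    "slum": "neighborhood",
    "hut": "family home",
    "tribal market": "market",
}

def load_fragment(csv_path: Path) -> dict[str, str]:
    """Read one fragment of labels (filename -> label) from a CSV file
    assumed to have 'filename' and 'label' columns."""
    with csv_path.open(newline="", encoding="utf-8") as f:
        return {row["filename"]: row["label"] for row in csv.DictReader(f)}

def build_manifest(image_dir: Path, fragment_dir: Path, out_path: Path) -> None:
    """Merge fragmented label files, apply my own taxonomy, and write a
    JSONL manifest of (image path, caption) pairs ready for training."""
    labels: dict[str, str] = {}
    for fragment in sorted(fragment_dir.glob("*.csv")):
        labels.update(load_fragment(fragment))  # later fragments win

    with out_path.open("w", encoding="utf-8") as out:
        for image_path in sorted(image_dir.glob("*.jpg")):
            raw = labels.get(image_path.name, "unlabeled")
            caption = RELABEL.get(raw, raw)  # re-contextualize where needed
            out.write(json.dumps({"image": str(image_path),
                                  "caption": caption}) + "\n")

if __name__ == "__main__":
    # Hypothetical folder layout: my own photographs plus scattered label files.
    build_manifest(Path("my_photos"), Path("label_fragments"),
                   Path("train_manifest.jsonl"))
```

Even this trivial version of the work presupposes exactly what the last item names: the time, the tooling, and the technical fluency to do it.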




Slide 8: 

The theme of the archive, already central to artistic discourse, becomes even more so in the confrontation with AI. My work in AI is ultimately concerned with understanding the state of existing archives and creating my own in response.

To me, archiving from a multitude of perspectives, to counter the prevailing perspective that AI models serve, is a kind of resistance against erasure.




Slide 9: 

The archives that I create are proof that I exist. That I am of this world. That this world is my world too. As a person in the periphery of this technology, I am accustomed to being unseen. When I am seen, I am accustomed to being misrepresented. Too often, I have seen my realities ignored, distorted, or fabricated. These warped realities, often political instruments of exclusion, follow me around like shadows that I can never quite shake off. Still, like anyone, I crave to be visible. I, too, want the world (as narrated by AI) to hold space for my humanity. 




Slide 10: 

Finally, I will briefly address another important tension that I have to navigate when using AI.

The African continent features prominently in the AI value chain: it is where a significant share of the raw materials used to build the hardware that enables AI comes from, where much of the resulting tech waste is disposed of, and, increasingly, where the cheap labour that moderates AI outputs at scale is sourced.

But that’s a whole other topic we don’t have time for in this presentation, so I will leave it at that. 

Thank you.