ONCE UPON A GARDEN: THE COMPLETE ARCHIVE


Once Upon A Garden is a speculative archive of critically endangered and extinct flora of which we have little to no record, and therefore no way to remember. In this project, I use Artificial Intelligence as a time machine, piecing together data about flowers we didn’t bother recording from data that we did record.

2021 – 2024



Once Upon A Garden: The Complete Archive presents 50 emergent species of speculative flora across the five chapters of this body of work. The chapters map onto the chronological progress of the project between 2021 and 2024.
Being able to fill gaps in the world’s collective memory with synthetic memory is a unique opportunity that AI offers today. Between 2021 and now, Once Upon A Garden has speculated on what the flora population in West Africa (where I am from and live) might have looked like decades ago, using increasingly faster and more refined models.

The resulting body of work, while increasingly more photo-realistic as AI became more powerful over the years, cannot be mistaken for reality due to the thin and fragmented data available for training. Global efforts to record disappearing biodiversity have not been consistent across time and geographies. In some places, when interrogating what’s been lost, we have no photographic records and can only rely on herbarium records dating back to colonial expeditions. 

Once Upon A Garden ultimately makes a case for more even efforts in recording biodiversity loss across the globe. It also shows an alternative use of AI, demonstrating how this technology can hold a mirror to what we care to remember and therefore record. Incidentally, this mirror also shows us what gets left behind in the making of an increasingly influential tool in our lives and societies.

There is no longer any doubt about the transformative potential of AI. Yet its proliferation has also uncovered gaping holes in how and where data about humanity is collected, stored, and preserved. AI is only ever as good as the data it’s trained on, and there is a clear unevenness in how much information AI has about non-Western ideas, contexts, people, and environments. This is evident in how common stereotypes and hallucinations are when interrogating generative AI tools today, whether language- or image-based.

Each chapter of Once Upon A Garden starts with the same foundational data: records of extinct and critically endangered flora from the Sahel region of West Africa, as classified by the International Union for Conservation of Nature. This means photographic records from the internet (seldom available), photographic records from physical archives (national archives, family photo books, journals, etc.), herbarium records, encyclopaedic records, and academic records. From these records, individual species are identified and tagged with all available information about them (texts and images).
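The records described above can be imagined as a simple tagged structure. The sketch below is purely illustrative: the field names, values, and shape are assumptions, not the project’s actual schema.

```javascript
// Hypothetical sketch of one species record in the project database.
// All field names and values are illustrative assumptions.
const speciesRecord = {
  id: 'GNS01',                          // species tag used across the archive
  iucnStatus: 'Critically Endangered',  // IUCN Red List classification
  region: 'Sahel, West Africa',
  texts: [                              // textual records from varied sources
    { source: 'herbarium', note: 'Specimen from a colonial-era expedition' },
    { source: 'encyclopaedia', note: 'Brief morphological description' },
  ],
  images: [],                           // photographic records, often empty
};
```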


First Generation_F1-GNS01 (2021–2022)
Second Generation_F1-GNS01 (2022)
Third Generation_F1-GNS01 (2022–2023)
Fourth Generation_F1-GNS01 (2023–2024)
Fifth Generation_F1-GNS01 (2024)



First Generation, 2021–2022


For the first chapter of Once Upon A Garden, I trained a GAN on real and imagined visual records of extinct and critically endangered flora from the Sahel region of West Africa. The ‘real’ visual records were all the available images gathered in the database for this project. The ‘imagined’ records were synthesised with a diffusion model from text records when ‘real’ images weren’t available. Due to the fragmentary records of biodiversity in West Africa, the GAN’s training set thus featured more imagined records than real ones.

I knew that some distortion and lossiness was to be expected from the GAN’s outputs, considering that so much of its data was created by the imagination of another model. I believed, though, that this lossiness provided an apt metaphor for the ongoing disappearance of the natural world, and for how thorough that disappearance was in places where we didn’t care to remember what’s left and what’s been lost.

From this first chapter, individual flower outputs were curated, animated, and composited into a variety of artificial gardens. These gardens were installed at various exhibitions and art fairs around the world, starting as a special project commissioned by ARTX LAGOS in 2022. 



GAN training outputs




The Garden At Dawn, 2021








Second Generation, 2022


Once Upon A Garden’s second chapter begins with the species that emerged from the GAN shortly after training was completed. In this chapter, I attempted to further denoise images of the flowers with a text-to-image diffusion model. Using the text data gathered at the onset of the project as prompts, I endeavoured to bring more life into the GAN outputs.

The spectral suggestions of flowers from the first chapter turned into more recognizable and familiar objects. I realised then that as AI models inevitably evolved, their ability to remember would only get stronger. It felt important to continue exploring how much more refined the synthetic memories of the flowers could be with each milestone in the evolution of models. 




Text-to-Image outputs, 2022



GAN to Text-to-Image pipeline, 2022






Third Generation, 2022–2023

The third chapter of Once Upon A Garden deepens the narrative of loss and memory, featuring the most detailed synthetic flowers up to this point. In this chapter, I compared different diffusion models using a similar approach to the second chapter, starting with text data from the initial database for the project and using the outputs of the GAN to Text-to-Image pipeline as reference images.

The release of Midjourney v6 toward the end of 2023 provided the most detailed outputs, and the most faithful to the initial data. This chapter also introduced a p5 algorithm that obscured the striking details of the outputs, reminding the audience that no matter how detailed the flowers may have seemed, they were still speculative.

Flowers from this chapter were commissioned for a collection by Bright Moments, as well as for an exhibition and collage workshop in partnership with Vlisco.



Text-to-Image outputs, 2022–2023



Text-to-Image (first models) to Text-to-Image (latest models) pipeline, 2023







Fourth Generation, 2023–2024

Once Upon A Garden plays with entropy. Between the first and the third chapters, it is concerned with decreasing entropy: introducing more data, digested by more powerful models, to arrive at a system where flowers have familiar and distinct embodiments.

Starting in the fourth chapter, though, I begin the process of rewilding this body of work and increasing entropy. First, lossiness is added to flowers from the previous chapter by incorporating non-organic materials into their make-up. Second, gardens from the first chapter are re-explored through wilder and more chaotic floral arrangements.

Through an Image-to-Image pipeline, flowers are corrupted. Then, using a p5 collage algorithm, they are randomly picked and placed on a canvas to create wild gardens. These gardens were commissioned by private collectors for large-scale mural installations.
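The random pick-and-place step of such a collage algorithm can be sketched as follows. This is a minimal illustration in plain JavaScript, not the project’s actual p5 code; the function name, parameters, and ranges are all assumptions.

```javascript
// Minimal sketch of a random collage placement, in the spirit of the
// p5 algorithm described above (names and parameter ranges are hypothetical).
// Given a pool of flower image identifiers, pick `count` of them at random
// (with replacement, so species can repeat) and assign each a position,
// scale, and rotation on a canvas of the given dimensions.
function makeCollage(flowers, count, canvasW, canvasH, rng = Math.random) {
  const placements = [];
  for (let i = 0; i < count; i++) {
    const flower = flowers[Math.floor(rng() * flowers.length)];
    placements.push({
      flower,
      x: rng() * canvasW,          // position on the canvas
      y: rng() * canvasH,
      scale: 0.5 + rng() * 1.5,    // 50% to 200% of original size
      rotation: rng() * 2 * Math.PI,
    });
  }
  return placements;
}

// Example: scatter 12 flowers across a 1920x1080 canvas
const garden = makeCollage(['GNS01', 'GNS02', 'GNS03'], 12, 1920, 1080);
console.log(garden.length); // 12
```

In a p5 sketch, each placement would then be drawn with the usual translate/rotate/scale transforms before rendering the flower image.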




Text-to-Image outputs, 2022–2023



Image-to-Image pipeline, 2024 



 Herbarium Annex, GNS015–GNS07, Commissions, 2024






Fifth Generation, 2024

The final chapter of Once Upon A Garden, titled Synthetic Rot, is the culmination of this body of work spanning nearly four years. It traces generative AI models’ evolution over this period and their ability to help us speculate on and synthesise the past. 

This chapter presents 50 stills and 50 videos of randomly assorted flowers, often with several species sharing the same stem, behaving erratically, making and remaking themselves with only chaos as a compass. Each video is accompanied by the reference still image that was used as a keyframe to create it.

Flowers from the previous chapter are made even less organic through an Image-to-Image pipeline. The outputs are then used as keyframes to create animations using Runway’s Gen 3 Alpha. The soundtrack for the videos is created by mixing three types of recorded sounds (nature, machinery, people at work) with desert blues musical tracks generated in Suno AI. 

Synthetic Rot is the wildest stage of this body of work, where flowers are composed of mostly inorganic material, exposing the fact that they are, primordially, data: mediated through our screens, by code, made possible by machinery built from natural resources. This chapter was released in collaboration with Fellowship.














