Once Upon A Garden is a speculative archive of critically endangered and extinct flora of which we have few to no records, and therefore no way to remember. In this project, I use Artificial Intelligence as a time machine: a way to reach back and piece together the flowers we didn't bother recording from the data that we did record.
The resulting body of work, while increasingly photorealistic as AI became more powerful over the years, cannot be mistaken for reality: the data available for training is too thin and fragmented. Global efforts to record disappearing biodiversity have not been consistent across time and geography. In some places, when interrogating what has been lost, we have no photographic records at all and can rely only on herbarium records dating back to colonial expeditions.
There is no longer any doubt about the transformative potential of AI. Yet its proliferation has also uncovered gaping holes in how and where data about humanity is collected, stored, and preserved. AI is only ever as good as the data it is trained on, and there is a clear unevenness in how much information it holds about non-Western ideas, contexts, people, and environments. This is evident in how common stereotypes and hallucinations are when interrogating generative AI tools today, whether language- or image-based.
Each chapter of Once Upon A Garden starts with the same foundational data: records of extinct and critically endangered flora from the Sahel region of West Africa, as classified by the International Union for Conservation of Nature. This means photographic records from the internet (seldom available), photographic records from physical archives (national archives, family photo books, journals, etc.), herbarium records, encyclopaedic records, and academic records. From these records, individual species are identified and tagged with all the available information about them (texts and images).
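As a purely illustrative aside, a tagged record of the kind described above might be structured like the following Python sketch; every field name and example value here is an assumption for illustration, not the project's actual schema.

```python
# Hypothetical sketch of one tagged species record; field names and
# example values are illustrative, not the project's actual data model.
from dataclasses import dataclass, field

@dataclass
class SpeciesRecord:
    scientific_name: str
    iucn_status: str                                      # e.g. "EX" (Extinct), "CR" (Critically Endangered)
    region: str = "Sahel, West Africa"
    photographs: list[str] = field(default_factory=list)  # internet + physical archives (often empty)
    herbarium_sheets: list[str] = field(default_factory=list)
    texts: list[str] = field(default_factory=list)        # encyclopaedic and academic notes

record = SpeciesRecord(
    scientific_name="(species name)",
    iucn_status="CR",
    herbarium_sheets=["scans/colonial_expedition_sheet_04.tif"],
    texts=["Herbarium label transcription...", "Encyclopaedia entry..."],
)
```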
First Generation, 2021–2022
I knew that some distortion and lossiness were to be expected in the GAN's outputs, considering that so much of its training data had been created by the imagination of another model. I believed, though, that this lossiness was an apt metaphor for the ongoing disappearance of the natural world, and for how thorough that disappearance has been in the places where we didn't care to remember what was left and what had been lost.
From this first chapter, individual flower outputs were curated, animated, and composited into a variety of artificial gardens. These gardens have been installed at exhibitions and art fairs around the world, starting with a special project commissioned by ART X Lagos in 2022.
GAN training outputs
Second Generation, 2022
The spectral suggestions of flowers from the first chapter turned into more recognizable, familiar objects. I realised then that, as AI models inevitably evolved, their ability to remember would only get stronger. It felt important to keep exploring how much more refined the synthetic memories of the flowers could become with each milestone in the evolution of these models.
Text-to-Image outputs, 2022
GAN to Text-to-Image pipeline, 2022
Third Generation, 2022–2023
The release of Midjourney v6 toward the end of 2023 produced the most detailed outputs yet, and the most faithful to the initial data. This chapter also introduced a p5 algorithm that partially obscured those striking details, reminding the audience that, no matter how much more refined the flowers seemed, they were still speculative.
Flowers from this chapter were commissioned for a collection by Bright Moments, as well as for an exhibition and collage workshop in partnership with Vlisco.
Text-to-Image outputs, 2022–2023
Text-to-Image (first models) to Text-to-Image (latest models) pipeline, 2023
Fourth Generation, 2023–2024
With the fourth chapter, though, I begin rewilding this body of work and increasing its entropy. First, lossiness is introduced by adding non-organic materials to the flowers' make-up. Second, gardens from the first chapter are re-explored through wilder, more chaotic floral arrangements.
Through an Image-to-Image pipeline, the flowers are corrupted. Then, using a p5 collage algorithm, they are randomly picked and placed on a canvas to create wild gardens. These gardens were commissioned by private collectors for large-scale mural installations.
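The collage step itself runs in p5; purely as an illustration, the same random pick-and-place idea could be sketched in Python with Pillow, assuming a folder of flower cutouts with transparent backgrounds. The paths, canvas size, and scatter parameters below are assumptions, not the project's actual code.

```python
# Minimal sketch of a random pick-and-place collage, assuming RGBA
# flower cutouts; not the project's p5 implementation.
import random
from pathlib import Path
from PIL import Image

def wild_garden(flower_dir, out_path, canvas_size=(4000, 3000), count=120):
    """Scatter randomly chosen flower cutouts across a dark canvas."""
    canvas = Image.new("RGBA", canvas_size, (12, 12, 12, 255))
    cutouts = list(Path(flower_dir).glob("*.png"))  # cutouts with transparent backgrounds
    for _ in range(count):
        flower = Image.open(random.choice(cutouts)).convert("RGBA")
        scale = random.uniform(0.3, 1.2)  # vary size for a sense of depth
        flower = flower.resize((int(flower.width * scale), int(flower.height * scale)))
        flower = flower.rotate(random.uniform(0, 360), expand=True)  # random orientation
        # Let placements overflow the edges so the garden feels unframed.
        x = random.randint(-flower.width // 2, canvas_size[0] - flower.width // 2)
        y = random.randint(-flower.height // 2, canvas_size[1] - flower.height // 2)
        canvas.paste(flower, (x, y), flower)  # use the cutout's alpha as its mask
    canvas.convert("RGB").save(out_path)

wild_garden("flowers/corrupted", "wild_garden.png")
```

Every run produces a different arrangement, which suits the chapter's intent: the entropy is the point.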
Text-to-Image outputs, 2022–2023
Image-to-Image pipeline, 2024
Herbarium Annex, GNS015–GNS07, Commissions, 2024
Fifth Generation, 2024
This chapter presents 50 stills and 50 videos of randomly assorted flowers, often with several different species sharing the same stem, behaving erratically, making and remaking themselves with only chaos as a compass. Each video is accompanied by the reference still image that was used as its keyframe.
Flowers from the previous chapter are made even less organic through an Image-to-Image pipeline. The outputs are then used as keyframes for animations created with Runway's Gen-3 Alpha. The soundtrack for the videos is created by mixing three types of recorded sound (nature, machinery, people at work) with desert-blues tracks generated in Suno AI.
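As a rough sketch of that mixing step (not the project's actual tooling), the layering could look like the following Python example using pydub; the file names, levels, and duration are placeholder assumptions, and the Suno-generated track is assumed to have been exported locally beforehand.

```python
# Hypothetical sketch of layering three field-recording beds under a
# generated desert-blues track; requires pydub and ffmpeg.
from pydub import AudioSegment

def mix_soundtrack(blues_path, nature_path, machinery_path, work_path,
                   duration_ms=60_000):
    """Overlay ambient beds on a musical base track and return the mix."""
    blues = AudioSegment.from_file(blues_path)[:duration_ms]
    # Quieter ambient layers (gain in dB) so the music stays in front.
    nature = AudioSegment.from_file(nature_path)[:duration_ms] - 12
    machinery = AudioSegment.from_file(machinery_path)[:duration_ms] - 18
    work = AudioSegment.from_file(work_path)[:duration_ms] - 15
    return blues.overlay(nature).overlay(machinery).overlay(work)

mix = mix_soundtrack("suno_desert_blues.mp3", "nature.wav",
                     "machinery.wav", "people_at_work.wav")
mix.export("soundtrack.mp3", format="mp3")
```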
Synthetic Rot is the wildest stage of this body of work: flowers composed mostly of inorganic material, exposing the fact that they are, primordially, data, mediated through our screens by code and made possible by machinery built from natural resources. This chapter was released in collaboration with Fellowship.