AI Made Me Do It
Exploring AI in the Ordinary Everyday
/ / on datasets, territories, and parasitic relationships
In our current era, the internet and artificial intelligence (AI) have permeated every facet of our existence, readily accessible at our fingertips. These technologies, ranging from voice-based personal assistants to real-time generative models, have not only mediated but actively shaped our online and offline experiences. This integration of digital intelligence into daily life has transformed our living spaces, personal habits, and even our interactions with devices. It challenges our understanding of vision, perception, space, and expression, introducing ambiguities into activities traditionally associated with human capabilities. Writing, seeing, learning, navigating, interacting, talking, driving, vacuuming, shopping, laboring, and even caring for the elderly are no longer exclusively human endeavors.
However, this technological advancement does not occur in isolation. The European Union's approval of the Artificial Intelligence Act on February 2, 2024 acknowledges the growing need for regulation and ethical oversight in this rapidly evolving field. It coincides with the relentless pace of AI research and deployment by Silicon Valley giants such as Nvidia, Google, and OpenAI, which unveil new AI models monthly and feed on the vast data derived from millions of human interactions and environments to learn how to operate within our physical world.
Our upcoming virtual reality seminar, AI Made Me Do It, aims to peel back the layers of AI's presence in our lives. We will explore the datasets that power these technologies, examining their origins and the often-overlooked human labor that goes into their formation. We will ask what resources AI systems require: how many home scans a robotic vacuum needs to operate efficiently, what data a self-driving car needs to navigate urban environments, or what Ray-Ban smart glasses gather from their wearers' gaze in ordinary everyday life.
The seminar will also examine the human and environmental costs of training AI systems. We will investigate the human labor behind these datasets, the territories from which data and raw materials are extracted, and the natural resources consumed. By examining datasets related to human habitats and everyday objects, we aim to understand how these elements are transformed into digital data and how that transformation affects our interactions with the world.
As architects and designers of future socio-spatial domains, we must confront crucial questions:
What are the sociocultural and ecological implications of AI? How do art and architecture integrate AI, considering these broader impacts?
We aim to approach AI models with a critical and cultural lens, uncovering the stories and realities behind digital intelligences. Participants will tell the stories of their research through short films, using spatial mediums and game engines, while gaining hands-on experience with various AI models: text-to-video, text-to-3D, text-to-speech, image-to-3D, image remixing, image-to-video, and real-time diffusion conversion.
Join us in this exploration to develop a more nuanced understanding of AI, uncovering the layers of artificial intelligence and their impact on the fabric of spatial human experience.
Artistic References:
Simone C Niquille ‘HOMESCHOOL’
https://www.technofle.sh/hs/homeschool.php
Trevor Paglen ‘ImageNet Roulette’
Nicolas Gourault ‘Unknown Label’
https://emare.eu/works/unknown-label
Lauren Lee McCarthy ‘I-A-Suzie’ and ‘SOMEONE’
https://lauren-mccarthy.com/I-A-Suzie
Input Literature:
Vladan Joler and Matteo Pasquinelli ‘The Nooscope Manifested’
Kate Crawford ‘Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence’
TEACHING OBJECTIVES
+ Exploring large datasets
+ Getting familiar with a diverse range of generative AI models, encompassing text-to-3D, image-to-3D, image remixing, text-to-video, image-to-video, real-time diffusion conversion, text-to-speech, and more (a minimal example sketch follows this list)
+ Gaining proficiency in the Unity game engine
+ Developing skills to turn research into storytelling and short films
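As a taste of the hands-on objectives above, the sketch below shows one way the "image remixing" exercise might look in practice. The brief does not prescribe a specific toolchain, so this is only an assumption: it uses the open-source Hugging Face diffusers library with a public Stable Diffusion checkpoint, and the input file name and prompt are hypothetical.

```python
# A minimal, illustrative sketch (not the seminar's prescribed toolchain):
# "image remixing" via an image-to-image diffusion pipeline, assuming the
# Hugging Face diffusers library and a public Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; others work too
    torch_dtype=torch.float16,
).to("cuda")

# "living_room.jpg" is a hypothetical source image supplied by a participant.
init_image = Image.open("living_room.jpg").convert("RGB").resize((768, 512))

result = pipe(
    prompt="the same room as mapped by a robotic vacuum's sensors",
    image=init_image,
    strength=0.6,  # 0 = return the source image unchanged, 1 = ignore it
).images[0]
result.save("remix.png")
```

Participants would swap in their own source images and prompts; analogous open-source pipelines exist for the text-to-video and image-to-video families listed above.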
METHOD
+ research through reading, film-making, and discussion together
+ research through hands-on approaches (research in practice) to the different topics, including workshops
TIME SCHEDULE
First Meeting Tue 05.03.2024 // 13:00
Introduction