Today, generative machine learning models help us create music, text, images, videos, and even 3D models. Once confined to research labs and large technology companies, these systems are now part of our everyday tools, making creation more accessible.
But these tools can be hard to control. Whether we type commands or click through a screen, it is not always easy to achieve what we want, especially for tasks that demand precision.
Many interfaces take text as input, requiring users to describe what they want in natural language. Depending on the type of content being created, our ideas are not always easy to put into words, and language is not necessarily the most effective way to communicate what we have in mind.
Latent Organism proposes a novel technique for designing 3D objects through a tangible interface. Our artifact uses generative algorithms to let anyone produce unique and complex 3D shapes through natural, playful interactions with an intuitive, pressure-sensitive tactile interface. This process of co-creation between human and machine, through balanced interaction, is a method for appropriating artificial imagination. For this experiment, the machine was fed thousands of photos of living organisms. Its capacity for learning and generalization allows it to converge towards an abstract conception of organic flesh. Navigation then becomes possible in this continuous space of potentiality. The spectator is in control, using the machine's imagination as clay, a moldable material.
The way we create 3D shapes has been transformed: we can now call on AI-powered tools to help us invent forms. This is a significant shift, because we're no longer the sole craftsmen; we've become co-creators with AI.
Typically, this co-creation process involves a back-and-forth between the AI, which generates propositions, and the user, who approves, modifies, and responds to these suggestions. The human no longer directly controls the gesture that creates the shape, which is why the interface to these new tools plays a crucial role in how we gradually explore the machine's imagination. AI introduces new levels of complexity, and these tools vary significantly in how much control they offer the user.
Latent Organism consists of an e-textile-covered controller driving AI-generated 3D visuals. A handcrafted embroidery, created from silicone and fabric, is embedded with ten piezoresistive sensors. These sensors, made from fabric, conductive paint, and Velostat, are arranged in a 2x2 pressure matrix and are integrated into the structure of a bean bag chair.
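As an illustration, the sketch below shows one way the host application might poll such a sensor array. It assumes, hypothetically, that a microcontroller streams one line of comma-separated ADC readings per frame over a serial port; the port name, baud rate, and normalization constants are placeholders for the example, not details of the actual installation.

```python
# Minimal host-side sketch (not the installation's code): a microcontroller is
# assumed to stream one line of comma-separated 10-bit ADC readings per frame.
import serial  # pyserial

NUM_SENSORS = 10
ADC_MAX = 1023  # assuming a 10-bit ADC

def read_pressures(port: serial.Serial) -> list[float] | None:
    """Read one frame and normalize each sensor to [0, 1]."""
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        return None  # read timed out; no frame this tick
    raw = [int(v) for v in line.split(",")]
    if len(raw) != NUM_SENSORS:
        return None  # malformed frame; skip it
    # Piezoresistive sensors drop in resistance under pressure, so higher
    # ADC values are read here as stronger pressure.
    return [min(max(v / ADC_MAX, 0.0), 1.0) for v in raw]

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        while True:
            frame = read_pressures(port)
            if frame is not None:
                print(frame)
```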
The generative aspect of our project is built on Dream Fields, a model trained to link text and 3D: given a text input, it generates a corresponding 3D shape and texture. The process begins with several textual prompts produced by an embedding model trained on the Encyclopedia of Life. These prompts are used as inputs to Dream Fields, which creates 3D meshes from them. For each generated mesh, a blendshape is computed, allowing a seamless transition between the initial shape and the new one. These blendshapes are controlled by the sensors of the artwork's physical interface, responding to user interactions. Multiple blendshapes can be cross-faded simultaneously, adding an extra dimension of complexity to the mesh deformation.
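The cross-fade itself reduces to a weighted sum of vertex offsets. The following minimal sketch, assuming all generated meshes share a common vertex count and ordering (which the seamless transitions imply), illustrates the idea; the function and array names are ours, not the artwork's code.

```python
import numpy as np

def blend(base: np.ndarray, targets: list[np.ndarray],
          weights: list[float]) -> np.ndarray:
    """Deform a base mesh (V, 3) toward several target meshes at once.

    Each blendshape contributes its vertex offsets scaled by a weight,
    which in the installation would come from a pressure sensor.
    """
    out = base.copy()
    for target, w in zip(targets, weights):
        out += w * (target - base)  # classic additive blendshape formula
    return out

# Example: cross-fade two generated meshes simultaneously.
base = np.zeros((4, 3))
mesh_a = np.ones((4, 3))
mesh_b = -np.ones((4, 3))
print(blend(base, [mesh_a, mesh_b], [0.7, 0.2]))
```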
At regular intervals, a feedback mechanism updates the prompts based on the user's choices, while maintaining some randomness to avoid repetitive patterns.
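A sketch of what such a refresh policy could look like follows; the usage-based scoring, the exploration rate, and all names are illustrative assumptions rather than the installation's actual logic.

```python
import random

EXPLORE_RATE = 0.3  # fraction of slots refilled at random (assumed value)

def refresh_prompts(active: list[str], usage: dict[str, float],
                    pool: list[str], k: int) -> list[str]:
    """Choose the next k active prompts from interaction statistics.

    `usage` maps each prompt to how much pressure its blendshape received;
    `pool` is assumed large enough to always fill the remaining slots.
    """
    # Keep the prompts the user engaged with most...
    ranked = sorted(active, key=lambda p: usage.get(p, 0.0), reverse=True)
    n_keep = max(0, k - int(k * EXPLORE_RATE))
    nxt = ranked[:n_keep]
    # ...and top up with random draws to avoid repetitive patterns.
    candidates = [p for p in pool if p not in nxt]
    nxt += random.sample(candidates, k - len(nxt))
    return nxt
```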
The first prototype of the installation was developed in 2021 in collaboration with Marianne Canu and Sophie Chen during the creARTathon workshop organized by Inria, the University of Paris-Saclay, and the association Societies. Latent Organism was awarded second prize at the creARTathon and was showcased at ENS Paris-Saclay and Gallery Joseph, Paris. The event was supported by the French Ministry of Culture.
The second iteration of the artwork was created for the Extended Senses & Embodying Technology Symposium 2022, supported by the X10DD Senses Laboratory research group at the University for the Creative Arts and the School of Design at the University of Greenwich, London. It was exhibited at Stephen Lawrence Gallery, London.
We thank Wendy Mackay for her valuable feedback, which improved the quality of the paper.