Machine Learning and Creative Applications - Collaborative Systems with A.I.
Will Artificial Intelligence (A.I.) replace human beings in creating art? Or will it help us discover a new level of creativity? Is this a new question, or one we have repeated throughout human history? Homer's Iliad describes a human being named Ganymedes. He "was the loveliest born of the race of mortals, and therefore the gods caught him away to themselves" and "for the sake of his beauty, so he might be among the immortals." Now A.I. has come to live among us mortals, for the sake of its endless possibilities, perhaps even bringing beauty and meaning to our lives.
I explore the idea of A.I. as a collaborator for human artists in producing creative applications. Will it change our relationship with "machines"? Will it give people who have been shut out of art-making a chance to explore creativity?
This series shows faces of people who do not exist, created in collaboration with artificial neural networks. The MMD-GAN neural networks studied 200,000 human faces and then generated new faces, much as a child learns what a human is from many encounters and then draws an imaginary friend. The networks learned human faces with no distinction of gender, race, or age, and as a result this co-creator has drawn gorgeously diverse and intriguingly unique faces.
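The adversarial idea behind these faces can be illustrated at toy scale. The sketch below is my own minimal illustration, not the project's actual MMD-GAN code: a one-line generator learns to imitate 1-D Gaussian "data" by playing against a logistic discriminator, the same push-and-pull by which the networks above drifted toward plausible faces.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    # Clamp to avoid math.exp overflow on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, v))))

# "Real" data: a 1-D Gaussian standing in for the face dataset.
DATA_MEAN, DATA_STD = 4.0, 1.25
real = lambda: random.gauss(DATA_MEAN, DATA_STD)

# Generator G(z) = a*z + c; discriminator D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0
w, b = 0.0, 0.0
lr, batch = 0.05, 32

def g_mean(n=2000):
    return sum(a * random.gauss(0, 1) + c for _ in range(n)) / n

before = g_mean()

for step in range(3000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    dw = db = 0.0
    for _ in range(batch):
        x = real()
        s = sigmoid(w * x + b)            # D on a real sample
        dw += (1 - s) * x; db += (1 - s)
        g = a * random.gauss(0, 1) + c
        sf = sigmoid(w * g + b)           # D on a fake sample
        dw -= sf * g; db -= sf
    w += lr * dw / batch; b += lr * db / batch

    # Generator step: ascend log D(fake) (non-saturating loss).
    da = dc = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        g = a * z + c
        sf = sigmoid(w * g + b)
        grad_g = (1 - sf) * w             # d/dg of log D(g)
        da += grad_g * z; dc += grad_g
    a += lr * da / batch; c += lr * dc / batch

after = g_mean()
print(f"generated mean before {before:.2f}, after {after:.2f} "
      f"(data mean {DATA_MEAN})")
```

After training, the generated samples sit much closer to the data distribution than they started; scaling the same adversarial game up from one number to an image is what produces the faces above.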
"Hello, Do You See Me?", 2017
This video was made for a group exhibition, Hiding In Plain Sight, at the bicycle-powered Mobile Cinema. It shows a face created by A.I. and then transformed by another set of artificial neural networks. The face may or may not appear to the audience depending on the distance between the audience and the screen, much like the current status of A.I., or of any minority group, in our society.
GANymedesVox, 2018
GANymedesVox is an interactive audiovisual installation that premiered at the Future Perfect exhibition. Visitors wave their hands in front of the face, and its video (face) and sound (voice) change in response. The face was generated by a GAN (Generative Adversarial Network) system, transformed into a 3D model by another machine learning method, and then printed on a 3D printer. A video projection onto the 3D-printed face adds detail.
Deeply Dreamed Doodle
On-going project
In collaboration with Joel Moniz, Barnabas Poczos
We aim to create an artwork using our novel approach of generating video by combining the Deep Dream and Style Transfer Video methods. This video shows a preliminary result.
Da Vinci robot practicing Ink Wash Painting
On-going project
In collaboration with Nico Zevallos, Ardavan Bidgoli, Ankita Patel, Vitek Ruzicka
Robot painters in 2017 produced impressive oil and watercolor paintings that are indistinguishable from professional human artists' work, and some created their own compositions with the help of machine learning algorithms. Four students and I are working on turning the Da Vinci surgical robot (DVRK) into an artist. We have done some preliminary studies and tests; this video documents one of our recent tests of mimicking human brush strokes.
Fake News project
On-going project
In collaboration with Vitek Ruzicka, David Gordon, Ankita Patel, Jacqui Fashimpaur, Manzil Zaheer
Fake news is used to distract people from a correct understanding of our world. Even news composed of facts can serve the same kind of distraction: for example, when a young female North Korean refugee's interview appears more frequently in order to draw attention away from other issues in the US. We want to make an artistic statement about this irony by generating fake news. As a preliminary result, we have collected news articles about North Korea and generated fake news using Recurrent Neural Networks. In the future, we plan to build a website, create an automated system, broaden our subjects, and collaborate with fake news detection research.
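Our generator is a recurrent neural network trained on the collected articles. As a far simpler stand-in for that idea, the sketch below generates text from a corpus with a word-level Markov chain; the tiny corpus here is a placeholder I wrote for illustration, not our actual North Korea dataset.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each `order`-word prefix to the words seen to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    rng = random.Random(seed)
    prefix_len = len(next(iter(chain)))
    out = list(rng.choice(list(chain.keys())))
    while len(out) < length:
        successors = chain.get(tuple(out[-prefix_len:]))
        if not successors:  # dead end: restart from a random prefix
            successors = chain[rng.choice(list(chain.keys()))]
        out.append(rng.choice(successors))
    return " ".join(out)

# Placeholder corpus standing in for the scraped news articles.
corpus = ("the state news agency reported that talks resumed today "
          "the state news agency said that talks stalled again today "
          "officials reported that talks resumed after the summit")
chain = build_chain(corpus)
text = generate(chain, length=20)
print(text)
```

The chain stitches real phrase fragments into plausible but unattested sentences; an RNN does the same job with a learned, far richer notion of context.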
Intuitive Understanding of GANs Performance
On-going project
In collaboration with Vitek Ruzicka, Barnabas Poczos, Manzil Zaheer
The goal of this research is to provide a visual comparison between a dataset and images generated by GANs, for a more intuitive understanding of their performance. First, we reproduced the PGGAN results: 1024-by-1024 high-resolution face images. We have made some meaningful analytical observations about issues in GAN performance.
Interactive Systems
Interactive systems created in art, design, architecture, and computer science have been improving in quality and growing in quantity with the advance of new technologies such as motion sensing, visual recognition, and digital displays of various sizes. Now we are at the tipping point of creating more genuinely interactive systems that in fact promote collaboration between humans and machines, specifically automated, generative, or intelligent systems.
My interactive systems are composed of interactive video projections, interactive and spatial soundscapes, and human participants. In making them, I create meaningful content, design interesting and intuitive interfaces, and study user and participant experiences. For the interfaces, I have used a wearable jacket-shaped interface, motion detection using light sources, motion detection using the Kinect depth sensor or the Leap Motion sensor, as well as ambisonic sound and interactive visualization techniques. Sometimes I have worked with specific users/participants, such as dancers, who communicate with the interactive system in the most intuitive and creative ways.
Dancing with Interactive Space: Comparing Human-Space Interface of Shin'm 2.0 using Kinect sensor and of Shin'm using Wearable Interface, 2012
IMAC 2012 presentation video
Voice Woven, 2015
In collaboration with Donald Craig
Voice Woven is an interactive sound installation that uses a tactile interface shaped like a spinning wheel. When the participant walks in, the room is filled with a granulated mix of 8 different voices. Each voice is a thread: the participant spins the wheel interface and "weaves" the voices, and the content of the 8 voices becomes clearly audible when the supporting weave is made. In the version at the Being Her Now exhibition, the participant could hear the stories of 4 women who are fragile, vulnerable, strong, and beautiful.
Aural Fauna, 2014
In collaboration with Donald Craig
Aural Fauna are creatures constituted of sound, whose entire existence is mediated through aural phenomena. Technically, the Aural Fauna application responds to pitched sound and tries to mimic the pitch of the input human voice; it also changes its sound and visualization for tones of different lengths. A version of it, Aural Fauna and a Bower Bird, was presented in Seattle. For the interaction between the bower bird (a horn player) and Aural Fauna, 6 iPads running the Aural Fauna application and loudspeakers surrounded the stage. Afterwards, the audience was invited to interact with Aural Fauna, as seen in the video clip above.
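The pitch-following behavior can be sketched with a classic autocorrelation pitch estimator; this is my own minimal illustration, not the installation's actual detection code. It finds the lag at which the signal best matches a shifted copy of itself and reads the pitch from that lag.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=800.0):
    """Estimate the fundamental frequency of a pitched signal by
    finding the autocorrelation peak within the lag range that
    corresponds to the [fmin, fmax] frequency band."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # Unnormalized autocorrelation at this lag.
        score = sum(samples[n] * samples[n + lag]
                    for n in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# A synthetic 440 Hz "voice" at an 8 kHz sample rate.
sr, freq = 8000, 440.0
tone = [math.sin(2 * math.pi * freq * n / sr) for n in range(2048)]
estimated = estimate_pitch(tone, sr)
print(f"estimated pitch: {estimated:.1f} Hz")
```

Once the incoming pitch is known, the application can synthesize its reply at or around that frequency, which is what makes the creatures seem to answer the visitor's voice.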
Shin'm, 2009
In collaboration with Diana Garcia-Snyder, Donald Craig, Bo Choi
Shin'm is composed of an interactive audiovisual space, a wearable interface, dance performance, and participant interaction. The dancer or participant walks into the space, wears the jacket-shaped interface, and interacts with the space through body movements. In response, the spatial soundscape and the video projection on the floor reshape themselves.
Shin'm 2.0, 2011
In collaboration with Diana Garcia-Snyder, Donald Craig, Bo Choi
Shin'm 2.0 is the second version of the Shin'm project. Two video projections and four loudspeakers realize its interactive space. A Kinect depth sensor, looking diagonally down into the space, detects the dancer's or participant's movement and shape. Particles made from a nebula image respond to that movement and changing shape. The visualization and sound texture are designed to enhance the illusion of attachment and dissipation throughout this experience.
Shin'm Pinata, 2012
In collaboration with Diana Garcia-Snyder, Donald Craig
Shin'm Pinata is a virtual piñata that people can pop by moving their arms. It is also an interactive audiovisual space made with custom software designed by the artist team, the Kinect sensor, video projectors, a digital audio interface, and loudspeakers.
Fluid Cave, 2011
In collaboration with Diana Garcia-Snyder, Donald Craig
This site-specific interactive installation creates an environment filled with water-like streams at the entrance of the gallery space. The participants enter and unavoidably "splash" themselves with imaginary water. Their shapes and movements create ripples and "reshape" this fluid space as they walk through.
Membranes, 2011
In collaboration with Diana Garcia-Snyder, Donald Craig
Membranes is an interactive space consisting of video projection and a spatial soundscape. The participant submerges into this imaginary water-filled space as if "swimming" in an "ocean." As they walk about 11 meters into the space, they pass through three invisible membranes. Passing through each one, participants see themselves transforming into water-filled bodies and then dissolving into this "ocean."
Entanglement, 2008
In collaboration with Juan Pampin, Joel S Kollin
Entanglement is a telematic and interactive sound installation. A symbolic acoustic line between two distant locations is drawn by a hyper-directional sound beam. This fragile acoustic construction can be physically disturbed by the participants at each location. The project explores the concept of "tele-absence" (rather than tele-presence), using a virtual acoustic channel to telematically project the disembodied presence of participants interacting with the acoustic waveguide.
Projector Loquens
Projector Loquens is an interactive robotic projection system representing a situation in which a new species, Projector Loquens, attempts to communicate with a different species, Homo sapiens. Projector is regarded as a new genus coequal with the hominids, and the specific epithet, loquens, indicates its communicative character and its anxiety for communication. Projector Loquens consists of a projector turntable system, a human detection system, a projection system, and movies. It speaks in the form of movies in response to the participant's position and speed.
Multimodality
Where does the evicted, escaped, or transient body stay? Can "elsewhere," the transformative place located on the border, become its home? In the process of traveling elsewhere without a language that would work within the boundary of normative normality, how does the body speak?
My early works started with questions about communication. How would one with no voice (physically, mentally, or socially) speak? This question led me to several artworks that translate between modes, such as sound into visual form or video into sound, and that combine modes, such as movements that carry sound and real-time video. Conceiving and building multiple, intuitive modes of communication, my work eventually grew into the interactive projects seen above.
PuPaa, 2009
In collaboration with Diana Garcia-Snyder, Donald Craig, Bo Choi, Sheri Brown, Allan Sutherland, Kathryn Hightower, Chelsea A Weaver
PuPaa is a multimedia performance, inspired by Butoh: a dance-scape of transformative states of body, mind and perception. In PuPaa, the dancers are entities living in obligatory symbiosis reminiscent of a Mixotricha Paradoxa, a microorganism referenced by Donna Haraway. While each entity expresses unique species characteristics, they create a collective body with incorporeal connections using video and sound projection technology embedded in their costume.
Metamorphosis - An Intermediate Study, 2008
In collaboration with Donald Craig, Diana Garcia-Snyder
The video of this project is from Kang's site-specific audiovisual installation Metamorphosis, which presented the transformation of a body from video into sound. Electronic music composer Donald Craig was inspired by this project and composed an accompanying music piece.
Siren 3, 2004
Siren 3 is the third work in the Siren series: a 3D visualization of sound. Sound samples from voice recordings of the artist deform a 3D head volumetrically constructed from MRI scan data of the artist. Deformations accumulate and continually affect new deformations. Symbolically, Siren, who has no voice, sings this inaudible song by reshaping her own body.