Light Leaks
Weir, Catherine (2022) Light Leaks. [Artefact]
Creators/Authors: Weir, Catherine
Abstract: 'Light Leaks' is a three-part projected work in which an image recognition system attempts to identify patterns in a series of visual artefacts extracted from one of the artist’s early attempts to train a custom StyleGAN model using a dataset of their own photographs. In photographic practice, visual artefacts such as lens flares and light leaks are considered by many photographers to be defects that draw unwanted attention to the apparatus of photography. In 'Light Leaks', the visual artefacts generated by StyleGAN – which are themselves visually reminiscent of the damage inflicted on chemical film when a camera is not fully light-tight – are similarly understood as drawing attention to the apparatus of contemporary machine learning systems. Rather than treating these artefacts as defects, however, 'Light Leaks' employs them as a means to facilitate critical reflection on these systems without reference to iconic images: a process which Lyle Rexer, writing on abstract photography, terms ‘looking with’ as opposed to ‘looking at’. Viewed from left to right, the first panel of the projection displays one of the visual artefacts, or light leaks, in isolation on a black background. The middle panel displays a description of the image automatically generated by im2txt, a machine learning model designed to caption photographic images, trained on the widely used Common Objects in Context (COCO) dataset. The final panel displays an image generated by the AttnGAN image generation model using the text from the middle panel as a prompt. The most common captions generated by the system describe illuminated objects – ‘a traffic light is lit up at night’, ‘a view of a clock tower in the dark’ – but over time, less obvious descriptions begin to appear – ‘a person holding an umbrella’, ‘a black and white photo of a person on a snowboard’.
Through these descriptions, viewers are afforded an insight into the eighty object categories that make up the COCO dataset, which is known for its bias towards technological objects. In this way, 'Light Leaks' aims to draw viewers’ attention to the problems that arise in any attempt to exhaust images with words, and to the larger question of who gets to decide the categories employed by contemporary image recognition systems.
Output Type: Artefact
Uncontrolled Keywords: artificial intelligence, machine learning, image recognition, digital photography, Processing, RunwayML
Media of Output: Custom software program; made with Processing and RunwayML
Schools and Departments: School of Design > Interaction Design
Output ID: 9256
Deposited By: Catherine Weir
Deposited On: 27 Feb 2024 16:22
Last Modified: 28 Feb 2024 11:26