Organised by Arebyte Gallery
Please note that this is a two-part workshop series with sessions taking place on Saturday 11 and 18 May from 10am - 12pm. One ticket covers both sessions.
With the recent emergence of image-generating AI systems such as DALL-E, Stable Diffusion and Midjourney, the task of generating images with computers has been remodelled as a text-based process known as 'prompting'.
Prompting involves instructing a pre-trained AI model to hallucinate images in reaction to the specifics of text-based inputs. Prompts dictate image style, composition and genre, to the extent that images generated using these systems often appear as weird pastiches, with giveaway aesthetic signals - 8-fingered hands, for example - that point to their artificial origins.
Workshop participants will experiment with finding the cracks in these systems, using creative prompting to explore whether they really are the dawn of a new horizon, offering the potential for breakthroughs in digital image-making beyond the mere offsetting of human labour.
Workshop 1. Introduction to Stable Diffusion: Text to Image
Date: May 11th 2024, 10am - 12pm
This workshop will provide an introduction to the AUTOMATIC1111 web interface for Stable Diffusion. Participants will explore how the software's wide variety of features shapes images generated through text-based prompts.
Workshop 2. Introduction to Stable Diffusion: Image to Image
Date: May 18th 2024, 10am - 12pm
Building on the introduction to the AUTOMATIC1111 interface covered in the first session, this workshop will explore how Stable Diffusion's img2img functionality can be used as a collaborative tool to reimagine image-based practice.
Requirements:
A laptop
Prior to the session, please create an account with rundiffusion.com and add $5 credit to the account.