SUPERPAINT | 2023-02
SUPERPAINT
SUPERPAINT is an experimental prototype for mobile that explores the creative potential at the intersection of human input and img2img machine learning algorithms. A deliberately simple pixel painting interface with an additional webcam feed is paired with a prompt interface that connects to a local Stable Diffusion API instance. Working with SUPERPAINT can be understood as a multilayered process: the image the machine returns after processing the prompted input can in turn be "overpainted" and "overprompted" by the human, either to make the result more precise or simply to wander around creatively.
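The round trip to the local Stable Diffusion instance can be sketched as building a JSON payload from the painted canvas and the text prompt. This is a minimal sketch: the field names (`init_images`, `prompt`, `denoising_strength`) and the endpoint path follow the AUTOMATIC1111-style web UI API and are assumptions here, since the text only mentions "a local Stable Diffusion API instance".

```python
import base64
import json

def build_img2img_payload(image_bytes: bytes, prompt: str,
                          denoising_strength: float = 0.6) -> dict:
    """Wrap the painted canvas and text prompt into a JSON-ready payload.

    Field names follow the AUTOMATIC1111-style web UI API; a different
    local Stable Diffusion server may expect other keys.
    """
    return {
        # the canvas image, base64-encoded as the API expects
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        # 0.0 keeps the painted image untouched, 1.0 ignores it entirely
        "denoising_strength": denoising_strength,
    }

# The mobile client would POST this as JSON to something like
# http://localhost:7860/sdapi/v1/img2img (endpoint path is an assumption).
payload = build_img2img_payload(b"\x89PNG...", "a watercolor landscape")
body = json.dumps(payload)
```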

Designed for an iterative creative process
SUPERPAINT is designed with a creative process in mind. As the user delivers input to the machine via drawing, photo, or prompting, each synthesized result is another step towards a new level of visualization. Since the process involves a lot of unpredictability, a tool like this is best used during the creative brainstorming phase of an idea or project.


SUPERPAINT: taking a basic picture as a start

SUPERPAINT: overpaint the photo with onboard brushes

SUPERPAINT: enter prompt and weights to send to SD

SUPERPAINT: SD delivers a corresponding result

SUPERPAINT: overpaint the SD result, also using color picks

SUPERPAINT: revisiting another layer of SD synthesis

SUPERPAINT: overpaint with additional elements and describe them in the prompt too

SUPERPAINT: some kind of final result
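The step-by-step walkthrough above can be sketched as an iterative loop that alternates human overpainting with machine synthesis. The function names `paint_over` and `synthesize` are hypothetical stand-ins for the user's brush strokes and the SD img2img call, not part of the actual prototype:

```python
def overpaint_loop(canvas, prompts, synthesize, paint_over):
    """Alternate human overpainting and machine synthesis.

    `paint_over(image)` and `synthesize(image, prompt)` are hypothetical
    stand-ins for the user's brush strokes and the SD img2img request.
    """
    history = [canvas]
    for prompt in prompts:
        canvas = paint_over(canvas)          # human: overpaint the current layer
        canvas = synthesize(canvas, prompt)  # machine: img2img pass with the prompt
        history.append(canvas)               # keep every layer for revisiting
    return history
```

Keeping the full `history` mirrors the "revisiting another layer" step: any earlier synthesis can be picked up again as the new starting canvas.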
The machine as creative sparring partner
Although current img2img algorithms are quite capable of "recognizing" aspects of meaning in a visual prompt, in direct dialog it is quite a challenge to make the machine cooperate the way a simple human-to-human dialog would. The potential of these early versions clearly lies in the creative visual journey and dialog between human and machine.

Simple img2img transformation
The Stable Diffusion API comes with a function that mixes a text prompt with a specific visual prompt (an image) into a newly synthesized result. This can be used to quickly visualize ideas or simply to play creatively with existing visuals and material.
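How strongly the result leans towards the image versus the text can typically be steered with a strength parameter. A small sketch, assuming AUTOMATIC1111-style field names, that builds one request per strength value to explore that mix:

```python
def strength_sweep(image_b64: str, prompt: str,
                   strengths=(0.3, 0.5, 0.7)) -> list[dict]:
    """Build one img2img payload per denoising strength.

    Lower values stay close to the input image; higher values follow the
    text prompt more freely. Field names are assumptions
    (AUTOMATIC1111-style web UI API).
    """
    return [
        {
            "init_images": [image_b64],
            "prompt": prompt,
            "denoising_strength": s,
        }
        for s in strengths
    ]
```

Sending such a sweep and comparing the results side by side is one quick way to "creatively play" with existing material.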
