Inspiration
With powerful, collaborative tools like Figma, designers have become more productive than ever. Yet they still constantly have to create and search for visual assets as they craft software designs: illustrations, icons, stock photos, backgrounds, and other visuals that communicate their design intent to developers and other stakeholders. To find these assets, they currently rely on a patchwork of resources, including stock photo sites, Google Image Search, screenshots of other websites, and assets crafted by hand in tools like Photoshop. This process is tedious, not much fun, and takes them out of their preferred design environment, Figma. Though it seems obvious to us, designers today aren't using AI image generation tools as they design, because it's quite difficult to get the precise results a designer needs from existing AI image interfaces. The generated images feel too random for designers to rely on to save time; in short, prompt engineering seems too complex and technical for designers to adopt these tools. We see this as a clear opportunity to make high-quality image generation effortless for designers.
What it does
Conjure.ai brings the power and creativity of AI-generated digital assets directly into Figma through a UX that is tailor-made for designers. They don't have to worry about complex prompt engineering: Conjure.ai's guided UX lets them simply state what they're trying to create (a background, icon, or illustration) and add a few keywords describing their aesthetic goal to get high-quality assets directly in Figma.
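The guided UX described above can be thought of as mapping structured designer input onto a full text-to-image prompt. The sketch below is our own illustration of that idea, not Conjure.ai's actual code; the asset kinds and style templates are hypothetical.

```typescript
// Hypothetical sketch: a guided UX turns an asset type plus a few
// designer keywords into a complete text-to-image prompt, hiding the
// prompt engineering from the designer.

type AssetKind = "background" | "icon" | "illustration";

// Style boilerplate per asset type (illustrative values, not the real ones).
const KIND_TEMPLATES: Record<AssetKind, string> = {
  background: "a seamless background texture, high resolution",
  icon: "a simple flat vector icon, centered, plain background",
  illustration: "a clean digital illustration, soft colors",
};

function buildPrompt(kind: AssetKind, keywords: string[]): string {
  // The designer's keywords become the subject; the template supplies
  // everything a raw text-to-image API would otherwise need spelled out.
  const subject = keywords.map((k) => k.trim()).filter(Boolean).join(", ");
  return `${subject}, ${KIND_TEMPLATES[kind]}`;
}
```

With this shape, a designer who picks "icon" and types "blue, rocket" never sees the full prompt the backend receives.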
How we built it
Under the hood, Conjure.ai is three systems working in tandem: a React application running inside the Figma plugin iframe, a TypeScript app running as the Figma document controller, and a Python backend that integrates with Cohere's Stable Diffusion API.
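A rough sketch of how those pieces talk to each other, under the standard Figma plugin model (the message shapes, `FigmaLike` interface, and function names are our own assumptions, not the real Conjure.ai code). The React iframe posts messages to the controller via `parent.postMessage({ pluginMessage: ... }, "*")`, and the controller reacts to them; injecting the small slice of the Figma API the handler needs keeps the routing logic testable outside Figma.

```typescript
// Hypothetical messages the React UI sends to the document controller.
type UiMessage =
  | { type: "generate"; prompt: string }        // forward prompt to backend
  | { type: "insert-image"; bytes: Uint8Array } // place generated PNG bytes

// Thin abstraction over the two Figma calls the handler needs, so the
// logic below can run (and be tested) without the real `figma` global.
interface FigmaLike {
  createImageHash(bytes: Uint8Array): string; // wraps figma.createImage(bytes).hash
  createRect(): { fills: unknown };           // wraps figma.createRectangle()
}

function handleMessage(msg: UiMessage, api: FigmaLike): string | null {
  switch (msg.type) {
    case "generate":
      // In a real plugin the network round trip to the Python backend is
      // typically made from the UI iframe rather than here; nothing for
      // the controller to do yet.
      return null;
    case "insert-image": {
      // Turn raw bytes into a Figma image and paint it onto a rectangle.
      const hash = api.createImageHash(msg.bytes);
      const rect = api.createRect();
      rect.fills = [{ type: "IMAGE", scaleMode: "FILL", imageHash: hash }];
      return hash;
    }
  }
}
```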
Challenges we ran into
The main challenge we faced was defining a user experience that provides the right level of abstraction: letting designers generate the assets they need without worrying about the technical details of prompt engineering or the relatively rough UX of a raw text-to-image API. We also worked through some tricky issues around image compression and data-format translation between the Figma React app and the Figma document controller, made sure we could handle multiple requests in rapid succession so users can generate several images in a row, and kept the UI slick enough that designers would feel at home using it.
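One concrete instance of that data-format translation, sketched under our own assumptions rather than taken from the plugin's source: a backend typically returns the generated PNG as base64 text, while `figma.createImage()` wants a `Uint8Array`, and the controller sandbox does not provide browser helpers like `atob()`, so a small hand-rolled decoder works in either context.

```typescript
// Decode standard base64 into raw bytes without relying on atob()
// or Buffer, so the same function runs in the plugin iframe and the
// document controller sandbox alike.

const B64 =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function base64ToBytes(b64: string): Uint8Array {
  const clean = b64.replace(/=+$/, ""); // drop padding
  const bytes: number[] = [];
  let buffer = 0;
  let bits = 0;
  for (const ch of clean) {
    // Accumulate 6 bits per character; emit a byte whenever 8 are ready.
    buffer = (buffer << 6) | B64.indexOf(ch);
    bits += 6;
    if (bits >= 8) {
      bits -= 8;
      bytes.push((buffer >> bits) & 0xff);
    }
  }
  return Uint8Array.from(bytes);
}
```

The resulting bytes can then be handed straight to `figma.createImage()` and used as an image fill.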
Accomplishments that we’re proud of
We’re so proud that everyone on the team has been constantly using the tool since it was finished. In short — it’s really fun! We’ve generated over 400 images in just the last few hours, and we can’t wait to allow more designers to enjoy the creative power of AI-generated visual assets directly within Figma.
What we learned
The main lesson is that sometimes all-nighters are unavoidable and software estimates are ALWAYS wrong. We estimated we could get a working prototype finished by 6 pm Saturday night. Conjure.ai didn’t complete its first end-to-end image generation within Figma until 6 am this morning! It was a team effort and we loved the process, but we look forward to a good night of sleep tonight.
What’s next for Conjure.ai
We want to launch in the Figma Community plugin store! We’re big believers that software should be free for anyone building in public, so we look forward to taking a community-first approach as we bring Conjure.ai to a focused set of designers. Along the way, we’ll continue refining the prompt-development interface so designers can quickly and creatively produce exactly the assets they need. Nothing would please us more than seeing designers adopt Conjure.ai as part of their day-to-day workflows.
Built With
- api
- co:here
- co:paint
- google-cloud
- javascript
- python
- react
- typescript