Five Quick Workflows to Integrate AI into your Design Process
Description: Latent Diffusion Models and AI Image-Making
Author: Joshua Vermillion

EVERYONE PANIC—AI IS HERE! Ok, I’m just kidding. Putting silliness aside for just one moment, the reality is that you’ve probably heard or read experts speaking about our AI-fueled future with equal parts optimism and doom. Perhaps one day Artificial Intelligence will drive our cars and planes, become our personal assistants, help us solve a host of wickedly complex and existential global problems, and make life on earth generally easier and more leisurely. On the other hand, Artificial Intelligence might not be a benign assistant but rather a sentient tour-de-force capable of outwitting humans and automating our jobs, all while taking over the world.

That got really dark, really quick, right? It’s a good thing, then, that the current systems we typically call AI are simply Large Language Models (LLMs) and latent diffusion models. Rather than plotting the dominion of machines over humans, these systems are really good at very narrow tasks such as writing text or generating images. And given these two skills, it’s no surprise that these tools can be helpful for designers. However, as a professor of architecture and a design technologist, I’m often asked just how these widely available AI apps actually augment the human designer. The short answer is one simple word: brainstorming.

Unlike a lot of our digital design tools, AI can help in the early stages of design, when divergent, out-of-the-box thinking is most valuable. I liken using generative AI to having a creative team at your fingertips, able to generate images of your ideas almost as quickly as you can thumb them out on your smartphone. Whether you are generating a mood board, sharpening a concept statement, or even just overcoming designer’s block, mining AI’s latent space for visual ideas is speedy, convenient, and actually a lot of fun.

If you want to try these models out for yourself to see what all the fuss is about, but aren’t sure what to do, you’ve come to the right place. Here are five quick workflows to start using AI for image-making and getting the machines to work for you. One thing is for sure: a picture is worth a thousand words (at least), and that’s where we’ll start—with words!

Workflow No 1: Play a Game of MadLibs

Favored platform: Midjourney – http://midjourney.com

Midjourney is a subscription-based diffusion model with a web interface. It also runs on Discord if that’s your thing.

For interpreting words into provocative images, Midjourney is my platform of choice. But sometimes getting started is the hardest part of writing (just ask any editor I’ve worked with). Rather than staring at a blank page, paralyzed, you can gamify the task of writing with a MadLibs framework that parallels your prompt-writing. For example, you can craft a one-sentence spatial description that combines a geometric adjective (tall, angular, undulating, etc.), a material (concrete, wood, glass, etc.), and a mood (dreamy, somber, monumental, etc.). Enter this into Midjourney as a prompt and see what it generates. Next, start switching out words, just like a game of MadLibs. Soon you will have a series of very descriptive statements about your design goals, concepts, and desires, along with a series of generated images illustrating each statement.

And just like any other early design exercise (sketching, building study models, writing), this is a chance to explore your counterintuitive ideas just as quickly as your more pragmatic ones. In fact, the surprising misinterpretations or strange juxtapositions that Midjourney generates can help subvert some of your preconceptions about a project, getting you to think about your ideas from different directions and sharpening your design thinking.

Midjourney MadLibs example, simply switching the material each time:

Prompts for the 9 images below:

a complex 3d parametric wall in a modernist living room with a tall ceiling made from…

1) spiky plastic drinking straws

2) wooden sticks

3) translucent gummy bear candies (as one does)

4) blocks

5) golf balls

6) grapes

7) machined polystyrene

8) coral reef

9) light bulbs?! (I don’t get it either)
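The material-swapping loop above is easy to systematize. Here is a minimal sketch in Python that generates every combination of a few fill-in-the-blank word banks — the template and word lists are just illustrative stand-ins for your own vocabulary, and the output is plain prompt text to paste into Midjourney (there is no API call here):

```python
import itertools

# Illustrative word banks -- swap in your own vocabulary.
adjectives = ["tall", "angular", "undulating"]
materials = ["concrete", "wood", "glass"]
moods = ["dreamy", "somber", "monumental"]

# A fill-in-the-blank template in the spirit of the example prompt above.
template = "a {adj} {mat} wall in a modernist living room, {mood} lighting"

# One prompt per combination of blanks, MadLibs-style.
prompts = [
    template.format(adj=a, mat=m, mood=d)
    for a, m, d in itertools.product(adjectives, materials, moods)
]

for p in prompts:
    print(p)
```

Three words per blank already yields 27 distinct prompts, which is exactly the point: a few minutes of word-listing buys you a whole grid of variations to feed the model.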

Workflow No 2: Render your Hand-drawn Sketches with AI

Favored platform: LookX – http://lookx.ai

LookX is a subscription-based model with a web interface. Free credits to try out LookX are available when you sign up.

We just discussed using words to take generative AI for a quick spin, but these tools can also make images from images. This capability makes AI an interesting design partner when it comes to riffing off of sketches. One such platform, LookX, is geared toward design professionals in the building industry and can render your sketches with a variety of materials, at a variety of scales, and within a variety of settings. Need to render a sketch plan? No problem for LookX. Elevations? Perspectives? Interiors? Exteriors? LookX offers models fine-tuned for many different situations and also lets you create your own fine-tuned rendering models.

LookX, and other systems like it, can detect edges in your drawings and try to interpret depths, surfaces, and other spatial relationships in order to render them in seconds. One sketch combined with LookX can suddenly expand the size of your design space, even while you’re making more sketches! Good sketches generate good renderings, which in turn lead to more sketching, and on and on in an analog-AI-augmented loop. Remember—AI will never replace sketching; rather, it can work with and enhance this age-old medium.
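LookX’s internals aren’t public, but the edge detection it starts from is a classic technique. As a rough, toy illustration (the tiny hand-made image and all sizes here are invented for the demo), this pure-Python Sobel filter finds the outlines in a grayscale grid, much the way a sketch-to-render pipeline first locates the lines in your drawing:

```python
# A toy 10x10 grayscale "sketch": a dark square (value 0) on a light
# background (value 255).
W, H = 10, 10
img = [[0 if 3 <= x <= 6 and 3 <= y <= 6 else 255 for x in range(W)]
       for y in range(H)]

def sobel_magnitude(img):
    """Approximate the gradient magnitude at each interior pixel
    using the 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

edges = sobel_magnitude(img)
# The response is strong along the square's outline and zero in the
# flat background and the flat interior.
```

Real systems layer far more on top (depth estimation, surface inference, a diffusion model for the rendering itself), but the principle is the same: lines carry the design intent, so the machine looks for lines first.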

Sketch rendering example from LookX:

A simple, digital line drawing rendered with particular materials in LookX—before and after. Below, a matrix of example renderings with different materials and locations.

Workflow No 3: Blend Old Work with Midjourney

If you’re like me, then you might have old process work just lying around collecting dust, waiting to be recombined with fresher ideas. You can elevate old or stale work by feeding it into Midjourney with image prompting, style references, and my favorite—blending. Blending works exactly like it sounds: merging the composition and aesthetics of two or more images into novel images. The digital equivalent of putting chocolate and peanut butter together, blending lets you easily mix all kinds of media—analog or digital, similar or disparate, obvious or counterintuitive—to create all kinds of visual combinations. I know a lot of creatives (myself included) who really enjoy plugging in old work and giving it new life by letting the “fresh eyes” of AI blend it into new, fresh, and weird concoctions.

Image Prompt Example:

A screen capture from Rhino/Grasshopper used as an image prompt to create a cellular/modular urbanism in Midjourney.

Text prompt:

stacked modular architecture in a bustling city

Blend Example:

Blending an old graphite drawing and an old digital rendering (above) breeds new, surprising, and dynamic compositional combinations that play with motion, balance, figure/ground, and textures (below).

Workflow No 4: Animate with RunwayML

Favored platform: RunwayML – http://runwayml.com

RunwayML is a subscription-based model with a web interface and a smartphone app. Free credits for trying Runway are available when you sign up.

At the moment, everyone is talking about Sora—the new video generation model from OpenAI that produces one full minute of video and seems to be a significant breakthrough in quality and coherency. While Sora isn’t available to the public yet, you can still use generative AI to create video content. In fact, RunwayML is a publicly accessible app that has been at the forefront of AI video generation for a while. With Runway, you can generate video, four seconds at a time, from text, images, or (surprise!) a combination of text and images. Want to animate an interesting scene from a sketch, or maybe an interesting output from Midjourney? Runway lets you specify the amount and type of motion, and with motion brushes, you can paint the specific areas of an image that you want to animate.

Now you can speculate about changes in form, lighting, setting, and composition, or move the camera in “Director Mode” to generate time-based video content for storytelling. And from the examples below, you can see that it’s possible to quickly generate ideas about environments that are ever-changing and transforming—even far-fetched ideas about animate architecture.

Generative Video Examples:

Workflow No 5: Reverse Engineer an Image

Favored platform: Midjourney (although newer multimodal versions of ChatGPT can now perform this task as well).

We all know how Midjourney works by now—text the bot a word description, and the bot sends images back—but what if I told you that you can run the process in reverse? With Midjourney, you can use the /describe command to send the MJ bot an image. In return, the bot will generate four prompts attempting to describe the image. You can see the example below, and to be clear, I would never write these prompts myself. In fact, they aren’t at all how my own human brain interprets the original image. However, it can be helpful to see how the machine interprets an image. The diffusion model doesn’t think like us, and that’s exactly the point when using it to generate alternatives that bend and extend our imaginations.

Describe Example:

I used the above screenshot image from a previous workflow, but this time asking Midjourney to write four possible prompts. The /describe command yielded the following four prompts:

art of the block 3d object on a white background, in the style of repetition and accumulation, silver, fragmented architecture, trace monotone, screen format, rough clusters, louis kahn –ar 4:3

4000px silver mesh free 3d model for architects and construction professionals, in the style of cubist fragmentation of space, rachel whiteread, mark tobey, johfra bosschart, monochromatic chaos, naïve drawing, fragmented figuration –ar 4:3

c3dartistphotoshotdiamond 3d scan preview no 1, in the style of cubist fragmentation of form, detailed architectural drawings, monochromatic chaos, the bechers’ typologies, white background, piles/stacks, screen format –ar 4:3

human 3d modeling dsl1 no 4 preview, in the style of abstracted architecture, monochromatic chaos, piles/stacks, lithograph, screen format, silver, tesseract –ar 4:3

After some trial and error and a little more wordsmithing, I got some pretty interesting results from these prompts, such as the images below.


There you have it—five ways to integrate generative AI into the creative process (at least for this month; after all, things are moving fast). There are clearly things that these models are and aren’t good at. For instance, if you need control and precision, you will probably be sorely disappointed with the current AI apps. The key to making these tools work for you is understanding how to take full advantage of their generative capacity. They’re all still pretty new and constantly changing, evolving, and (hopefully) improving. My best advice is to buckle up, keep an open mind, be nimble, and prepare for a lot more change. “Learning how to learn” is a well-worn platitude in academia, but it’s also the zeitgeist in this new, ever-evolving era of AI. Staying up-to-date and relevant requires engagement and experimentation with these new technologies in order to discover new workflows and adapt the old ones.

About the Author:

Joshua Vermillion is a designer, scholar, maker, and educator. Trained as an architect, a design technologist, and a storyteller, he uses machines to augment his creative work.
