How One Creative Prompt Can Generate a Full Motion Project Starting Point

Learn how to take a single creative prompt through an LLM and image generator to produce a usable set of visual references for a motion graphics or video project.

One of the most practical things you can do when starting a motion graphics or video project is to use a text-based AI tool to generate multiple useful outputs from a single, well-considered input. Rather than treating AI as a tool that produces a finished result, you can treat it as a workflow accelerator: something that takes a rough creative direction and helps you quickly produce a project summary, a script concept, and an image prompt, all from one starting point.

  • A single LLM prompt fed with a brand guide and a project brief can generate a project summary, multiple script concepts, and visual prompts for image generation, giving you several usable starting points in minutes.
  • Image generators like Adobe Firefly, Midjourney, and Leonardo each interpret the same prompt differently, so running identical prompts across tools or adjusting settings within one tool gives you a range of visual options to compare.
  • The output at every stage is a starting point, not a final deliverable. The creative judgment about what to keep, modify, or discard stays with you throughout the process.

This lesson is a preview from our Generative AI Certificate Online. Enroll in a course for detailed lessons, live instructor support, and project-based training.


This kind of workflow connects two types of AI tools: text-based language models and image generators. Understanding how information passes between them, and where human decision-making is required at each step, is the key to making this process actually useful rather than just fast.

The workflow starts before any image is generated. It starts with setting up the LLM correctly and giving it enough context to produce something meaningful. Here is how that process works in practice.

What You Need Before You Start

Before opening any AI tool, you need two things: a brand guide and a project brief. In a real client scenario, these come from the client. In a practice or speculative context, you can create them yourself or work with provided materials. A brand guide typically includes a company history, mission statement, visual identity references, and sometimes a description of the audience or market position. A project brief describes a specific animation or video the client is looking for, along with general parameters around tone and purpose.

These two documents are what you feed into the LLM. They give the tool the context it needs to generate something relevant rather than something generic. Without this context, a language model will produce output that sounds plausible but has no real relationship to the actual project. With it, the output becomes something you can actually evaluate and use.

It is worth spending a little time reviewing both documents before you start. Knowing what is in them helps you recognize whether the AI is interpreting them correctly and lets you redirect it if it goes off track.

Setting Up Your Role and Goal in the LLM

Once you have your materials, open your LLM of choice, whether that is ChatGPT, Claude, or another tool. The free versions of most of these tools are sufficient for this kind of work, though paid versions often give you access to more capable models. The paid or pro tiers in ChatGPT, for example, include thinking models that tend to produce more detailed and nuanced responses, which can be useful when you want more depth in your project summary or script concepts.

Start your prompt by telling the tool what role you are taking on. In a motion design context, this might mean stating that you are a motion graphics designer working on a specific type of project. This framing helps the model orient its outputs toward the right professional context. Then state your goal clearly: you want a project summary, a set of script concepts, and eventually a visual prompt for image generation.
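If you ever move this step from a chat interface to an LLM API, the same role-and-goal framing can be assembled programmatically. A minimal sketch, with illustrative wording and a made-up helper name rather than any specific vendor's API:

```python
def build_kickoff_prompt(role: str, goals: list[str],
                         brand_guide: str, brief: str) -> str:
    """Assemble a single kickoff prompt: role framing first, then the
    stated goals, then the two context documents."""
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        f"You are assisting {role}.\n\n"
        f"Goals for this session:\n{goal_lines}\n\n"
        f"--- BRAND GUIDE ---\n{brand_guide}\n\n"
        f"--- PROJECT BRIEF ---\n{brief}"
    )

prompt = build_kickoff_prompt(
    role="a motion graphics designer developing style frames",
    goals=["a project summary", "three script concepts",
           "an image-generation prompt"],
    brand_guide="(paste brand guide text here)",
    brief="(paste project brief here)",
)
```

The ordering mirrors the chat workflow: role, then goals, then context documents, so the model reads its framing before the source material.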

Attach your brand guide to the prompt. If you are using a paid version of a tool, you can often upload the PDF directly. On the free version, you will need to copy and paste the text. When copying from a PDF, formatting sometimes breaks down, adding line breaks or stray characters in odd places. This is a known limitation and does not significantly affect how the LLM processes the content, but it is worth being aware of.
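If the stray line breaks from a PDF copy-paste bother you, they can be cleaned up before pasting. A small heuristic sketch, not a general PDF parser: it rejoins hyphenated words, collapses mid-paragraph line breaks, and keeps blank-line paragraph breaks.

```python
import re

def clean_pdf_paste(text: str) -> str:
    """Heuristic cleanup for text copied out of a PDF."""
    text = re.sub(r"-\n(?=\w)", "", text)          # rejoin words hyphenated across lines
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)   # single newline -> space
    text = re.sub(r"[ \t]{2,}", " ", text)         # collapse runs of spaces
    return text.strip()
```

For example, `clean_pdf_paste("bright col-\nors")` returns `"bright colors"`.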

Generating the Project Summary

With your brand guide loaded and your project brief pasted in, prompt the LLM to generate a project summary. A project summary in this context is a structured breakdown of what the project is trying to accomplish. It typically includes the creative intent, a visual direction overview, notes on tone and atmosphere, and key messages the piece should communicate.

What the LLM produces here is a distillation of the material you gave it, shaped by the role you described and the project context you provided. You should read through it carefully and evaluate whether it feels aligned with the actual brief. There will often be things you agree with and things that miss the mark. That is fine. The point at this stage is not to accept everything, but to get a structured starting point you can react to.

You do not have to prompt for follow-up clarifying questions at this step, though doing so can be useful in a longer workflow. For a faster first pass, generating the summary and then moving on to script concepts is a reasonable approach.

Generating Script Concepts to Choose From

After reviewing the project summary, ask the LLM to generate a set of script concepts. In a motion or video context, three is a typical number, since it gives you enough variety to compare without overwhelming the process. For each concept, specify the parameters you care about. In many cases, this means requesting concepts that use on-screen text and music rather than voiceover, or specifying a particular pacing, emotional tone, or structural approach.

Read through each concept and notice which one aligns best with the visual direction you want to explore. At this stage, you are looking for a concept that has a clear identity, a specific emotional quality, and a direction that feels workable for the project. You do not have to agree with everything in the concept. You are selecting a direction to develop further, not committing to every detail the tool suggested.

In practice, most creatives would spend more time here than a single pass allows. A more thorough workflow might involve running the concept text through a review process, rewriting sections, and feeding the revised text back into the LLM before moving forward. For a first draft or exploratory pass, accepting a concept that is mostly in the right direction is a reasonable choice.

Creating an Image Prompt From Your Chosen Concept

Once you have selected a script concept, the next step is turning it into a visual prompt you can use in an image generator. Ask the LLM to generate an image prompt based on your chosen concept and the brand context you already provided. Specify that you want the prompt to include information about mood, composition, camera angle, and visual style. Ask for it to be written in paragraph form rather than as a list, since image generators often handle paragraph-style prompts better than bullet-pointed ones.
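One way to picture the "paragraph form rather than a list" request is that the labeled attributes get woven into flowing prose. A sketch of that folding step, with hypothetical attribute values and function name:

```python
def to_paragraph_prompt(subject: str, mood: str, composition: str,
                        camera: str, style: str) -> str:
    """Weave mood, composition, camera angle, and visual style into one
    paragraph-style prompt rather than a bullet list."""
    return (f"{subject}, rendered in a {style} style. "
            f"Composition: {composition}, shot from a {camera}. "
            f"The overall mood is {mood}.")

prompt_text = to_paragraph_prompt(
    subject="a lone lighthouse on a basalt coast",
    mood="quiet, contemplative",
    composition="lighthouse small in frame with generous negative space",
    camera="low wide angle",
    style="painterly, muted",
)
```

In practice the LLM does this folding for you; the point is that all four attributes should survive into the final paragraph, which is what you check when you review the prompt.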

It can also be useful to ask the LLM to ask you follow-up questions before generating the final prompt. This gives you the chance to specify details that the tool might not have guessed correctly, such as whether a key visual element should fill the frame or sit within a larger environment, whether you want stylized or realistic textures, and whether there should be negative space left for future animation. These details can meaningfully change the images that get generated.

After answering those questions, let the LLM produce the final image prompt. Review it for clarity and accuracy before taking it into your image generator.

Running Your Prompt Through an Image Generator

With your prompt ready, open your image generator of choice. Adobe Firefly, Midjourney, and Leonardo AI each have different strengths, and the same prompt will produce meaningfully different results across them. Firefly is tightly integrated with Adobe tools and is trained on licensed content, which can matter for commercial work. Midjourney tends to produce images with a distinctive aesthetic that suits darker, more stylized, or more surreal visual directions. Leonardo offers a range of model options that can suit different styles and use cases.

Within any of these tools, you will have settings to adjust. In Firefly, for example, you can set a content type, adjust visual intensity, choose lighting and tone options, and select from available style effects. In Midjourney, you can control stylization, variety, and model version. These settings interact with your prompt, so the same text can produce quite different results depending on how the tool is configured.

For a first generation, it is reasonable to use default or moderate settings and see what the tool produces. The goal at this stage is not to get a perfect image, but to see what direction the prompt naturally moves toward and whether that direction is interesting.

Evaluating and Iterating on the Generated Images

Most image generators give you multiple outputs per generation, typically three or four. Review each one carefully and note what is working and what is not. Think about whether the overall visual language feels right for the project, whether the composition is usable, and whether the stylistic choices align with the brand. You do not have to find something perfect here. You are looking for images that give you a useful starting point for your style frames.

If the first set of images is in the right direction but not quite there, run the prompt again with adjusted settings. In Midjourney, for example, changing the stylization or variety settings can significantly shift the look of the output. Trying a different content type option, switching lighting, or adjusting the level of detail can all move the results in a different direction without changing the underlying prompt.

  • If the images are too literal or generic, revisit the prompt and add more specific language around atmosphere, visual texture, or compositional emphasis.
  • If the proportions are wrong for your intended format, most tools allow you to set aspect ratios. For motion graphics and video, a 16:9 landscape ratio is the standard starting point.
  • If the generated images contain elements that do not make sense visually, such as structural inconsistencies or misplaced details, note those for correction rather than treating them as reasons to start over entirely.

It is worth generating several rounds across one or more of your script concepts. If you identified three script concepts and chose one to develop, you can also run the others through the image generation step to produce a broader range of visual options for comparison or client presentation.

From Generated Images to Style Frame References

Once you have a set of generated images you find interesting, they become your visual reference material for style frames. Style frames are not final deliverables themselves. They are high-quality representations of the visual direction, used to establish the overall look and feel of a project and to get client approval before production begins.

Generated images will rarely be ready to use as style frames without further work. Structural inconsistencies, odd proportions, or elements that do not make logical sense will usually need to be corrected. For this, you would bring the images into a program like Photoshop or another image editor and fix what does not work while keeping the overall visual language intact. The generated image gives you the style, the mood, and the general compositional feel. The editing step makes it technically coherent and production-ready.

The value of this workflow is not that it replaces that editing step. The value is that it gives you a starting point with a specific visual identity much faster than building a style frame entirely from scratch. The creative judgment about which images are interesting, which direction is right for the brand, and what needs to be corrected or adjusted is all yours. The tools accelerate the generation side. The design thinking stays with the designer.


Jerron Smith

Jerron has more than 25 years of experience working with graphics and video and expert-level certifications in Adobe After Effects, Premiere Pro, Photoshop, and Illustrator, along with extensive knowledge of other animation programs like Cinema 4D, Adobe Animate, and 3DS Max. He has authored multiple books and video training series on computer graphics software such as After Effects, Premiere Pro, Photoshop, Illustrator, and Flash (back when it was a thing). He has taught at the college level for over 20 years at schools such as NYCCT (New York City College of Technology), NYIT (The New York Institute of Technology), and FIT (The Fashion Institute of Technology).
