Integrating Generative AI into Workflows
Write a prompt. Get a response. Using Generative AI (GenAI) tools may seem straightforward, but this simple loop often fails in high-stakes or discipline-specific work. In fact, the ease of use of chat-based generative tools obscures the skills necessary to use them effectively, and the degree to which disciplinary knowledge and good judgment remain essential.
For complex tasks, GenAI works best as one component of a larger process that relies on human expertise. The diagram below illustrates a generalized version of this workflow.
Crafting Effective Prompts
Before writing a prompt, consider three elements:
- Tools: What tool is best for this use case? The frontier large language models (ChatGPT, Claude, and Gemini) are similarly competent at general tasks, but they differ subtly in how they respond, and other tools exist for more specific needs.
- Context: What background, guidelines, or examples will ensure a relevant response?
- Instructions: What instructions will shape the response appropriately? Be precise about expectations, and when necessary, constraints.
Processing Responses
Once the tool responds, the work is not done. A professional workflow requires iteration, as initial outputs are rarely accurate, complete, or ready for use:
- Feedback/Reprompting: Is the response sufficient, or does it need refinement? Was the prompt designed as the first step in a multi-step process?
- Validation: Outputs should be verified for accuracy and alignment with their intended use. Even as newer models hallucinate less and better prompting mitigates some risks, validation remains necessary as with any secondary source.
- Revision: Results used in public-facing contexts often need human revision to move beyond generic AI-generated text.
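The feedback loop above can be sketched as a small function. The `ask_model` and `validate` callables are stand-ins for a real model call and a discipline-specific check; every name in this sketch is an assumption for illustration.

```python
from typing import Callable

def refine(prompt: str,
           ask_model: Callable[[str], str],
           validate: Callable[[str], bool],
           max_rounds: int = 3) -> str:
    """Reprompt until the output passes validation or the rounds run out.
    `ask_model` and `validate` are placeholders supplied by the caller."""
    response = ask_model(prompt)
    for _ in range(max_rounds - 1):
        if validate(response):
            break
        # Feed the failed draft back as context for the next attempt.
        response = ask_model(prompt + "\n\nRevise: the previous draft failed validation:\n" + response)
    return response

# Stubbed example: a fake "model" that improves after one round of feedback.
drafts = iter(["a vague draft", "a cited, verified draft"])
result = refine("Summarize the policy with citations.",
                ask_model=lambda p: next(drafts),
                validate=lambda r: "cited" in r)
```

The point of the sketch is the structure, not the stub: validation is an explicit step the human designs, not something the model performs on itself.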
While large language models have continued to improve, some of the biggest gains (in software engineering and other professional fields, for example) are being realized by people who develop structured workflows to direct their use.
Critical Thinking Still Matters, Maybe More Than Ever
This framework shows that GenAI tools are a single element in a larger process. From an education standpoint, this is significant: these tools do not displace the need for critical thinking or disciplinary knowledge so much as shift which skills matter and how expertise is applied.
Consider the informed judgment required at each stage:
- What prompts are worth asking? What problems are worth solving?
- What instructions will produce the best output?
- Which tools work best for specific tasks, and how do we evaluate them?
- How do we assess outputs and provide effective feedback for iteration?
- What discipline-specific knowledge ensures that the results have genuine value, not just the appearance of it?
- How do we revise the final outcomes for our specific audience?
These questions require both deep disciplinary knowledge and sound judgment. In fact, while much of the discussion in education has centered on students using GenAI to avoid thinking, many future careers may demand higher-level skills, since individuals will need to distinguish themselves as these tools become increasingly effective at automating current tasks.
Understanding GenAI as part of a thoughtful workflow can improve our own work while providing a framework for teaching students transferable critical and disciplinary skills they will need in an AI-augmented workplace.
To learn more about this topic, register for the Prompt Engineering and Assignment Design workshop this spring.
Note on AI use in this article: GenAI was used in the final revision stages of this article to help clarify language and shorten the overall length.