Artificial intelligence (AI) is here, and it is already embedded in many of the digital tools teachers use for lesson planning, grading, and assessment. These platforms are doing much of the heavy lifting in designing instructional materials and assessments. If you already have a favorite tool that works for your context, you may not feel the need to build AI prompts from scratch. However, for educators who want more control over the design process, AI chatbots can be a game-changer, as Dr. Katie Novak and I explored in our book, Elevating Educational Design with AI.
Unlike fixed tools that generate a single output, chatbots allow us to shape the process. We can clarify our goals, share detailed information about our teaching context and unique population of students, and give the feedback needed to improve any output. This back-and-forth exchange lets us refine AI-generated materials until they align with our instructional goals, standards, and student needs. That level of control requires careful prompting.
Why REFINE Matters
AI has incredible potential to elevate educational design, reducing the cognitive load on teachers. However, AI should not replace the teacher’s professional judgment. The teacher’s role at the start, engineering strong prompts, sets the foundation for a meaningful and relevant output. And the teacher’s role at the end, evaluating the AI’s response, ensures the content created by AI is accurate, bias-free, and responsive to students’ needs.
Think of AI as a design thought partner. It can generate ideas, draft assessments, provide construct-specific options, and suggest scaffolds, but it cannot understand your students the way you do. Your expertise, empathy, and contextual knowledge are irreplaceable.
I developed the REFINE acronym for prompt engineering to provide educators with a structured approach to harnessing AI’s creative potential while keeping them firmly in the driver’s seat. Whether you are designing informal checks for understanding, developing performance tasks, crafting lessons and activities, or generating rubrics and scaffolds, strong prompts lead to stronger outcomes.
REFINE Acronym
Without detailed instructions, AI may produce generic results. That is why I use the REFINE acronym when coaching teachers on engineering prompts that are more likely to provide valuable and usable outputs. Using an acronym to guide prompting helps teachers write clear, detailed requests that maximize the usefulness of AI in designing lessons, activities, assessments, and other learning experiences.
R – Role: Assign the AI a Specific Role
Assigning a role, like curriculum designer, master teacher, or reading specialist, helps the AI adopt domain-specific knowledge and conventions, resulting in more contextually accurate and specific outputs. Research indicates that role prompts enhance precision and influence tone (Chen et al., 2025). It’s a simple move that yields more detailed and useful results.
Ask AI to: Act as a curriculum designer with expertise in formative assessment for upper elementary students.
E – Expectation: Clearly State The Task or Question
The stronger and more specific the input, the better the output. If we are vague or unclear in our prompting, the AI will respond in kind. Clear, precise instructions reduce ambiguity and misinterpretation, which are common when prompts are too general. Research shows that vague prompts often produce overly broad or generic results.
Ask AI to: Create a short pre-assessment that helps me identify which of my fourth-grade students understand the concept of fractions as part of a whole.
F – Frame: Provide The Relevant Details and Context
Context is critical. Without it, the AI is unlikely to produce a result we can use. We need to provide details such as grade level, subject-specific standards, student needs, language proficiencies, time constraints, and other relevant information to ensure we receive an answer or output that will work for us and our students. When we provide the relevant details and context, AI is more successful at generating tasks, activities, and assessments that will meet our specific needs. This step helps us move from “good idea” to “ready-to-use resource.”
Ask AI to: Align the pre-assessment with the fourth-grade Common Core math standard “4.NF.3 Students understand fractions as a sum of unit fractions, leading to skills in adding and subtracting fractions with like denominators. They can decompose fractions in multiple ways and add/subtract mixed numbers with like denominators using visual models and solving word problems.”
I – Include: Specify What You Want AI to Include in the Response
AI chatbots can create a wide range of items, like bulleted lists, templates, rubrics, visual instructions, and content tailored to specific learning goals. That’s why it’s essential to tell the AI exactly what elements we want in the output. If we ask for a standards-aligned, mastery-based rubric, we are much more likely to get it. Being explicit also saves time because we don’t have to go back and forth through multiple rounds of feedback. The clearer we are upfront, the closer AI gets to what we need the first time.
Ask AI to: Include ten questions of increasing rigor that mix multiple-choice, short answer, and at least one application problem. Provide a simple asset-based rubric aligned to standard 4.NF.3 that I can use to quickly assess student understanding.
N – Nuance: Define the Audience, Tone, and Style
The same content looks very different depending on the audience. Directions for students, for example, need to be clear, simple, and age-appropriate. By contrast, guidance for families or lessons shared with colleagues might use more complex vocabulary or technical language. Nuance helps us match the tone and style of the output to the people who will engage with it. Without this step, we risk ending up with content that is accurate but not accessible or engaging for the intended audience.
Ask AI to: Write the assessment directions in student-friendly language that a fourth-grade student can understand.
E – Evaluate: Analyze the AI Output for Bias, Accuracy, and Relevance; Provide Feedback
This step occurs once we have an output, and it prioritizes the teacher’s role at the end of the process: using subject-area expertise and a unique understanding of their students to evaluate what the AI created. Even the strongest prompts don’t guarantee that AI will get it right. Models can make mistakes, oversimplify, or introduce bias. We must review the output, verify its accuracy, and ensure it is suitable for our learners. If revisions are needed, we can provide specific feedback to guide the AI in refining or improving the results.
Give AI Feedback: These word problems are great, but the language is too complex for some of my multilingual learners. Can you simplify the vocabulary while retaining the math concepts?
Putting It All Together
Here is what the full request would look like when each step of the REFINE acronym is combined to create a single, comprehensive AI prompt.
Prompt Example: Act as a curriculum designer with expertise in formative assessment for upper elementary students. Create a short pre-assessment that helps me identify which of my fourth-grade students understand the concept of fractions as part of a whole. Align the pre-assessment with the fourth-grade Common Core math standard “4.NF.3 Students understand fractions as a sum of unit fractions, leading to skills in adding and subtracting fractions with like denominators. They can decompose fractions in multiple ways and add/subtract mixed numbers with like denominators using visual models and solving word problems.” Include ten questions of increasing rigor that mix multiple-choice, short answer, and at least one application problem. Provide a simple asset-based rubric aligned to standard 4.NF.3 that I can use to quickly assess student understanding. Write the assessment directions in student-friendly language that a fourth-grade student can understand.
You can check out the output here.
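For educators who are comfortable with a little scripting and want to reuse this structure across units, here is a minimal sketch in Python. It is purely illustrative and not part of the REFINE framework or our book: the function name and example values are my own placeholders, and it simply assembles the first five components into a single prompt string you can paste into your preferred chatbot. The final E, Evaluate, remains a human step.

```python
# Illustrative sketch only: assemble the R, E, F, I, and N components of a
# REFINE-style prompt into one string. The sixth step, Evaluate, is
# intentionally left to the teacher after the chatbot responds.

def build_refine_prompt(role, expectation, frame, include, nuance):
    """Combine the first five REFINE components into a single prompt."""
    parts = [
        f"Act as {role}.",  # R - Role
        expectation,        # E - Expectation
        frame,              # F - Frame
        include,            # I - Include
        nuance,             # N - Nuance
    ]
    return " ".join(part.strip() for part in parts)


prompt = build_refine_prompt(
    role="a curriculum designer with expertise in formative assessment for upper elementary students",
    expectation=("Create a short pre-assessment that helps me identify which of my fourth-grade "
                 "students understand the concept of fractions as part of a whole."),
    frame="Align the pre-assessment with the fourth-grade Common Core math standard 4.NF.3.",
    include=("Include ten questions of increasing rigor that mix multiple-choice, short answer, and "
             "at least one application problem, plus a simple asset-based rubric aligned to 4.NF.3."),
    nuance="Write the assessment directions in student-friendly language a fourth-grade student can understand.",
)

# Paste the printed prompt into the chatbot of your choice, then evaluate the output yourself.
print(prompt)
```

The point of a template like this is not automation for its own sake; it is a reminder that each REFINE component answers a different question, and leaving one blank usually shows up as a weaker draft.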
Taking the time to REFINE your prompt may feel like extra work at the start, but it saves time in the long run. A vague prompt often leads to generic outputs that require multiple rounds of revision and feedback. By investing a bit more time up front, clarifying the AI’s role, setting clear expectations, and providing the necessary context and nuanced details, you set the AI up to generate stronger drafts the first time. This reduces the back-and-forth and gets you to a usable result much faster.
Wrap Up
AI can make designing dynamic, differentiated learning experiences and assessments more manageable, saving time and inspiring educators with new ideas. By assigning AI a role, setting clear expectations, framing the context, specifying inclusions, defining nuances, and critically evaluating the output, teachers can turn an AI chatbot into a powerful instructional design partner.
So, whether you are building your next unit or designing resources for diverse groups of learners, I hope the REFINE acronym provides a roadmap to get the most out of AI while keeping your professional expertise and knowledge of your students at the center of the process.
Want to learn more about designing with AI?
Check out my book, Elevating Educational Design with AI. I also partner with schools to design and facilitate professional learning that empowers educators to use AI thoughtfully to create more inclusive, responsive, and effective learning experiences.