
STAR case study examples for instructional designers
2026-03-22 · 8 min read
Situation, task, action, result—mapped to portfolio sections hiring teams actually read.
STAR maps to your portfolio
TrainingOS project pages align to challenge, process, solution, and outcomes so reviewers get a consistent story arc across projects. That alignment is not cosmetic—when every case study uses the same skeleton, hiring managers compare projects fairly instead of guessing what you left out. Map Challenge to the Situation/Task tension (why now, what had to change), Process to how you worked with SMEs and pilots, Solution to what learners actually experienced, and Outcomes to Result plus any measurement plan. If you export the same story to PDF or slides, keep that order—proof before adjectives.
Situation maps to business context; Task clarifies your mandate; Action covers design and collaboration; Result ties to measurable or qualitative wins. If you are used to writing “objectives” first internally, translate outward: Situation is why the work existed; Task is what “done” meant to leadership. If your internal docs listed only “learning objectives,” add one line translating them into business language (what risk goes down, what revenue unlocks, what audit exposure closes) so external readers see why the course mattered beyond L&D.
Situation should name the pressure: compliance deadline, merger, new product launch, safety incident rate, or customer churn tied to training gaps. One sentence of business context beats three paragraphs of industry background. Avoid jargon soup: “digital transformation” means nothing without a concrete pain point. If you fear revealing the client, keep the pressure concrete anyway: “EU regulatory deadline” or “customer churn in enterprise accounts” reads as real without naming the account. Tie Situation to a date or window when you can—reviewers use time pressure to gauge whether your solution was realistic.
Task is where you define success criteria that existed before you opened Storyline: “Reduce time-to-first-call for new reps,” “Pass the audit with zero critical findings,” “Cut escalations to Tier 2.” If you cannot state the task, you are still describing activity, not impact. If priorities conflicted—Legal wanted length, Ops wanted speed—name the trade-off you were hired to balance. If the sponsor changed mid-project, note the pivot: “original ask was awareness; new VP demanded certification”—that explains scope swings that otherwise look like poor planning. Write Task as an outcome sentence, not a job title.
Action is not a tool dump. Walk through decisions: why scenario-based vs. click-next, how you structured practice, how you handled SMEs who disagreed, what you cut when the timeline slipped. Tools appear as evidence, not decoration. Call out review mechanics: how many pilot iterations, what changed after learner confusion, how you validated accessibility fixes. If you built job aids, explain where learners access them—LMS, PDF, Slack, printed card—because “we also made a PDF” is not helpful without the delivery context. If you inherited bad source files, say how you refactored interactions instead of defending the mess.
Result anchors to what changed for the business or the learner: fewer errors, faster proficiency, higher course completion, better audit scores, or qualitative signals from managers. If the result is “launched on time,” pair it with why that mattered (audit window, revenue kickoff). If results are pending, say “pilot passed; full rollout scheduled Q3” instead of silent optimism.
Metrics that matter
Pair qualitative narrative with structured metrics: time saved, error reduction, course completions, or business KPIs your stakeholders recognized. One number plus one quote often beats five numbers with no story—context turns data into judgment. When you only have qualitative signals, label them honestly: “three directors reported fewer policy exceptions on live calls” still counts if you explain how you sampled and avoided cherry-picking. If you ran a focus group or listening tour, say what you changed in the build because of it—otherwise it reads as theater.
If metrics are sensitive, describe ranges or directional improvement without naming confidential baselines. “Double-digit reduction in repeat safety incidents” can be enough if legal blocks exact figures.
L&D-friendly metrics include pre/post knowledge checks, on-the-job observation rubrics, LMS completion and pass rates, time-on-task in the module, help-desk ticket volume, and QA defect counts caught before release. Say how you collected them: LMS report export, sampled observations, randomized QA pulls. If your client tracked the wrong metric at first, say how you aligned reporting fields so completions meant what leadership thought they meant.
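If you want to show exactly how you computed a rate, a few lines of script make the method auditable. Here is a minimal sketch that summarizes a completion export; the file name, column layout, and 80% pass mark are assumptions for illustration, not a real LMS schema:

```typescript
// Minimal sketch: summarize a hypothetical LMS completion export.
// Assumed CSV layout: learner_id,status,score
// where status is "completed"/"incomplete" and score is 0-100.
import { readFileSync } from "node:fs";

const rows = readFileSync("lms_export.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // skip the header row
  .map((line) => {
    const [learnerId, status, score] = line.split(",");
    return { learnerId, status, score: Number(score) };
  });

const total = rows.length;
const completed = rows.filter((r) => r.status === "completed").length;
const passed = rows.filter((r) => r.score >= 80).length; // assumed pass mark

console.log(`Completion rate: ${((completed / total) * 100).toFixed(1)}%`);
console.log(`Pass rate: ${((passed / total) * 100).toFixed(1)}%`);
```

Quoting the two rates alongside how you pulled them (“full LMS export, all cohorts, no sampling”) closes the loop between metric and method.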
Business-friendly metrics include ramp time, error rate, rework hours, customer satisfaction (CSAT/NPS) where training was a lever, and compliance audit findings. Tie the metric to the story: “We moved completion from 61% to 88% after shortening the path and adding mobile-friendly bursts.” When the gain might not be all yours (correlation is not causation), say what else changed organizationally at the same time so you sound credible, not salesy. If training was one lever among many, say that too: credit sharing reads as mature, not weak.
When data is messy or incomplete, say what you measured and what blocked deeper analysis (“client could not share LMS exports; we used pilot surveys instead”). That reads as honest judgment, not weakness. Offer the next measurement you would run if you stayed another quarter—interviewers like forward thinking. If leadership changed KPIs midstream, say that too—otherwise your metrics look inconsistent when they were not.
Avoid vanity metrics: slide counts, seat time alone, or smile sheets without linkage to behavior. If smile sheets are all you have, pair them with one behavioral proxy you tracked. If leadership demanded completion rates but the real goal was behavior, explain how you designed assessments or observations to connect the two, even when the LMS could only report completions.
Anti-patterns
Avoid giant walls of process with no outcome. Avoid outcome claims with no trace to your design choices. The STAR frame keeps both sides honest. If your Result paragraph could apply to any vendor, rewrite until it is specific to your design. Common miss: you describe a workshop but never say what changed in the course after it—always connect facilitation to a concrete edit in the build.
Do not list every SME meeting. Summarize governance in one or two lines: “Weekly legal review; two pilot cohorts; sign-off from Risk.” Interviewers will ask for detail if they care. Long rosters of names look like padding unless a role was unusual—medical director sign-off, union steward review.
Avoid superlatives you cannot support: “best-in-class,” “game-changing,” “highly engaging.” Replace them with observable design choices: branching scenarios, spaced practice, job aids embedded at point-of-need. Let reviewers conclude quality from evidence.
Do not hide failures. A short “what we tried first” paragraph builds credibility: “Initial click-next version failed pilot; we rebuilt with scenarios and cut seat time 35%.” That signals iteration skill. Blame external factors sparingly; own the fix you controlled. If legal blocked an interaction, say what you shipped instead and how you validated it with learners.
Do not confuse activity for Action: “hosted six workshops” is not Action unless you say what changed because of them—prioritized objectives, cut scope, rewrote assessments. If facilitation was the deliverable, tie it to artifacts: “workshops produced a prioritized error list that drove scenario rewrites in module 3.”
Keep tense consistent: Situation/Task past, Action past or present for how you work, Result past when shipped. Mixed tenses make skimmers think you copied text from different documents. If you still collaborate with the client, present tense for ongoing measurement is fine—just flag what is live versus projected.
Example beats
“Cut onboarding from 6 weeks to 4 through role-specific paths and embedded practice” beats “created onboarding.” “Partnered with Legal on policy accuracy across 14 states” beats “worked with SMEs.” Specificity is kindness: it saves the reader work. Rewrite weak lines by asking: what did the learner do differently on Monday, who signed off, and what metric moved—or what behavior changed if metrics were blocked?
Weak: “Built compliance training.” Strong: “Converted 90-minute course into 20-minute role-specific paths; pilot groups cut policy-related errors 27% in 30 days.” The second sentence ties design change to a time-bound outcome.
Weak: “Used Articulate Storyline.” Strong: “Built scenario banks with remediation tied to each wrong answer; LMS completion tracked separately from quiz success to match client KPIs.” Tools support behavior; they are not the behavior. Name versions when it matters: Storyline 360 update cycles can change default player and LMS reporting behavior.
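That separate-tracking detail has real plumbing behind it. In SCORM 2004, completion and success are distinct data model elements, which is what lets an LMS report them independently. A minimal sketch of the runtime calls involved (authoring tools normally emit these for you; the 80% mastery threshold here is an assumption):

```typescript
// Sketch of SCORM 2004 runtime calls that report completion
// separately from quiz success. Real courseware discovers the API
// object by searching parent frames; we declare it here for brevity.
interface Scorm2004Api {
  Initialize(param: ""): string;
  SetValue(element: string, value: string): string;
  Commit(param: ""): string;
  Terminate(param: ""): string;
}

declare const API_1484_11: Scorm2004Api; // provided by the LMS window

function reportOutcome(viewedAllScreens: boolean, scaledScore: number): void {
  API_1484_11.Initialize("");
  // Did the learner finish the content?
  API_1484_11.SetValue(
    "cmi.completion_status",
    viewedAllScreens ? "completed" : "incomplete",
  );
  // Did they pass the assessment? Tracked on its own element.
  API_1484_11.SetValue(
    "cmi.success_status",
    scaledScore >= 0.8 ? "passed" : "failed", // assumed mastery threshold
  );
  API_1484_11.SetValue("cmi.score.scaled", String(scaledScore)); // -1 to 1
  API_1484_11.Commit("");
  API_1484_11.Terminate("");
}
```

Even if the client KPI was completion only, capturing success separately means a later change in reporting costs nothing, and saying so in your case study shows you understood the tracking model, not just the authoring tool.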
Weak: “Collaborated with stakeholders.” Strong: “Facilitated task analysis with Ops; converted 14 procedures into decision trees; cut SME review cycles from four weeks to two with a tracked comment workflow.” The second shows orchestration, not attendance. If you are early-career and light on metrics, lean on behavior change language tied to observations: “new hires stopped escalating password resets after week two” is still a Result if you can explain how you verified it.
Add micro-STARs inside Action when useful: “Situation: managers skipped coaching; Task: embed prompts in workflow; Action: job aids + Slack nudges; Result: coaching completion up 41%.” Nested beats keep long projects readable.
When you cannot share numbers, use before/after behavior: “Learners stopped asking Tier 1 to interpret policy exceptions” beats “improved learner confidence.” Observable behavior beats invented confidence. Practice swapping verbs: replace “leveraged” with the actual task—mapped, prototyped, piloted, cut, rewrote—and your STAR examples will sound human. Read your STAR aloud: if you run out of breath before Result, Situation is probably too long—tighten until you finish the arc in under ninety seconds.
Build your portfolio on TrainingOS
Host SCORM, video, and STAR case studies on one profile URL.
Get started