Dr. Tennyson Johnson

Outcomes measurement that doesn't lie: a framework for CTE programs

Most CTE programs track too many things badly. The framework here tracks fewer things honestly. Five metrics that survive scrutiny, with a specific method for each one.

Most CTE programs measure too many things badly.

If you've sat through a program review, you've seen the deck — completion rates, certification pass rates, satisfaction scores, attendance, retention, dual-credit credits earned, employer testimonials, advisory board engagement, and a dozen other numbers. Each one looks impressive. Each one is doing some work for the people who chose it. Together they tell you almost nothing about whether the program is producing the outcomes that matter.

This post proposes a different framework. Five metrics. Each one specific. Each one designed to resist the gaming that most CTE metrics invite. Together they answer the question that actually matters: is this program changing the trajectory of the students who enter it, in ways that justify the time and money?

I want to flag the underlying argument up front. I'm not proposing more measurement. I'm proposing less, done better. The reason CTE programs report so many metrics is that no single metric is defensible enough on its own to carry the case. The fix isn't more numbers. It's better numbers, fewer of them, with a methodology that holds up to scrutiny.

The five metrics

These are the metrics that tell you whether your program is doing its job. None of them are easy to measure. All of them are worth measuring well.

Metric 1: Sustained employment in field, 24 months post-completion

The single most honest measure of a CTE program is whether its graduates are still working in the field two years after they finish.

Not employed. Working in the field. The distinction matters.

A graduate who completes an IT program and works at a coffee shop is employed. They're not in the field. The program didn't change their workforce trajectory. Counting them as a successful outcome misrepresents what the program is doing.

A graduate who completes a healthcare program and is working in healthcare two years later is the outcome the program exists to produce. That's the number to track.

How to measure honestly:

  • Define "in field" specifically — link to NAICS codes, SOC codes, or specific job titles before the cohort enters the program
  • Track at 24 months post-completion, not 6 or 12 (early employment numbers are inflated by graduates taking transitional jobs)
  • Use third-party data (state UI wage records, NSC data) where available, supplemented by alumni outreach
  • Report the response rate alongside the result — a 90% in-field rate from a 30% response rate is not a 90% in-field rate

What kills this metric: counting any employment as success, measuring at graduation only, relying on self-reported survey data without verification.
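
If you want to script the calculation, here's a minimal sketch, assuming you've already matched each completer to a SOC code from UI wage records or verified alumni outreach. The field names and the SOC prefixes are hypothetical placeholders; define your own in-field list before the cohort enters.

```python
# Sketch: in-field rate at 24 months, reported alongside the response rate.
# Field names (soc_code, responded) are hypothetical -- adapt them to whatever
# your alumni-outreach or UI wage-record extract actually uses.

IN_FIELD_SOC_PREFIXES = ("15-12",)  # e.g. SOC 15-12xx (computer occupations), declared before the cohort enters

cohort = [
    # one record per completer, 24 months after completion
    {"student_id": 1, "responded": True,  "soc_code": "15-1232"},  # IT support -> in field
    {"student_id": 2, "responded": True,  "soc_code": "35-3023"},  # food service -> employed, not in field
    {"student_id": 3, "responded": False, "soc_code": None},       # no verified data
    {"student_id": 4, "responded": True,  "soc_code": "15-1244"},  # network admin -> in field
]

def in_field(record):
    """In field = verified employment whose SOC code matches the pre-declared list."""
    code = record.get("soc_code") or ""
    return any(code.startswith(prefix) for prefix in IN_FIELD_SOC_PREFIXES)

respondents = [r for r in cohort if r["responded"]]
response_rate = len(respondents) / len(cohort)
in_field_rate = sum(in_field(r) for r in respondents) / len(respondents)

# Always report both numbers together -- the in-field rate is only meaningful
# relative to how much of the cohort you actually verified.
print(f"In-field at 24 months: {in_field_rate:.0%} "
      f"(response rate {response_rate:.0%}, n={len(cohort)})")
```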

Metric 2: Wage premium over comparable non-completers

Programs claim they prepare students for higher-paying careers. The honest test is whether graduates earn more than students who started but didn't complete the program.

This isn't a simple comparison to the regional median wage. It's a comparison within the program's own student population: graduates vs. people who entered the program and dropped out. The reason to compare within the population is that program participants are not a random sample of the labor force. They self-selected. Comparing them to general population averages overstates program impact.

How to measure honestly:

  • Track wages of graduates and non-completers from the same enrollment cohort
  • Use state UI wage records when available — they're more accurate than self-report
  • Report the wage difference at 24 months post-graduation, controlling for prior education
  • Be explicit about what "wage premium" means — total wages, hourly rate, or annualized

What kills this metric: comparing graduates to regional median wages instead of to non-completers, ignoring confounding factors like prior education, reporting only at graduation when wages haven't stabilized.
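
A minimal sketch of the within-cohort comparison, assuming you have wage data for both graduates and non-completers from the same enrollment cohort. The records and the crude prior-education stratification are illustrative; a real analysis would pull from state UI wage records and use a more careful adjustment.

```python
# Sketch: wage premium at 24 months, completers vs. non-completers from the
# same enrollment cohort. Field names (completed, prior_degree, annual_wage)
# are hypothetical placeholders.
from statistics import median

enrollment_cohort = [
    {"completed": True,  "prior_degree": False, "annual_wage": 46_000},
    {"completed": True,  "prior_degree": True,  "annual_wage": 58_000},
    {"completed": False, "prior_degree": False, "annual_wage": 38_000},
    {"completed": False, "prior_degree": True,  "annual_wage": 52_000},
    {"completed": True,  "prior_degree": False, "annual_wage": 44_000},
    {"completed": False, "prior_degree": False, "annual_wage": 36_000},
]

def median_wage(records):
    return median(r["annual_wage"] for r in records)

# Crude control for prior education: compare within strata, not across them.
for prior in (False, True):
    grads = [r for r in enrollment_cohort if r["completed"] and r["prior_degree"] == prior]
    non   = [r for r in enrollment_cohort if not r["completed"] and r["prior_degree"] == prior]
    if grads and non:
        premium = median_wage(grads) - median_wage(non)
        print(f"prior degree={prior}: wage premium ${premium:,.0f} "
              f"(annualized wages, 24 months post-graduation)")
```

Note that the sketch is explicit about what "wage" means (annualized) and reports the premium within each prior-education group rather than one blended number.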

Metric 3: Credential portability — does the credential travel?

A credential that only matters in your immediate region, or only to employers in your advisory network, isn't really a workforce credential. It's a local certification of program completion.

The portability test is whether the credential gets your graduates hired in places they didn't train, by employers who don't know your program.

How to measure honestly:

  • Track where graduates work, specifically by employer name and location
  • Identify graduates who got hired by employers with no direct relationship to your program
  • Calculate the percentage of graduates whose credentials secured employment with arms-length employers
  • Compare to graduates whose credentials only got them hired through advisory board connections

A credential that produces employment only through advisory board connections is functioning as an introduction service. That's still useful. But it's not a portable credential, and counting it as one in reporting overstates the credential's value.

What kills this metric: counting any employment as evidence of credential value, ignoring the difference between "got hired" and "got hired by an employer who didn't know the program."
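
Here's a sketch of the arms-length calculation, assuming you maintain a list of employers with a direct relationship to the program (advisory board, clinical sites, placement agreements). All employer names and records below are made up.

```python
# Sketch: share of employed graduates hired by "arms-length" employers --
# employers with no advisory-board or placement relationship to the program.
# The advisory_employers set and the placements list are hypothetical.

advisory_employers = {"Regional Medical Center", "Acme Manufacturing"}

placements = [
    {"graduate_id": 1, "employer": "Regional Medical Center", "location": "Springfield"},
    {"graduate_id": 2, "employer": "Mercy Hospital West",     "location": "Out of region"},
    {"graduate_id": 3, "employer": "Acme Manufacturing",      "location": "Springfield"},
    {"graduate_id": 4, "employer": "Northside Clinic",        "location": "Out of region"},
]

arms_length = [p for p in placements if p["employer"] not in advisory_employers]
portability_rate = len(arms_length) / len(placements)

print(f"Hired by arms-length employers: {portability_rate:.0%} of employed graduates")
print(f"Hired through advisory connections: {1 - portability_rate:.0%}")
```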

Metric 4: Continued credential progression

Strong programs produce graduates who keep learning. Weak programs produce graduates who stop.

Specifically: in fields where credential stacking is meaningful (IT, healthcare, skilled trades), the graduates who go on to earn additional credentials in the next three years are evidence that the original program built a foundation. Graduates who don't progress may have completed the program without internalizing the underlying skills.

How to measure honestly:

  • Track credential attainment for three years post-completion
  • Distinguish between credentials in the same field (progression) and credentials in unrelated fields (career change)
  • Report progression rates against industry benchmarks where available

This metric is partial. Some students take CTE programs without intending to progress further, and that's legitimate. But programs that produce zero credential progression are likely teaching to the certification rather than to the underlying capability.

What kills this metric: treating any further credentialing as progression, ignoring career changes, comparing across fields with very different progression norms.
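
A sketch of the progression calculation, assuming you can observe later credential attainment (through NSC data, licensing boards, or alumni outreach). The field names and the three-year window logic are illustrative, not a standard definition.

```python
# Sketch: credential progression within three years of completion, separating
# same-field stacking from unrelated-field credentials (career change).

PROGRAM_FIELD = "healthcare"  # hypothetical field label for the original program

later_credentials = [
    # one record per graduate; None means no further credential observed
    {"graduate_id": 1, "credential": "RN license",      "field": "healthcare",     "years_after": 2},
    {"graduate_id": 2, "credential": "CDL Class A",     "field": "transportation", "years_after": 1},
    {"graduate_id": 3, "credential": None,              "field": None,             "years_after": None},
    {"graduate_id": 4, "credential": "Phlebotomy cert", "field": "healthcare",     "years_after": 3},
]

cohort_size = len(later_credentials)
progressed = [c for c in later_credentials
              if c["field"] == PROGRAM_FIELD
              and c["years_after"] is not None and c["years_after"] <= 3]
changed_field = [c for c in later_credentials
                 if c["field"] not in (None, PROGRAM_FIELD)]

print(f"Same-field progression within 3 years: {len(progressed) / cohort_size:.0%}")
print(f"Credentials in unrelated fields (career change): {len(changed_field) / cohort_size:.0%}")
```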

Metric 5: Employer rehire rate

The clearest evidence that a program is producing workforce-ready graduates is whether the same employers come back for more.

If your hospital partner hired three of your graduates last year and is hiring three more this year, that's a signal the graduates are performing. If they hired three last year and zero this year, that's also a signal.

How to measure honestly:

  • Track the specific employers who hire your graduates each year
  • Calculate the percentage of last year's employers who returned to hire this year
  • Track the percentage of employers who increased their hiring of your graduates year over year
  • Note employers who explicitly stopped hiring after a bad experience

What kills this metric: counting any employer engagement as evidence, conflating "advisory board attendance" with "hired graduates," ignoring employers who stopped hiring without saying why.
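
A sketch of the year-over-year rehire calculation. The employer names and hire counts are hypothetical; the point is the three buckets — returned, increased, went quiet.

```python
# Sketch: employer rehire rate year over year, plus the employers who went quiet.
from collections import Counter

hires_last_year = Counter({"Regional Medical Center": 3, "Northside Clinic": 1, "Mercy Hospital West": 2})
hires_this_year = Counter({"Regional Medical Center": 3, "Mercy Hospital West": 4})

returned   = set(hires_last_year) & set(hires_this_year)
went_quiet = set(hires_last_year) - set(hires_this_year)
increased  = {e for e in returned if hires_this_year[e] > hires_last_year[e]}

rehire_rate = len(returned) / len(hires_last_year)

print(f"Rehire rate: {rehire_rate:.0%} of last year's employers hired again")
print(f"Increased hiring: {sorted(increased)}")
print(f"Stopped hiring (worth a follow-up call): {sorted(went_quiet)}")
```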

The metrics this framework deliberately excludes

The five metrics above are the ones that survive scrutiny. Several common CTE metrics are missing on purpose.

Completion rate

Completion rates measure whether students who started the program finished it. They don't measure whether finishing the program produced an outcome that matters.

Completion rates are also easy to game. Programs can lower entry standards, lower exit standards, or both. A program with a 90% completion rate that produces 30% in-field employment is worse than a program with a 60% completion rate that produces 80% in-field employment: out of 100 entrants, the first yields 27 graduates working in field, the second yields 48.

Use completion rates as input to other metrics, not as the headline.

Certification pass rate

Certification pass rates measure whether students passed the test. They don't measure whether the certification mattered for employment, whether it transferred to other contexts, or whether students retained the underlying capability.

Programs that teach to the test produce high pass rates and weak graduates. Programs that teach the underlying material may produce lower pass rates and stronger graduates.

If you're already collecting pass rates, keep collecting them. Just don't lead with them.

Student satisfaction

Satisfaction surveys measure whether students enjoyed the program. Enjoyment isn't an outcome. Programs that students enjoy can fail to prepare them for work. Programs students find difficult can prepare them well.

Use satisfaction data to identify operational problems — not to evaluate program impact.

Advisory board engagement

The number of advisory board members, meetings held, or events attended is an input metric. It tells you what the program is doing, not whether what it's doing is working.

Strong advisory engagement should produce results in the other metrics — graduates getting hired, employers returning to hire, credentials traveling. If it's not producing those results, the engagement isn't accomplishing its purpose.

How to deploy this framework

The framework above is designed for honest measurement. Honest measurement is harder than dishonest measurement. Three practical approaches:

Start with one cohort, not all of them

Trying to retrofit five new metrics across all your historical cohorts is impossible. Pick one recent cohort — students who completed two to three years ago — and measure them against the framework. The data will be incomplete. That's fine. You'll learn what's gettable and what isn't.

Use state longitudinal data systems

Most states have longitudinal data systems linking K-12, postsecondary, and workforce data. Most CTE program leaders have never accessed them. They're often free to use for legitimate program evaluation. They're the most honest data source available because they don't depend on student self-report.

If your state has one and you've never used it, that's the single biggest measurement upgrade available to you.
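
What the linkage looks like in practice varies by state, but the shape is usually a join between your cohort roster and a wage-record extract on a shared identifier. The sketch below is a purely hypothetical illustration of that shape — not any state's actual identifiers, access process, or file layout.

```python
# Sketch: joining a program cohort roster to a state wage-record extract.
# Everything here is hypothetical; adapt identifiers and fields to your state's system.

cohort_roster = [
    {"student_id": "S-001", "completed": True},
    {"student_id": "S-002", "completed": True},
    {"student_id": "S-003", "completed": False},
]

# Imagine this came back from the state system, keyed on the same identifier.
wage_extract = {
    "S-001": {"quarters_with_wages": 4, "annual_wage": 47_500, "naics": "6221"},
    "S-003": {"quarters_with_wages": 2, "annual_wage": 18_200, "naics": "7225"},
}

matched = {r["student_id"]: wage_extract.get(r["student_id"]) for r in cohort_roster}
match_rate = sum(record is not None for record in matched.values()) / len(cohort_roster)

print(f"Matched {match_rate:.0%} of the cohort to wage records")
for student_id, record in matched.items():
    print(student_id, record or "no wage record found")
```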

Report response rates with every metric

Any survey-based number should be reported with the response rate alongside it. "92% of respondents are working in field" is meaningless without "response rate was 23%." Reporting response rates is a small change that dramatically increases your credibility with administrators, accreditors, and skeptical observers.
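
One way to make this automatic is to never format a survey-based metric without its denominator. A tiny illustrative helper, using the same hypothetical numbers as the paragraph above:

```python
# Sketch: format every survey-based metric with its response rate attached.

def report_metric(name, numerator, respondents, cohort_size):
    """Return a metric string that carries its response rate and cohort size."""
    rate = numerator / respondents if respondents else 0.0
    response_rate = respondents / cohort_size if cohort_size else 0.0
    return (f"{name}: {rate:.0%} of respondents "
            f"(response rate {response_rate:.0%}, n={cohort_size})")

print(report_metric("Working in field at 24 months", numerator=22, respondents=24, cohort_size=104))
# -> Working in field at 24 months: 92% of respondents (response rate 23%, n=104)
```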

What this framework doesn't do

This framework doesn't tell you what to do with the data. That's a different post. Some programs will look at their honest numbers and discover they're producing better outcomes than they claimed. Some will discover the opposite.

Either result is useful. Programs that overperform their reported metrics can defend themselves more confidently. Programs that underperform can redesign before someone else discovers the gap.

What the framework does do: it gives you metrics that hold up to scrutiny. When an administrator, accreditor, or journalist starts asking hard questions about your outcomes, you want to be reporting numbers you trust. Five honest metrics beat fifteen inflated ones, every time the questioning gets serious.

What to do this semester

Three concrete moves to begin deploying this framework, in order:

1. Pick one cohort and identify what data you can get. Start with graduates from two to three years ago. List what you have access to: alumni contact information, state longitudinal data system permissions, employer relationships willing to confirm employment, NSC subscription. This tells you which of the five metrics are gettable now.

2. Calculate the in-field employment rate at 24 months for that cohort, with the response rate reported. This is metric 1. It's the most important number and the one most CTE programs aren't tracking properly. Get the number itself, the methodology you used to get it, and the response rate.

3. Compare your number against your prior reported outcomes. If they match, you've validated your reporting. If they don't, you've discovered a gap between what your program claims and what it produces. Either result is information you didn't have before.

Programs that adopt some version of this framework discover things their administrators don't know. Sometimes those things are positive. Sometimes negative. The point isn't to confirm what you already believe. The point is to know what's actually happening, so the program design can respond.

If your program has built an honest outcomes measurement practice that survives scrutiny, TechEd Analyst would like to learn from what you did. Reach out at hello@techedanalyst.com. The patterns from programs that have actually built this kind of measurement infrastructure are rare and worth surfacing.


TechEd Analyst publishes monthly posts for CTE and IT educators navigating workforce changes. Subscribe to get them in your inbox, or browse recent posts.
