Deeper Look at Gainful Employment Claims

Reader request to use GE visualizations against ED / AV claims


In response to yesterday’s post on Department of Education (ED) / Arnold Ventures (AV) coalition claims (through Stephanie Cellini in a congressional hearing), a reader asked how this compared to the Gainful Employment (GE) visualizations that I have shared.

I went ahead and clicked the link to watch [Stephanie’s] testimony. To me, another key moment is the statement that 1/3 of certificate programs by for-profit providers would fail versus only 1% of programs at community colleges.

That seems like a very different prediction from the type of graphs you have shared with subscribers, which show a much larger share of programs (I think focusing on degree programs) across all types of institutions would fail the GE test.

I am also curious how Dr. Cellini is working around the limitations of data availability to make that prediction. If her prediction is accurate, one could argue that even without better data, the proposed approach would weed out some of the worst offenders (and leave a large share of public higher education untouched).

As a reminder, my main point yesterday was disagreement with Cellini’s claim that the federal government has “excellent data” with which to measure the value of academic programs. Cellini’s specific claim from the hearing:

The proposed GE Rule is well-targeted to hold accountable the programs that the data show are most likely to leave students with heavy debt burdens and low earnings. Nearly one-third of for-profit certificate programs would fail GE measures compared to just one percent of programs in community colleges.

That claim makes it sound like the failure rate at for-profit programs is more than 30 times the rate at community colleges. The short answer is that Cellini’s claim is backed up by the 2022 Program Performance Data (PPD) release used in negotiated rulemaking and in the analysis behind the new regulations, but it ignores the data limitations and what is likely to happen in the future as we gather more data. For-profit undergraduate certificate programs are more likely to fail GE metrics, but not by a factor of 30 or more.

Tracing the Data

I’m fairly confident that Cellini’s claim is based on Table 3-9 from the Notice of Proposed Rulemaking (NPRM) document, on page 122 of 212. I have highlighted the two relevant rows (and I am ignoring that the data is for all public institutions, not just community colleges). Look at the far-right column for the referenced percentages.

As described yesterday, note the huge number of programs without valid data that in this analysis pass by exclusion (there is no data to fail them) but might be included in the future if GE is implemented. Fully 95% of public-institution undergraduate certificate programs lack valid data (fifth column from the right), compared with just under 50% of for-profit programs. That is a huge difference. Also note that if you limit the failure rate to programs with valid data, 66% of for-profit programs would fail versus 21% of public programs: on the public side, that is (184 + 6 + 1) failing programs out of (184 + 6 + 1 + 729) programs with data, or roughly 21%.
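To make the arithmetic concrete, here is a minimal sketch of how the headline “1% versus nearly one-third” figures fall out of the conditional failure rate multiplied by data coverage. The public counts are the ones quoted above from Table 3-9; the for-profit side uses the quoted rates (66% failing among programs with data, roughly half of programs lacking data) because the raw counts are not reproduced here.

```python
# Sketch of the failure-rate arithmetic behind the "1% vs. nearly one-third"
# headline, using the counts quoted above from NPRM Table 3-9.

# Public undergraduate certificate programs with valid data:
public_fail = 184 + 6 + 1          # failing programs (counts quoted above)
public_pass = 729                  # passing programs with valid data

public_fail_rate = public_fail / (public_fail + public_pass)
print(f"Public failure rate, programs with data: {public_fail_rate:.0%}")  # ~21%

# 95% of public programs lack valid data and pass by exclusion,
# so only 5% of programs are even exposed to the metrics:
public_headline = 0.05 * public_fail_rate
print(f"Public failure rate, all programs: {public_headline:.1%}")         # ~1%

# For-profit side (rates quoted in the post, not raw counts): ~66% of
# programs with data fail, and roughly half of programs have valid data:
fp_fail_rate = 0.66
fp_headline = 0.50 * fp_fail_rate
print(f"For-profit failure rate, all programs: {fp_headline:.0%}")         # ~33%

# The headline ratio versus the ratio among programs with data:
print(f"Headline ratio: {fp_headline / public_headline:.0f}x")             # ~30x
print(f"Ratio among programs with data: {fp_fail_rate / public_fail_rate:.1f}x")  # ~3x
```

In other words, the 30-to-1 contrast is driven largely by the difference in data coverage between sectors, not only by differences in measured program outcomes.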
