
NJ Districts Are Running Out of Money. Case Study: Newark
February 19, 2026
The Adequacy Illusion: When “Costing Out” Replaces Outcomes
Evan Scott is a lifelong New Jersey resident and retired military veteran. He holds a bachelor’s degree in education and was elected to his hometown’s Board of Education in 1988. Now living in Evesham Township, NJ, he continues to advocate for fair and transparent school funding.
There is one question at the center of New Jersey’s school funding debate—a question so simple that it cuts through every spreadsheet, every adequacy study, and every political talking point:
Question: How do we know when we’re spending enough to provide an adequate education for NJ students?
Answer: We have no idea because the method we use for determining “adequate” school funding is divorced from what actually drives student learning gains.
That’s not rhetoric; it’s embedded in how these systems are built. “Professional Judgment Panels” (PJPs) convene to describe “adequate” staffing ratios, program supports, specialists, and so on, which are then costed out and converted into a funding formula. The result is a consensus judgment that tells policymakers what a panel believes schools should have, not what actually drives learning gains.
The number PJPs produce is not derived from causal evidence. It is not based on validated relationships like: +$X in spending → +Y improvement in student achievement.
This is the structural flaw: a PJP tells you what a panel thinks you should buy; it cannot tell you when you’ve bought “enough” to reliably hit achievement targets. It is a normative budgeting exercise: a structured expert judgment about inputs. It can produce a number, but it cannot prove that number is sufficient to deliver the outcomes you care about.
This is the entire game in a nutshell.
We have locked ourselves into a model that compels higher spending year after year without ever defining a measurable endpoint. There is no stop rule. No outcome trigger. No performance threshold that says: “We’ve reached adequacy.”
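To make the missing “stop rule” concrete, here is a minimal sketch of what an outcome trigger could look like. The function name, the 80% proficiency target, and the three-year sustainment window are illustrative assumptions, not part of any actual New Jersey formula:

```python
# Hypothetical sketch of an outcome-based "stop rule" for adequacy.
# The 0.80 target and 3-year window are illustrative, not NJ policy.

def adequacy_reached(proficiency_rate: float,
                     history: list[float],
                     target_rate: float = 0.80,
                     sustained_years: int = 3) -> bool:
    """Declare adequacy only when proficiency has met the target
    for `sustained_years` consecutive years, ending with this year."""
    recent = (list(history) + [proficiency_rate])[-sustained_years:]
    return (len(recent) == sustained_years
            and all(r >= target_rate for r in recent))

# A district at 82% this year, but 78% two years ago: no declaration yet.
print(adequacy_reached(0.82, history=[0.78, 0.81]))  # False
# Three consecutive years at or above 80%: the endpoint is reached.
print(adequacy_reached(0.82, history=[0.81, 0.83]))  # True
```

The point of the sketch is simply that a stop rule is definable: it takes an outcome measure and a threshold, not a list of inputs.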
Meanwhile, academic outcomes tell a more sobering story.
National and state benchmarks show that achievement gains plateaued before the pandemic and declined afterward, particularly among lower-performing students. Even as spending reached historic highs, reading performance fell and math recovery lagged.
If money alone were the determining factor, outcomes should have risen in tandem with funding. They did not.
That disconnect is precisely what economists like Eric Hanushek have warned about for decades: inputs—spending, staffing, class size—do not reliably translate into achievement gains unless they are tied to instructional effectiveness and incentives that reward success.
In other words: how money is used matters more than how much is spent.
There is another data point that complicates the adequacy narrative.
Some charter networks appear to produce higher growth with similar or lower spending, which suggests instructional model and execution can dominate funding levels at the margin.
That does not mean funding is irrelevant. It means funding is not the decisive variable once basic resource thresholds are met. Instructional coherence, curriculum alignment, leadership authority, teacher coaching, and time-on-task often explain performance differences more than raw expenditure levels.
If two systems spend similar dollars but produce different academic growth, the policy question shifts from “How much are we spending?” to “What are we doing with what we spend?”
Yet costing-out models invert that logic. They begin with expenditures and assume outcomes will follow.
It is a faith-based funding model.
There is a parallel shift happening on the accountability side that mirrors the funding problem: the language of success has moved from achievement to growth.
Achievement measures whether students meet a fixed academic standard—whether they can read, write, and perform math on grade level. Growth measures how much progress students make relative to where they started. Both metrics have value, but they answer fundamentally different questions.
A system can produce strong growth while students remain academically deficient. A child can make a year’s worth of progress and still be below grade level. When accountability conversations center on growth alone, improvement can be declared even while proficiency gaps remain wide.
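The arithmetic behind that gap is simple enough to spell out. In this illustrative sketch (the grade levels and function name are hypothetical, not real student data), a student can clear the growth bar while still failing the proficiency bar:

```python
# Hypothetical sketch: strong growth vs. grade-level proficiency.
# All numbers are illustrative, not real student data.

def report(grade: int, reading_level_start: float, annual_growth: float):
    """Return (met_growth, proficient) for one student-year."""
    reading_level_end = reading_level_start + annual_growth
    met_growth = annual_growth >= 1.0          # "a year's worth of progress"
    proficient = reading_level_end >= grade    # reading on grade level
    return met_growth, proficient

# A 5th grader who starts two years behind and makes a full year of growth:
met_growth, proficient = report(grade=5, reading_level_start=3.0,
                                annual_growth=1.0)
print(met_growth)  # True:  the growth metric calls this success
print(proficient)  # False: the student still reads at a 4th-grade level
```

A growth-only scorecard reports the first value; a proficiency standard reports the second. Both are true at once, which is exactly why the choice of metric changes the definition of success.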
This matters because it changes the definition of success. If adequacy is defined by inputs rather than outcomes, and performance is defined by growth rather than proficiency, then the system operates without a fixed endpoint. Spending has no stop rule, and achievement has no finish line.
The deeper risk is institutional.
If adequacy is defined by inputs rather than outcomes, it becomes perpetually expandable. Every new mandate, service layer, or staffing recommendation increases the calculated cost of “meeting standards” without requiring proof that prior spending levels failed because they were insufficient rather than ineffective.
And if objective outcome gates (such as graduation exams, proficiency thresholds, and stable longitudinal testing) are weakened or removed, the system loses the very metrics needed to evaluate whether adequacy has been achieved.
You cannot declare adequacy without a measurable endpoint.
You cannot measure effectiveness if the yardstick keeps changing.
And you cannot sustain public trust if taxpayers are asked to fund ever-rising costs without clear evidence of academic return.
This is not an argument against funding education. Resources matter, especially where genuine deprivation exists.
But funding models must be tethered to outcomes, not assumptions.
The real adequacy question is not:
“How much should we spend?”
It is:
“What level of student achievement defines success—and what funding level demonstrably produces it?”
Until that linkage exists, “adequacy” remains a moving target.
And taxpayers—and students—are left financing a system where spending is measurable, but success is not.


