
The Adequacy Illusion — When “Costing Out” Replaces Outcomes
February 25, 2026
Evan Scott is a lifelong New Jersey resident and a retired military veteran. He holds a bachelor’s degree in education and was elected to his hometown’s Board of Education in 1988. Now living in Evesham Township, NJ, he continues to advocate for fair and transparent school funding.
There is one question at the center of New Jersey’s school funding debate—a question so simple that it cuts through every spreadsheet, every adequacy study, and every political talking point:
Question: How do we know when we’re spending enough to provide an adequate education for NJ students?
Answer: We have no idea because the method we use for determining “adequate” school funding is divorced from what actually drives student learning gains.
That’s not rhetoric; it’s embedded in how these systems are built. “Professional Judgment Panels” (PJPs) convene to describe “adequate” staffing ratios, program supports, specialists, and so on, which are then costed out and converted into a funding formula. The result is a consensus judgment that tells policymakers what a panel believes schools should have, not what actually drives learning gains.
The number PJPs produce is not derived from causal evidence. It is not based on validated relationships like: +$X in spending → +Y improvement in student achievement.
This is the structural flaw: a PJP tells you what a panel thinks you should buy. It cannot tell you when you’ve bought “enough” to reliably hit achievement targets. It is a normative budgeting exercise: a structured expert judgment about inputs. It can produce a number, but it cannot prove that number is sufficient to deliver the outcomes you care about.
This is the entire game in a nutshell.
We have locked ourselves into a model that compels higher spending year after year without ever defining a measurable endpoint. There is no stop rule. No outcome trigger. No performance threshold that says: “We’ve reached adequacy.”
Meanwhile, academic outcomes tell a more sobering story.
National and state benchmarks show that achievement gains plateaued before the pandemic and declined afterward, particularly among lower-performing students. Even as spending reached historic highs, reading performance fell and math recovery lagged.
If money alone were the determining factor, outcomes should have risen in tandem with funding. They did not.
That disconnect is precisely what economists like Eric Hanushek have warned about for decades: inputs—spending, staffing, class size—do not reliably translate into achievement gains unless they are tied to instructional effectiveness and incentives that reward success.
In other words: how money is used matters more than how much is spent.
There is another data point that complicates the adequacy narrative.
Some charter networks appear to produce higher growth with similar or lower spending, which suggests instructional model and execution can dominate funding levels at the margin.
That does not mean funding is irrelevant. It means funding is not the decisive variable once basic resource thresholds are met. Instructional coherence, curriculum alignment, leadership authority, teacher coaching, and time-on-task often explain performance differences more than raw expenditure levels.
If two systems spend similar dollars but produce different academic growth, the policy question shifts from “How much are we spending?” to “What are we doing with what we spend?”
Yet costing-out models invert that logic. They begin with expenditures and assume outcomes will follow.
It is a faith-based funding model.
There is a parallel shift happening on the accountability side that mirrors the funding problem: the language of success has moved from achievement to growth.
Achievement measures whether students meet a fixed academic standard—whether they can read, write, and perform math on grade level. Growth measures how much progress students make relative to where they started. Both metrics have value, but they answer fundamentally different questions.
A system can produce strong growth while students remain academically deficient. A child can make a year’s worth of progress and still be below grade level. When accountability conversations center on growth alone, improvement can be declared even while proficiency gaps remain wide.
This matters because it changes the definition of success. If adequacy is defined by inputs rather than outcomes, and performance is defined by growth rather than proficiency, then the system operates without a fixed endpoint. Spending has no stop rule, and achievement has no finish line.
The deeper risk is institutional.
If adequacy is defined by inputs rather than outcomes, it becomes perpetually expandable. Every new mandate, service layer, or staffing recommendation increases the calculated cost of “meeting standards” without requiring proof that prior spending levels failed because they were insufficient rather than ineffective.
And if objective outcome gates (such as graduation exams, proficiency thresholds, and stable longitudinal testing) are weakened or removed, the system loses the very metrics needed to evaluate whether adequacy has been achieved.
You cannot declare adequacy without a measurable endpoint.
You cannot measure effectiveness if the yardstick keeps changing.
And you cannot sustain public trust if taxpayers are asked to fund ever-rising costs without clear evidence of academic return.
This is not an argument against funding education. Resources matter, especially where genuine deprivation exists.
But funding models must be tethered to outcomes, not assumptions.
The real adequacy question is not:
“How much should we spend?”
It is:
“What level of student achievement defines success—and what funding level demonstrably produces it?”
Until that linkage exists, “adequacy” remains a moving target.
And taxpayers—and students—are left financing a system where spending is measurable, but success is not.
3 Comments
I love that last line. New Jersey schools don’t measure success. Perfect summary.
Evan Scott may know a lot about education and school funding, but his recently expressed views seem like they’re frozen in amber—once true but now long outdated.
His focus on Professional Judgment Panels makes them seem like an active, ongoing part of our school funding system, when in fact their role dates back roughly 20 years, to when the School Funding Reform Act (SFRA), our school funding law, was being formulated; they have played no role since. Indeed, the State has totally failed to meet an explicit constitutional requirement that it periodically evaluate whether SFRA is operating at an optimal level for every district, by means of PJPs or otherwise.
Scott also invokes the economist Eric Hanushek, the favorite expert witness of states defending unequal school funding laws in the 1970s and 1980s based on his opinion that money really is largely unrelated to educational quality. However, in more recent years Hanushek has moderated his opinion to the effect that money well used does matter. Who can argue with that proposition? As far as I am aware, no funding equalization advocate has ever argued that what the law requires is spending more money without regard to its use and effectiveness.
Sir, your response to my recent Op-Ed suggests that my discussion of Professional Judgment Panels (PJPs) reflects an outdated view of New Jersey’s school funding system. In fact, the opposite point was being made.
Yes, PJPs were used roughly two decades ago when the School Funding Reform Act (SFRA) was designed. That is precisely the issue. The adequacy cost assumptions that underpin the formula today trace back to those original resource models, and the state has not undertaken the type of comprehensive recalibration the law itself envisioned.
The criticism inadvertently reinforces the central concern: if PJPs are no longer part of an ongoing process, then the funding formula is operating on assumptions that have not been systematically revisited in many years. The Educational Adequacy Reports themselves acknowledge this gap and discuss the possibility of reconvening panels or pursuing other review methods to reassess costs. In other words, the state recognizes that the adequacy determination has grown stale.
This raises a straightforward policy question. If the constitutional requirement is to ensure that funding remains adequate over time, how exactly are we measuring that adequacy today?
My argument was not that PJPs are currently convening every year. It is that New Jersey’s funding framework remains anchored to an input-based costing model developed decades ago, while academic performance data show that outcomes have plateaued or declined in key areas. That disconnect deserves examination regardless of one’s position on funding levels.
The response also raises economist Eric Hanushek. It is correct that Hanushek has long argued that simply increasing spending does not guarantee better outcomes, and that how resources are used matters greatly. That position has been consistent across his work for decades. In fact, the observation that “money well used matters” is exactly the point.
No serious policy analyst argues that resources are irrelevant. Schools require teachers, materials, and support services. The real question is whether funding systems are designed to reward educational effectiveness or whether they primarily measure the cost of inputs.
New Jersey’s public schools include many exceptional educators and strong districts. But the structure of the funding debate often centers on how much we spend rather than how effectively that spending translates into student learning.
That is the conversation we should be having.
Adequacy should not simply be a number produced by a formula. It should be demonstrably connected to student outcomes.
Until that connection is clearly defined and regularly measured, the adequacy question remains unresolved.