An Energy Research & Social Science paper crossed my screen recently that put structure around something visible to anyone who has compared nuclear forecasts with build rates. Nuclear power has been projected to grow faster, cheaper and more broadly than it actually has, not once or twice, but across decades, institutions and scenario families. The authors call this the nuclear energy paradox. That is a useful phrase because it does not point to one bad forecast. It points to a recurring gap between the nuclear future that was imagined and the nuclear fleet that was actually built.

The paper’s argument is not that nuclear power has no value. Existing reactors produce low-carbon electricity and remain important in several grids. The question is different. Why have so many projections assumed large future nuclear expansion when actual global nuclear capacity has been roughly stagnant for decades and nuclear’s share of global electricity has fallen from 17.5% in 1996 to below 10% in 2023? Global capacity has hovered around 400 GW, while many projections from agencies and models pointed to futures with two, three or four times that amount. The COP28 nuclear tripling pledge implies roughly 1,200 GW of global nuclear capacity by 2050: about 800 GW of additional capacity in a quarter century, or roughly 30 GW of new builds every single year, before accounting for retirements from the existing aging fleet.

The study’s useful contribution is its treatment of projections as more than numbers. It argues that energy futures are shaped by what the authors call nuclear imaginaries. An imaginary is a shared story about what a technology is expected to become. In nuclear power, one long-running story was the plutonium economy, where fast breeder reactors would make nuclear fuel almost unlimited and allow nuclear power to dominate the long-term energy system. A newer story is the SMR economy, where small modular reactors would be factory-built, standardized, cheaper, faster and less risky than large conventional reactors. Both stories have influenced expectations without being remotely proven at scale.

The paper’s strongest move is the triangulation: repeated projection failure, inherited model structures and the policy use of scenario outputs. A scenario is not only a calculation. It is a calculation wrapped around assumptions about cost, scale, institutions, public acceptance, supply chains and time. Once that scenario is cited by an agency, an industry association or a government, those assumptions can become much less visible than the headline result.

Models do not begin with a blank page. They begin with technology menus, cost assumptions, constraints, build limits, fuel assumptions, learning rates, regional structures and views about how systems change. A model can make nuclear attractive by assuming lower capital costs, shorter construction times, learning-by-doing, high capacity factors and standardized fleet delivery. Those assumptions may be plausible in a scenario, but they are not evidence that the scenario is likely. They are inputs. When the model then produces large nuclear deployment, the result is partly the assumption returning as an output.
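A toy screening calculation makes that circularity concrete. The sketch below is not any model’s actual formulation, and every cost figure is an illustrative assumption; the point is that the ranking flips on the assumed capital cost alone.

```python
# Toy LCOE screening: how an assumed capital-cost decline makes nuclear
# "win" inside a cost-minimizing model. All figures are illustrative
# assumptions, not data from any published model.

def lcoe(capex_per_kw, fixed_om, cf, fuel_vom, rate=0.07, life=40):
    """Levelized cost of electricity in $/MWh."""
    crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)  # capital recovery factor
    mwh_per_kw_year = 8.76 * cf  # 8,760 hours/year, expressed in MWh per kW
    return (capex_per_kw * crf + fixed_om) / mwh_per_kw_year + fuel_vom

nuclear_observed = lcoe(capex_per_kw=9000, fixed_om=120, cf=0.90, fuel_vom=12)
nuclear_assumed  = lcoe(capex_per_kw=3500, fixed_om=120, cf=0.93, fuel_vom=12)
firmed_solar     = lcoe(capex_per_kw=2500, fixed_om=25,  cf=0.35, fuel_vom=0, life=30)

print(f"nuclear, observed build costs:  {nuclear_observed:6.1f} $/MWh")  # ~113
print(f"nuclear, assumed cost decline:  {nuclear_assumed:6.1f} $/MWh")   # ~59
print(f"solar-plus-storage proxy:       {firmed_solar:6.1f} $/MWh")      # ~74
```

With observed Western build costs, nuclear screens out; with the assumed decline, it screens in. Nothing in the output tells you which input was evidence and which was hope.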
This is where the paper’s discussion of MESSAGE and GCAM becomes important. MESSAGE is an energy-system optimization model family associated with the International Institute for Applied Systems Analysis. In simple terms, it asks what mix of technologies can meet future energy demand under constraints at the lowest modeled cost. GCAM, the Global Change Analysis Model, represents interactions among energy, economy, land, water, emissions and climate. Neither model is a nuclear model. Both are influential integrated assessment models used in work assessed by the Intergovernmental Panel on Climate Change.

The paper’s concern is that some of the highest nuclear futures in the IPCC’s 1.5°C scenario set come from MESSAGE and GCAM outputs. That does not prove bad faith. It does not prove the modelers are trying to promote nuclear power. It does show that these model families deserve scrutiny when they produce high-nuclear pathways. Both have roots reaching back to periods when global energy modeling often treated nuclear power, including breeder reactors, as a long-term backstop technology. The question is not whether today’s models are frozen in 1980. They are not. The question is whether older structures, categories and expectations still influence the way future technologies are represented.

It is important to be fair here. These models have improved. GCAM appears to have moved away from some of the very low nuclear capital cost assumptions that appeared in older shared socioeconomic pathway work. MESSAGEix-GLOBIOM is more transparent and modular than earlier generations, and its assumptions have been updated. REMIND-MAgPIE, from the Potsdam Institute for Climate Impact Research, now includes endogenous learning for technologies such as solar, wind, batteries and electrolysis. That is a material improvement over older approaches that treated much of technological change as an external assumption. Serious people are improving serious tools in response to real criticisms.

I assessed REMIND’s hydrogen treatment a couple of years ago and found serious problems. In that work, hydrogen appeared too cheap in delivered form, too easily distributed, and too available for end uses where direct electrification should dominate. REMIND and related work have improved since then. Hydrogen infrastructure and distribution costs are now more visible in the documentation, and recent PIK-linked sector-coupling work is clearer that direct electrification dominates final energy while hydrogen plays a smaller role. That is progress. It is not a complete cure. The model still has to be tested against delivered hydrogen costs, infrastructure buildout, utilization risk and the thermodynamic advantage of direct electricity.

Improvement does not eliminate structural blind spots. The central asymmetry is that nuclear and hydrogen become attractive when future cost declines are assumed and infrastructure delivery is treated as frictionless, while renewables, storage and grid flexibility become less attractive when the model cannot see the operational flexibility stack. Technologies that fit into clean centralized categories are easier to represent than technologies whose value comes from manufacturing scale, granular deployment, local grid conditions and operational intelligence.

Nuclear is relatively easy to make attractive in models because it can be represented as a clean, centralized future option. A nuclear plant can be entered as a generator with a capital cost, operating cost, fuel cost, lifetime, capacity factor and emissions rate. If the future capital cost is assumed to fall enough, and the build rate is not constrained by actual supply chains, financing risk, construction delay or institutional capacity, the model can allocate a lot of nuclear capacity. The hard parts of nuclear are not the physics. They are delivery, finance, regulation, public consent, supply chains, project management and time.

Hydrogen has a similar problem. It exists today as an industrial feedstock, mainly for refining, ammonia and chemicals, and low-carbon hydrogen will have roles in replacing existing fossil hydrogen and selected industrial processes. But it becomes too attractive when models smooth away the costs of compression, storage, pipeline conversion or new pipelines, trucking, liquefaction, dispensing, safety systems, low utilization and end-use efficiency.
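A deliberately simple delivered-cost sketch shows how much of the chain can vanish. Every number below is an illustrative assumption; the point is the structure of the sum, not the values.

```python
# Delivered hydrogen cost sketch: production is only the first term.
# All component costs are illustrative assumptions, in $/kg.

production   = 4.0   # electrolytic H2 at the plant gate (assumed)
compression  = 0.7   # compression or liquefaction for transport (assumed)
storage      = 0.6   # buffer storage (assumed)
distribution = 1.2   # pipeline conversion or trucking (assumed)
dispensing   = 1.0   # station capital and operations at full use (assumed)
utilization  = 0.4   # stations often run well below nameplate (assumed)

# Fixed station costs are paid per kg of capacity, so low utilization inflates them.
delivered = (production + compression + storage + distribution
             + dispensing / utilization)
print(f"plant gate: ${production:.2f}/kg   delivered: ${delivered:.2f}/kg")  # 4.00 vs 9.00

# End-use efficiency: the same delivered kilogram buys very different heat.
lhv = 33.3                        # kWh per kg of H2, lower heating value
h2_heat = delivered / lhv / 0.90  # $/kWh of heat via a 90%-efficient boiler
hp_heat = 0.10 / 3.0              # $/kWh via a COP-3 heat pump at $0.10/kWh (assumed)
print(f"heat via H2 boiler: ${h2_heat:.3f}/kWh   via heat pump: ${hp_heat:.3f}/kWh")
```

A model that carries only the plant-gate number into buildings or road transport is booking $4 hydrogen into uses that would actually face something over $9, competing against heat that electricity can deliver at roughly a tenth of the cost per kWh.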
The inverse problem applies to renewables, storage, grid-enhancing technologies and demand flexibility. Solar and wind are not just generators. They are parts of a system that includes batteries, transmission, distribution upgrades, forecasting, market design, curtailment, flexible loads, interconnection reform, inverter controls and operational practices. A nuclear plant is one large asset. A high-renewables grid is a fabric of many smaller assets, controls and decisions. Models tend to see the large asset more cleanly than the fabric.

That blindness matters because solar, wind and batteries have repeatedly beaten cost and deployment expectations, while the grid practices that enable them are often treated as generic integration costs. Manufacturing learning, Chinese industrial scale, auction discipline and modular deployment moved faster than many cost curves assumed. A simplified model may add integration costs to variable renewable energy as its share rises. That is not wrong in concept. Integration is real. But the size and timing of those costs depend on transmission, storage, geographic diversity, flexible demand, market rules, inverter services, grid operations, curtailment economics and distribution constraints. If those options are represented crudely, renewables can look more expensive or constrained than they are in practice.

Grid-enhancing technologies (GETs) expose the difference between the grid as a modeled abstraction and the grid as operated infrastructure. Dynamic line ratings use real weather and conductor data instead of conservative static assumptions and can reveal 10%, 20% or 30% more capacity in the right conditions. Advanced conductors and reconductoring can increase capacity in existing corridors, sometimes roughly doubling transfer capacity depending on towers, voltage and thermal limits. Power-flow controls, batteries and flexible demand can turn specific constraints into manageable operating problems. These are not substitutes for transmission, but they show that the existing grid has latent capacity that models often cannot see.
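A stylized heat-balance calculation shows where that latent capacity comes from. This is a toy version of the physics, not IEEE 738, and the constants are illustrative assumptions; real ratings add radiation, solar gain and conductor geometry.

```python
# Toy dynamic line rating: ampacity from a steady-state heat balance,
# I^2 * R = h * (T_conductor_max - T_ambient). Illustrative constants only.
import math

def ampacity(ambient_c, wind_ms, t_max_c=75.0, r_ohm_per_m=8.7e-5,
             h0=2.5, v0=0.61):
    # Effective cooling (W per metre of conductor per kelvin) rises with wind.
    h = h0 * (wind_ms / v0) ** 0.6
    return math.sqrt(h * (t_max_c - ambient_c) / r_ohm_per_m)

static  = ampacity(ambient_c=40.0, wind_ms=0.61)  # conservative year-round assumption
dynamic = ampacity(ambient_c=35.0, wind_ms=1.2)   # measured conditions on a mild day

print(f"static rating:  {static:4.0f} A")                                   # ~1000 A
print(f"dynamic rating: {dynamic:4.0f} A ({100 * (dynamic / static - 1):+.0f}%)")  # ~+31%
```

Roughly thirty percent more headroom from nothing but measurement, on the same wire. A model that carries the static number as a hard constraint cannot see it.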
That is a problem for integrated assessment models. REMIND mostly represents grids through cost markups, storage and flexibility assumptions, and broad energy-system relationships. MESSAGE includes transmission and distribution infrastructure costs, losses and interregional exchange, but it still works at large regional scale with stylized temporal and spatial detail. GCAM has regional electricity markets and load-duration treatment, especially in GCAM-USA, but it does not model distribution feeders, line ratings, substations, interconnection queues or topology. All three can represent more electricity, more transmission, more storage or higher integration costs. None natively represents the operational grid as utilities experience it.

This is why the modeling discussion has policy implications. If models undersee GETs and distributed flexibility, they overstate the need for centralized firm generation. If they smooth hydrogen delivery, they overstate hydrogen’s role in buildings, road transport and general industrial heat. If they assume nuclear cost declines without construction evidence, they overstate nuclear’s feasible contribution by 2050. If they lag solar, wind and battery cost declines, they understate the fastest-moving parts of the transition. Those biases compound downstream: hydrogen corridors look more strategic than heat pumps, grid upgrades and industrial electrification; SMR demonstrations look more urgent than reconductoring, interconnection reform and storage procurement.

Models are useful. They organize assumptions, test sensitivities, compare pathways and expose tradeoffs. But their outputs are conditional statements, not declarations of necessity. A high-nuclear, high-hydrogen or renewables-constrained pathway shows what happens inside that model under those assumptions. It does not prove that nuclear must triple, that hydrogen distribution will be cheap and easy, or that real grids cannot use more renewables.

Better scenario work should respond to evidence. Models should include empirical forecast-error checks, faster updates from market data, feasibility diagnostics for high-nuclear and high-hydrogen pathways, and explicit sensitivities for GETs and distribution flexibility. Integrated assessment models do not need to become utility planning models, but when their conclusions depend on grid integration limits, they should be checked against power-system and distribution tools that represent dispatch, congestion, curtailment and local constraints.

Scenario reporting should also become clearer. When an ensemble shows high nuclear, high hydrogen or high bioenergy, users should know which model families drive those tails and which assumptions are responsible. Median values can hide the assumptions doing the work. Policy makers should see the conditional logic, not just the pathway chart.

The useful conclusion from the nuclear imaginaries paper is not that all models are broken or that all nuclear scenarios are invalid. It is that energy futures are imagined, structured, parameterized, modeled, published, cited and then turned into policy stories. At each step, assumptions can become less visible. By the time a scenario reaches a ministerial slide deck or an industry report, the conditional nature of the result can disappear. What began as “under these assumptions” becomes “science says.”

The right response is not to discard models, but to read them with more discipline: ask which assumptions are doing the work, which model families produce the high tails, whether delivered costs include the full chain, whether grid constraints are real or stylized, and whether deployment rates match history. The nuclear imaginaries paper is useful because it gives us sharper language for a broader issue. Energy modeling is not only about calculation. It is also about imagination disciplined by evidence. When imagination runs ahead of delivery for nuclear and hydrogen, caution is warranted. When evidence runs ahead of imagination for renewables, storage and grid flexibility, models need to catch up.