DSICE: Dynamic Stochastic Integration of Climate and Economy

I started working on climate change policy modeling in 2008, and it has been a major focus of my efforts since then. In 2010, Yongyang Cai, Thomas Lontzek, and I created the DSICE model, extending Nordhaus’ DICE to include productivity shocks as well as stochastic elements of the climate system. While we had earlier published some applications of DSICE, the most complete exposition and application appeared in the Journal of Political Economy in December 2019. I must first clarify a detail. As the paper says, I was a coauthor in all substantive aspects, even after I removed my name as an official author. This was not due to any dispute with my coauthors or any dissatisfaction with the paper as published. JPE made it clear that the presence of my name as an author reduced the chances of the paper being accepted. I wanted the paper to appear in JPE and to help my coauthors’ careers progress. Therefore, I removed my name.

The economic question explored was “What is the social cost of carbon, and how does it depend on parameter assumptions?” Even though we examined a wide range of parameter specifications for Epstein-Zin preferences and the stochastic productivity process advocated by the macroeconomics literature, the range for the current social cost of carbon (also, the optimal carbon tax from a world policy perspective) was $40-$100 per ton of carbon. This range includes the results of other models but extends further at the upper end because we include economic uncertainty. The key intuition is that the loss function is convex, so increasing the variance of future temperatures increases the social cost of carbon.
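The convexity intuition is just Jensen’s inequality, and a few lines of simulation make it concrete. The quadratic damage function and all numbers below are purely illustrative (they are not DSICE’s calibration): holding mean temperature fixed, expected damages rise with the variance of temperature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative convex damage function (NOT DSICE's specification):
# damages grow quadratically with the temperature anomaly T.
def damages(T):
    return 0.01 * T ** 2

mean_T = 3.0
expected_damages = []
for sigma in (0.0, 0.5, 1.0):          # increasing temperature uncertainty
    T = rng.normal(mean_T, sigma, 1_000_000)
    expected_damages.append(damages(T).mean())

# By Jensen's inequality, E[damages(T)] >= damages(E[T]), and the gap
# grows with the variance of T even though the mean temperature is fixed.
print(expected_damages)
```

For this quadratic case the effect is exact: expected damages equal 0.01(mean² + variance), so each increase in the temperature variance passes one-for-one into expected damages.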

We also analyzed the impact of stochastic tipping processes, such as glacier melting leading to rising sea levels. Damages from tipping processes differ from damages related to business cycle fluctuations because, for example, the melting of glaciers is irreversible from the perspective of economic planning, and those damages are only moderately correlated with consumption. Therefore, the stochastic asset pricing kernel that DSICE implicitly computes will discount tipping point damages at a lower rate, magnifying their contribution to the SCC. More generally, we show that there is no single discount rate for climate change damages and that consumption CAPM considerations affect the SCC.
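The asset-pricing mechanism can be illustrated with a toy stochastic discount factor. The CRRA kernel and every number below are hypothetical, chosen only to expose the logic (DSICE uses Epstein-Zin preferences, not this kernel): a damage that co-moves with consumption is discounted more heavily than one that is independent of consumption, so weakly correlated tipping damages receive a higher present value per expected dollar of damage.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, gamma = 0.97, 5.0                 # hypothetical preference parameters

# One-period consumption growth scenarios and a CRRA pricing kernel
g = rng.normal(0.02, 0.05, 1_000_000)
m = beta * np.exp(-gamma * g)           # m = beta * (c'/c)^(-gamma)

# A damage that co-moves with consumption (like business-cycle damages) ...
D_cyclical = np.exp(g)
# ... versus a damage independent of consumption (like a tipping event)
D_tipping = np.exp(rng.normal(0.02, 0.05, 1_000_000))

# Present value per unit of expected damage: an effective discount factor
pv_cyclical = (m * D_cyclical).mean() / D_cyclical.mean()
pv_tipping = (m * D_tipping).mean() / D_tipping.mean()
# pv_tipping > pv_cyclical: damages uncorrelated with consumption are
# discounted at a lower rate, so they contribute more to the SCC.
print(pv_cyclical, pv_tipping)
```

The gap between the two effective discount factors is exactly the consumption-CAPM covariance term: E[mD] = E[m]E[D] + Cov(m, D), and the covariance is negative for the cyclical damage.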

Our analysis is a major advance in integrated assessment modeling. We used the full, five-dimensional climate model developed by Nordhaus, whereas many authors use far simpler climate models. Some assume that CO2 emissions immediately heat the atmosphere, ignoring the heating process in the atmosphere and the presence of the ocean as a heat sink. Climate scientists can use the simplified approach because they think in terms of millennia; economists cannot ignore events at annual, or even quarterly, frequencies. We solve the dynamic programming model with one-year time periods and have checked that the results are unchanged by shortening the time period. A few others have added economic risk to their models, but they assume far less variance than standard macroeconomic estimates. Some have included tipping point phenomena in their models, but with less realistic specifications.

Twenty-five years ago, I wrote in my book that if meteorologists used the same approach to research as economists, “they would ignore complex models … and instead study evaporation, or convection, or solar heating, or the effects of the earth’s rotation. Both the weather and the economy are phenomena greater than the sum of their parts, and any analysis that does not recognize that is inviting failure.” Our DSICE analysis shows that we can now solve models with realistic economic shocks, realistic specifications for tipping points, and the full Nordhaus climate model. Furthermore, it shows that this kind of multidimensional modeling can be done in many areas of economics.

This paper goes back several years. The code was developed by early 2012, applied to a simpler specification, and deployed on a small supercomputer. Thomas Lontzek presented the first version at the 2012 Conference on Climate and the Economy organized by the Institute for International Economic Studies. Yongyang Cai presented this paper at the conference “Developing the Next Generation of Economic Models of Climate Change” at the University of Minnesota in September 2014. Earlier versions include Hoover economics working paper 18113 (2017) (https://www.hoover.org/research/social-cost-carbon-economic-and-climate-risk), arXiv:1504.06909 (2015) (https://arxiv.org/abs/1504.06909), NBER working paper 18704 (“The social cost of stochastic and irreversible climate change”), “DSICE: A dynamic stochastic integrated model of climate and economy” (2012) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1992674), and “Tipping points in a dynamic stochastic IAM” (2012) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1992660).

This paper introduces two features that help document the validity of our computational results. As many know, I do not trust anyone’s computational results, even my own. My lectures frequently use the phrase “Trust, but verify,” taken from the Russian “Doveryáy, no proveryáy.” The JPE paper’s results relied on trillions of small optimization problems and billions of regressions; the sheer scale of the problem justifiably raises reliability questions. DSICE uses value function iteration over centuries, necessary because of the non-stationary nature of the problem. Each iteration takes the time-t value function, computes the time t-1 value function at a set of points chosen for efficient approximation, and then applies regression to approximate the time t-1 value function. At each iteration, we check the quality of this approximation by computing the difference between the approximation and the true value at a random set of points in the state space. Our verification tests tell us that we have three- to four-digit accuracy for most of the important functions. This approach to verifying computational results can be applied to any computational work in economics and could help address the replication problems in economics. We are not aware of any other serious work that performs demanding verification tests, but we strongly advocate their adoption by authors, editors, referees, and journals.
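A minimal sketch of this verify-as-you-go workflow, on a toy one-dimensional finite-horizon problem: ordinary polynomial regression stands in for DSICE’s problem-specific basis functions, a crude grid search stands in for real optimizers, and all parameters are illustrative. The essential step is the last one in the loop: the fitted value function is checked against directly computed Bellman values at random states, not just at the fitting nodes.

```python
import numpy as np

beta, T, deg = 0.95, 10, 6              # illustrative horizon and basis degree
u = np.log
grid = np.linspace(0.1, 10.0, 40)       # approximation nodes
rng = np.random.default_rng(2)

def bellman(x, coef_next):
    # One small optimization: max over consumption c of u(c) + beta*V(x - c),
    # with next period's value given by a fitted polynomial. A coarse grid
    # search over c keeps the sketch short.
    c = np.linspace(1e-3, x - 1e-3, 200)
    return np.max(u(c) + beta * np.polyval(coef_next, x - c))

coef = np.polyfit(grid, u(grid), deg)   # terminal value V_T(x) = u(x)
for t in range(T - 1, 0, -1):
    vals = np.array([bellman(x, coef) for x in grid])
    new_coef = np.polyfit(grid, vals, deg)     # regression step
    # Verification: compare the fitted value function against directly
    # computed Bellman values at RANDOM points in the state space.
    xs = rng.uniform(0.5, 9.5, 50)
    err = max(abs(np.polyval(new_coef, x) - bellman(x, coef)) for x in xs)
    coef = new_coef
print(f"verification error at last backward step: {err:.1e}")
```

The reported error measures how well the regression approximates the true backward-induction values away from the fitting nodes; in DSICE the analogous checks are what support the three- to four-digit accuracy claim.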

Many journals encourage authors to share their code, but our code contains copyrighted material and cannot be publicly posted. Readers still need to be assured of the accuracy of the computations behind a paper’s results. To that end, we created Mathematica notebooks for a few examples that contain the solutions and the code allowing the reader to check first-order conditions.

The kind of modeling in DSICE has often been called, and is still being called, impossible. Many authors indicate that they would like to examine more general models (which would still be far simpler than DSICE) but assert that doing so is intractable. We were able to solve the models in the JPE paper for several reasons: we used high-quality numerical methods for integration and optimization, we developed basis functions suited to the problem, and we were able to use massive parallelization. The key novelty in this paper was using supercomputing at a scale far beyond other economics work (as far as we know).

Massive parallelism is natural for solving dynamic programming problems. We got our first experience by working with Miron Livny (developer of HTCondor), Stephen Wright, and Greg Thain, all at the University of Wisconsin. Livny gave us access to a UW cluster — “unlimited access with zero priority” — which worked fine for Yongyang’s 2008 PhD thesis. When we began developing DSICE in 2010, we had to move to supercomputers. In November 2012, we applied to the Blue Waters supercomputer supported by the NSF, and we had access to it nearly continuously until NSF ended the project last year. Access to Blue Waters made it possible to solve the most complex example in our JPE paper, using about 80,000 cores for over four hours. We owe a lot to the University of Wisconsin people who got us started and to the Blue Waters project for giving us what we needed for the applications of DSICE used in our JPE paper.
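The reason dynamic programming maps so naturally onto thousands of cores is that, within one backward step, the small optimization at each state-space node is independent of every other node. A minimal sketch of that farm-out pattern, with a toy node solver and a thread pool standing in for the MPI-style distribution a real supercomputer code would use:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

beta = 0.95

def solve_node(x):
    # The small, independent optimization done at one state-space node:
    # here, a toy one-period Bellman maximization over consumption c.
    c = np.linspace(1e-3, x - 1e-3, 200)
    return np.max(np.log(c) + beta * np.log(x - c))

# State-space nodes for one backward step; each can go to a different worker.
grid = np.linspace(0.1, 10.0, 1_000)
with ThreadPoolExecutor() as pool:      # stand-in for ~80,000 cores via MPI
    values = list(pool.map(solve_node, grid, chunksize=50))
```

Because the nodes share no state within a step, the only synchronization point is the regression fit at the end of each backward step, which is why the method scales to the core counts reported above.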

This paper proves that regular economists at any university can get access to substantial supercomputing time. Many of you will be skeptical about this claim because I am a Stanford employee in the heart of Silicon Valley and Yongyang was supported by an NSF grant administered by Argonne National Laboratory and the University of Chicago. Stanford may have access to high-powered computers, but those resources are controlled by individual university units, and the Hoover Institution does not (nor should it) have a supercomputer for research. At one time, Yongyang and I thought we would have access to computers at Argonne and/or Chicago, but the co-PIs of that grant at Argonne and UC denied our request for access to any UC or Argonne computer. Instead, at the suggestion of Bob Rosner, we applied for allocations on Blue Waters. Yongyang and I wrote the proposals, wrote the end-of-year reports, and made the required appearances at the annual Blue Waters Symposium. Yongyang did the coding without major support from Blue Waters staff; in fact, he found a bug in their compiler. From 2013 to 2019, we received over one million node hours, where each node had between 16 and 32 cores, adding up to over 25 million core hours.

I emphasize these facts for two reasons: to show that social scientists do not need major institutional support to get supercomputer time, and to recognize Yongyang for his hard work and determination in getting the job done with little help.

I have long advocated the use of modern computing hardware and algorithms in economics. Our JPE paper is just one demonstration of what economists can do on their own. This blog will discuss other such demonstrations. I also hope that the computational science community will see that economists can be worthy research partners.