DSICE: Dynamic Stochastic Integration of Climate and Economy

I started working on climate change policy modeling in 2008, and it has been a major focus of my efforts since then. In 2010, Yongyang Cai, Thomas Lontzek and I created the DSICE model, extending Nordhaus’ DICE to include productivity shocks as well as stochastic elements of the climate system. While we had earlier published some applications of DSICE, the most complete exposition and application appeared in the Journal of Political Economy in December 2019. The JPE version is also by far the most computationally intensive paper ever written in the Integrated Assessment Modeling literature, combining the climate system with modern ideas in dynamic stochastic economic modeling.

Authorship

I must first clarify a detail. As the paper says, I was a coauthor in all substantive respects. JPE made it clear that the presence of my name as an author reduced the chances of the paper being accepted. I was proud of what Yongyang, Thomas and I had accomplished. I wanted the paper to appear in JPE and did not want my name hurting my coauthors’ career progress. Therefore, I removed my name as an official author but continued to work on the paper.

The economic questions

The economic question explored was “What is the social cost of carbon, and how does it depend on parameter assumptions?” Even though we examined a wide range of parameter specifications for Epstein-Zin preferences and the stochastic productivity process advocated by the macroeconomics literature, the range for the current social cost of carbon (also, the optimal carbon tax from a world policy perspective) was $40-$100 per ton of carbon. This range includes the results of other models but extends higher because we include economic uncertainty. The key intuition is that the loss function is convex, so increasing the variance of future temperatures raises expected losses and therefore the social cost of carbon.
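To see the convexity point in one line (an illustration of the logic, not an equation taken from the paper), suppose damages are quadratic in temperature, D(T) = aT^2 with a > 0. A mean-preserving spread in future temperature T then raises expected damages:

    \[
      \mathbb{E}[D(T)] = a\,\mathbb{E}[T^2] = a\left(\bar{T}^2 + \operatorname{Var}(T)\right) > a\,\bar{T}^2 = D(\bar{T}).
    \]

By Jensen's inequality the same holds for any convex loss function, so uncertainty that widens the distribution of future temperatures pushes up expected losses and, with them, the social cost of carbon.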

We also analyzed the impact of a stochastic tipping process, such as glacier melting leading to rising sea levels. Damages from tipping processes are different from damages related to business cycle fluctuations because, for example, the melting of glaciers is irreversible from the perspective of economic planning. Those damages are only moderately correlated with consumption. Therefore, the stochastic asset pricing kernel that DSICE implicitly computes will discount tipping point damages at a lower rate, magnifying their contribution to the SCC. More generally, we show that there is no one discount rate for climate change damages and that consumption CAPM considerations will affect the SCC.
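The discounting logic can be summarized with textbook consumption-CAPM algebra (again an illustration, not an equation from the paper). With stochastic discount factor m, the present value of an uncertain future damage D is

    \[
      P = \mathbb{E}[mD] = \mathbb{E}[m]\,\mathbb{E}[D] + \operatorname{Cov}(m, D).
    \]

Damages that move with the business cycle are high when consumption is high and marginal utility is low, so Cov(m, D) < 0 and their present value is reduced; equivalently, they are discounted at a higher rate. Tipping point damages that are nearly uncorrelated with consumption have Cov(m, D) close to zero, so they are valued at roughly the risk-free rate, which raises their present value and their contribution to the SCC.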

Our analysis is a major advance in IAM modeling. We used the full five-dimensional climate model developed by Nordhaus, whereas many authors use far simpler climate models. Some assume that CO2 emissions immediately heat the atmosphere, ignoring the gradual heating of the atmosphere and the role of the ocean as a heat sink. Climate scientists can use the simplified approach because they think in terms of millennia. Economists cannot ignore events at annual, or even quarterly, frequencies. We solve the dynamic programming model with one-year time periods and have checked that the results are unchanged by reducing the time period. A few others have added economic risk to their models, but they assume far less variance than standard macroeconomic estimates. Some have included tipping point phenomena in their models, but with less realistic specifications.
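The distinction matters because carbon and heat move through a small linear system rather than translating instantly into warming. The toy sketch below (in Python, with made-up coefficients; it is not the Nordhaus calibration or our code) illustrates why an emissions pulse today raises atmospheric temperature only gradually, with the ocean absorbing part of the carbon and heat:

    import numpy as np

    # Toy illustration (NOT the DICE/DSICE calibration and not our code): carbon
    # mixes between an atmospheric and an ocean reservoir, and atmospheric
    # temperature adjusts only gradually toward the level implied by atmospheric
    # carbon, with the ocean also acting as a heat sink. All numbers are made up.
    phi, psi = 0.02, 0.008   # carbon exchange rates, atmosphere <-> ocean
    lam, kappa = 0.04, 0.01  # temperature adjustment speed and ocean heat uptake

    def step(M_at, M_oc, T_at, T_oc, emissions):
        """One annual step of the toy carbon-temperature system."""
        M_at_next = M_at + emissions - phi * M_at + psi * M_oc
        M_oc_next = M_oc + phi * M_at - psi * M_oc
        T_eq = 3.0 * np.log2(M_at / 600.0)            # toy equilibrium warming
        T_at_next = T_at + lam * (T_eq - T_at) - kappa * (T_at - T_oc)
        T_oc_next = T_oc + kappa * (T_at - T_oc)
        return M_at_next, M_oc_next, T_at_next, T_oc_next

    # A one-time emissions pulse raises temperature only gradually over decades.
    state = (600.0, 1500.0, 0.0, 0.0)
    for year in range(100):
        state = step(*state, emissions=100.0 if year == 0 else 0.0)
        if year in (0, 9, 49, 99):
            print(f"year {year + 1}: atmospheric warming = {state[2]:.3f}")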

Twenty-five years ago, I wrote in my book that if meteorologists used the same approach to research as economists, “they would ignore complex models … and instead study evaporation, or convection, or solar heating, or the effects of the earth’s rotation. Both the weather and the economy are phenomena greater than the sum of their parts, and any analysis that does not recognize that is inviting failure.” Our DSICE analysis shows that we now can solve models with realistic economic shocks, realistic specifications for tipping points, and the full Nordhaus climate model. Furthermore, it shows that this kind of multidimensional modeling can be done in many areas of economics.

This paper goes back several years. The code was developed by early 2012, applied to a simpler specification and deployed on a small supercomputer. Thomas Lontzek presented the first version at the 2012 Conference on Climate and the Economy organized by the Institute for International Economic Studies. Yongyang Cai presented this paper at the “Developing the Next Generation of Economic Models of Climate Change Conference” at the University of Minnesota in September 2014. Earlier versions include Hoover economics working paper 18113 (2017) (https://www.hoover.org/research/social-cost-carbon-economic-and-climate-risk), arXiv:1504.06909 (2015) (https://arxiv.org/abs/1504.06909), NBER working paper 18704 (“The social cost of stochastic and irreversible climate change”), “DSICE: A dynamic stochastic integrated model of climate and economy” (2012) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1992674), and “Tipping points in a dynamic stochastic IAM” (2012) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1992660).

Verification to demonstrate accuracy of our numerical results

This paper introduces two features that help document the validity of our computational results. As many know, I do not trust anyone’s computational results, even my own. My lectures frequently use the phrase “Trust, but verify,” taken from the Russian Doveryáy, no proveryáy. The JPE paper’s results relied on trillions of small optimization problems and billions of regressions, and the sheer scale of the problem justifiably raises reliability questions. DSICE uses value function iteration over centuries, which is necessary because of the non-stationary nature of the problem. Each iteration takes the time t value function, computes the time t-1 value at a set of points chosen for efficient approximation, and then uses regression to approximate the time t-1 value function. At each iteration, we check the quality of this approximation by computing the difference between the approximation and the true value at a random set of points in the state space. Our verification tests tell us that we have three- to four-digit accuracy for most of the important functions. This approach to verifying computational results can be applied to any computational work in economics and could help address the replication problems in economics. We are not aware of any other serious work that performs such demanding verification tests, but we strongly advocate their adoption by authors, editors, referees, and journals.
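To make the verification pattern concrete, here is a deliberately tiny sketch of the idea (one capital state, log utility, made-up parameters, ordinary polynomial regression; it is not the DSICE code): compute the Bellman update at a set of fitting points, regress to approximate the previous period's value function, and then measure the approximation error at fresh random points that were not used in the fit.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Tiny illustrative backward value-function iteration with regression-based
    # approximation and out-of-sample verification. This is NOT the DSICE model:
    # one capital state, log utility, i.i.d. productivity, made-up parameters.
    beta, alpha = 0.96, 0.36
    shocks = np.array([0.9, 1.0, 1.1])            # productivity levels
    probs = np.array([0.25, 0.5, 0.25])
    k_lo, k_hi, degree = 0.5, 10.0, 6

    def bellman(k, coeffs):
        """Bellman value at state k when next period's value function is the fitted polynomial."""
        def neg_value(c):
            k_next = np.clip(k**alpha - c, k_lo, k_hi)
            continuation = probs @ np.polyval(coeffs, shocks * k_next)
            return -(np.log(c) + beta * continuation)
        res = minimize_scalar(neg_value, bounds=(1e-6, k**alpha - 1e-6), method="bounded")
        return -res.fun

    # Illustrative terminal value function, then iterate backward in time.
    coeffs = np.polyfit([k_lo, k_hi], [np.log(k_lo), np.log(k_hi)], 1)
    k_fit = np.linspace(k_lo, k_hi, 30)           # points used for the regression fit
    rng = np.random.default_rng(0)
    for t in range(10):
        old_coeffs = coeffs
        v_fit = np.array([bellman(k, old_coeffs) for k in k_fit])
        coeffs = np.polyfit(k_fit, v_fit, degree)  # regression approximation of V_{t-1}
        # Verification: compare the fitted approximation against the directly
        # computed Bellman value at random points NOT used in the regression.
        k_test = rng.uniform(k_lo, k_hi, 20)
        err = max(abs(np.polyval(coeffs, k) - bellman(k, old_coeffs)) for k in k_test)
        print(f"backward step {t}: max out-of-sample error = {err:.2e}")

The printed out-of-sample errors play the role of the accuracy checks described above: if the regression approximation is trustworthy, the errors at points never used in the fit stay small.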

Code for reader use

Many journals encourage authors to share their code, but our code contains copyrighted material and cannot be publicly posted. Our second novel feature helps readers check the accuracy of our computations and use the computed value and policy functions for their own simulations. To that end, we created Mathematica notebooks for a few examples that contain the solutions and code allowing the reader to check first-order conditions.
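The released notebooks are in Mathematica, but the flavor of the check is easy to convey (this is a schematic in Python with toy parameters, not the notebooks' content): given a value-function approximation and the policy it implies, a reader can evaluate the first-order-condition residual at sample states and confirm that it is close to zero.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Sketch only (not the released notebooks): given a value-function approximation
    # and the consumption policy it implies, verify the first-order condition
    # u'(c) = beta * E[V'(k') * shock] directly. Toy parameters, log utility.
    beta, alpha = 0.96, 0.36
    shocks, probs = np.array([0.9, 1.0, 1.1]), np.array([0.25, 0.5, 0.25])
    V_coeffs = np.array([-0.05, 1.2, 0.3])     # stand-in quadratic "solved" V(k')

    def policy(k):
        """Consumption maximizing log(c) + beta * E[V(shock * (k**alpha - c))]."""
        obj = lambda c: -(np.log(c) + beta * (probs @ np.polyval(V_coeffs, shocks * (k**alpha - c))))
        return minimize_scalar(obj, bounds=(1e-6, k**alpha - 1e-6), method="bounded").x

    def foc_residual(k):
        """u'(c) - beta * E[V'(k') * shock]; near zero at an interior optimum."""
        c = policy(k)
        dV = np.polyval(np.polyder(V_coeffs), shocks * (k**alpha - c)) * shocks
        return 1.0 / c - beta * (probs @ dV)

    for k in [1.0, 2.0, 4.0, 8.0]:
        print(f"k = {k}: FOC residual = {foc_residual(k):+.1e}")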

Doing the “impossible” despite fierce opposition

The kind of modeling in DSICE has often been called, and is still being called, impossible to do. Many authors indicate that they would like to examine more general models (which would still be far simpler than DSICE) but assert that doing so is intractable. We were able to solve the models in the JPE paper for several reasons: we used high-quality numerical methods for integration and optimization, we developed basis functions suited to the problem, and we were able to use massive parallelization. The key novelty in this paper was using supercomputing at a scale far beyond other economics work (as far as we know).

Massive parallelism is natural for solving dynamic programming problems. We got our first experience by working with Miron Livny (developer of HTCondor), Stephen Wright and Greg Thain, all at the University of Wisconsin. Livny gave us access to a UW cluster (“unlimited access with zero priority”), which worked fine for Yongyang’s 2008 PhD thesis. When, in 2010, we began developing DSICE, we had to move to supercomputers. In November 2012, Bob Rosner, former director of Argonne, suggested we apply for time on the Blue Waters supercomputer supported by the NSF. We had nearly continuous access to Blue Waters from March 2013 until NSF ended the project in 2019. Yongyang and I wrote the five proposals, wrote the end-of-year reports, and made the required appearances at the annual Blue Waters Symposium. Yongyang did the coding without major support from Blue Waters staff; in fact, he found a bug in their compiler. From 2013 to 2019, we received over one million node hours, where each node had between 16 and 32 cores, adding up to over 25 million core hours. Access to Blue Waters made it possible to solve the most complex example in our JPE paper with about 80,000 cores for over four hours. We owe a lot to the University of Wisconsin people who got us started and to the Blue Waters project for giving us what we needed for the applications of DSICE used in our JPE paper.
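Why parallelism fits so well: within one backward-induction step, the optimization at each approximation point is independent of every other point, so the points can be farmed out across cores or nodes and the results gathered for the fit. A minimal single-machine sketch of that idea (standard-library multiprocessing with a toy objective, nothing like the Blue Waters setup):

    from multiprocessing import Pool

    import numpy as np

    # The optimization at each state-space point in a backward-induction step is
    # independent of every other point, so the points can be distributed across
    # workers and the results gathered for the regression fit. Toy objective only.
    def bellman_at_point(k):
        """Stand-in for the expensive per-point optimization (coarse grid search)."""
        c = np.linspace(1e-3, k**0.36 - 1e-3, 200)
        return float(np.max(np.log(c) + 0.96 * np.log(k**0.36 - c)))

    if __name__ == "__main__":
        points = np.linspace(0.5, 10.0, 64)
        with Pool() as pool:                       # one worker per available core
            values = pool.map(bellman_at_point, points)
        coeffs = np.polyfit(points, values, 6)     # fit the new value-function approximation
        print("fitted coefficients:", np.round(coeffs, 4))

On a supercomputer the same structure scales up: each node handles a block of points, and only the fitted coefficients need to be communicated between backward steps.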

This paper proves that regular economists at any university can get access to substantial supercomputing time. Many of you will be skeptical about this claim because I am a Stanford employee in the heart of Silicon Valley. Stanford may have access to high-power computers, but those resources are controlled by individual University units. The Hoover Institution does not (nor should it) have a supercomputer for research.

Yongyang was supported by an NSF grant administered by Argonne National Laboratory and the University of Chicago. At one time, Yongyang and I thought we would have access to computers at Argonne and/or Chicago, but the grant’s co-PIs denied our request for access to any UC or Argonne computer.

I emphasize these facts for two reasons: to show that, beginning in about 2006, we needed neither special privileges nor major institutional support to get access to massively parallel computer systems, and to recognize Yongyang for his hard work and determination in getting the job done with little help.

Lessons for all economists

I have long advocated the use of modern computing hardware and algorithms in economics. Our JPE paper is just one example of what economists can do on their own. This blog will discuss other such demonstrations. I also hope that the computational science community will see that economists can be worthy research partners.
