When (model) worlds collide

Apr 05, 2017

When computational Earth system scientists (Earth system modelers and climate analytics experts) get together, there are bound to be marathon efforts to confront models with new data. That’s what happened in May 2016, when more than 60 of the world’s leading climate and Earth system modelers gathered in Washington, D.C., for the second US ILAMB Workshop. ILAMB, which stands for the International Land Model Benchmarking Project, is an international model–data comparison and integration activity designed to improve the performance of land surface models (LSMs) and inform the design of new experiments to reduce the uncertainties associated with key land surface processes.

Although LSMs have improved greatly over the last 20 years, they still exhibit a large spread in predictions, and their results often differ from observations. This is troubling given the importance of LSMs to Earth system models (ESMs). In fact, says Forrest Hoffman of the ORNL Climate Change Science Institute, this variation in model results was one of the motivations for the workshop: participants rigorously evaluate model results to build confidence in ESMs.

The workshop included leading scientists such as Gab Abramowitz of the University of New South Wales, lead developer of the Protocol for the Analysis of Land Surface Models (PALS), a web application that provides model evaluation tools and observational data sets; Martin Best of the UK Met Office, lead developer of the JULES community LSM and one of the architects of the PALS Land Surface Model Benchmarking Evaluation Project; Hyungjun Kim of the University of Tokyo, leader of various hydrological model comparison activities; and Hoffman, one of the developers of the Biogeochemistry (BGC) Feedbacks Scientific Focus Area (SFA) benchmarking tool (also called ILAMB) and an organizer of the workshop.

In discussing BGC Feedbacks and the ILAMB Project, Hoffman says the project is an international activity involving people who are interested in model analysis, evaluation, and benchmarking and who want to be part of the solution to modeling challenges. “ILAMB serves as a focus for our activities and such workshops are a rallying point every few years.”

The BGC Feedbacks SFA is a DOE-funded project to develop new diagnostic approaches for evaluating ESM representations of biogeochemical processes, and the BGC Feedbacks research is applicable to the ILAMB Project. In fact, one of the goals of BGC Feedbacks is to support the ILAMB Project through the development of an open source benchmarking system that leverages the growing collection of laboratory, field, and remote sensing data.
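To give a rough sense of what such benchmarking involves, the sketch below scores a model field against matching observations using simple metrics such as bias and root-mean-square error. It is purely illustrative and is not the ILAMB implementation; the function and variable names are hypothetical, and a real benchmarking system also handles regridding, masking, time alignment, and unit conversion.

```python
import numpy as np

def benchmark_scores(model, obs):
    """Score a model field against observations at matching points.

    Illustrative only: not the ILAMB scoring algorithm.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)                   # mean model-minus-obs error
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # root-mean-square error
    # Map RMSE onto a 0-1 score, where 1 indicates a perfect match
    score = np.exp(-rmse / np.std(obs))
    return {"bias": bias, "rmse": rmse, "score": score}

# Hypothetical example: annual gross primary productivity at a few sites
obs_gpp = np.array([2.1, 3.4, 1.8, 2.9])     # observed values
model_gpp = np.array([2.4, 3.1, 2.2, 2.5])   # simulated values
print(benchmark_scores(model_gpp, obs_gpp))
```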

The direct focus of BGC Feedbacks is to analyze and evaluate current models so that they can be constrained with greater rigor and their fidelity and performance over the contemporary period improved, resulting in better predictions of future climate. “To me that is one of the most important things you can do to address concerns about uncertainties,” Hoffman says.

So far, BGC Feedbacks has developed two versions of its benchmarking tool, presenting the first at the 2015 American Geophysical Union Fall Meeting and the second at the ILAMB Workshop for evaluation and feedback from the international modeling community.

Why now?

The first US ILAMB Workshop was held in 2011. Since then, CMIP5 (the Coupled Model Intercomparison Project, Phase 5) has taken place, and in its wake various members of the modeling community have developed competing but similar model diagnostics tools. Hoffman says it was time for everyone to come together, share results, and plan for the future. “We had new tools available and, in BGC Feedbacks, an effort specifically geared to contributing to this type of model benchmarking. So, we thought we should host the meeting here in the United States and bring the international modeling community together.”

With this in mind, the workshop opened with sessions in which attendees had opportunities to present their tools and diagnostics, with time for everyone present to evaluate the strengths and weaknesses of the individual approaches. Hoffman says the level of enthusiasm and engagement was gratifying, with participants putting in 12- to 14-hour days and working through meal times and other breaks.

“Of course, there will always be people developing their own tools, and that’s great,” Hoffman says, “but at some point, it may be advantageous to converge on a software architecture for doing metrics, even if we don’t converge on exactly what those metrics are.”
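One way to read Hoffman's point is that a shared software architecture could let groups plug their own metrics into a common framework even while the metrics themselves differ. The sketch below is a hypothetical illustration of such an interface, not a description of ILAMB, PALS, or any existing tool:

```python
from abc import ABC, abstractmethod
import numpy as np

class Metric(ABC):
    """Common interface a benchmarking framework could define;
    each group supplies its own scoring logic behind it."""

    name: str

    @abstractmethod
    def score(self, model: np.ndarray, obs: np.ndarray) -> float:
        ...

class RMSEMetric(Metric):
    name = "rmse"
    def score(self, model, obs):
        return float(np.sqrt(np.mean((model - obs) ** 2)))

class CorrelationMetric(Metric):
    name = "correlation"
    def score(self, model, obs):
        return float(np.corrcoef(model.ravel(), obs.ravel())[0, 1])

def run_benchmark(metrics, model, obs):
    # The framework iterates over registered metrics without caring
    # how each one is implemented.
    return {m.name: m.score(model, obs) for m in metrics}

if __name__ == "__main__":
    model = np.array([2.4, 3.1, 2.2, 2.5])
    obs = np.array([2.1, 3.4, 1.8, 2.9])
    print(run_benchmark([RMSEMetric(), CorrelationMetric()], model, obs))
```

The appeal of such a design is that the framework and the metric definitions can evolve independently, which is precisely the kind of convergence Hoffman describes.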

The ILAMB tool developed as part of BGC Feedbacks got high marks from workshop attendees, and the BGC Feedbacks team received useful feedback on priorities for future research. This, for Hoffman, was one of the most valuable parts of the workshop. Some of the suggestions, he says, were so obvious and so easy to incorporate that they were adopted “on the fly” before the workshop even concluded.

In addition to discussions of the diagnostic tools, time was also allotted for considering various model intercomparison projects, particularly those associated with CMIP6, and ways the community might move forward together.

So what’s next?

The 2016 ILAMB Workshop Report, now available online, provides a synopsis of the current state of the science and highlights challenges and opportunities for benchmarking models, model development, and the field and laboratory measurements needed to improve model projections. The report's appendix contains topical white papers that summarize workshop presentations and breakout meetings. The report offers community-developed recommendations for future ESM research.

The Biogeochemistry Feedbacks (BGC Feedbacks) Scientific Focus Area (SFA) is a multi-laboratory, multi-university project funded by the DOE Office of Science through the Regional and Global Climate Modeling Program. For more information on the project, go to the project website at https://www.bgc-feedbacks.org/. For more information on BGC Feedbacks research or the ILAMB Workshop or Project, contact Forrest Hoffman.


By VJ Ewing, Forrest Hoffman, and John Sanseverino