We are pleased to announce the release of ILAMB v2.7. In addition to many bug fixes and enhancements, this version contains the following content additions:
The code, datasets, and auxiliary files needed to reproduce the analysis described in Fu2022, which generated Figure 5.22 of the AR6 WG1 report (shown below), are now a supported configuration in the ILAMB codebase. You will need to update to at least v2.7 and then use the following resources.
The Environmental System Science (ESS) Cyberinfrastructure Model-Data Integration Working Group organized a two-part virtual hackathon that featured ILAMB in the second session. These sessions were organized by Xingyuan Chen (PNNL) and Forrest Hoffman (ORNL) and supported by the RUBISCO SFA, InteRFACE SFA, COMPASS-GLM SFA, PNNL Watershed SFA, NGEE Arctic SFA, ORNL TES SFA, and E3SM SFA. Our portion of the virtual hackathon focused on adapting ILAMB for watershed analysis.
We are pleased to announce that the reference datasets we have reprocessed, which can be mass downloaded via ilamb-fetch, are now also available as an intake catalog. Intake is a lightweight set of Python tools for loading and sharing data in data science projects. It allows you to write Python code that references the ILAMB datasets by name, and Intake manages the download, using cached versions when they are available on your system.
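As a rough sketch of the workflow, the snippet below opens the catalog and loads one dataset by name with Intake. The catalog URL and the entry name used here are placeholders, not the actual catalog location or dataset identifiers; substitute the values listed in the ILAMB documentation and in the catalog itself.

```python
import intake

# Open the ILAMB intake catalog. The URL below is a placeholder;
# use the catalog location given in the ILAMB documentation.
cat = intake.open_catalog("https://example.org/ilamb/ilamb.yaml")

# List the dataset entries available by name.
print(list(cat))

# Load a single entry. Intake downloads the file on first use and
# reuses the cached copy on subsequent calls. The entry name below
# is illustrative only.
ds = cat["gpp | FLUXCOM"].read()
print(ds)
```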
We have added 5 new datasets to the ILAMB collection. Please run ilamb-fetch to update your local collection, and check ilamb.cfg for details on how to include them in your local runs. Alternatively, you may browse some results against a subset of CMIP6 models. The new additions include:
Help us learn what scientists think about model performance by evaluating pairs of model biases on this feedback form. Simply click on whichever bias plot you consider to be ‘better’ in each of the 20 randomly selected pairs. While our intention is for you to select the model with the lower error relative to observations, please use whatever definition of ‘better’ makes sense to you as you examine the differences between the plots. We will use your collective responses to evaluate how well our methodology captures community opinion. For context, each plot shows either the Gross Primary Productivity (gpp), Sensible Heat (hfss), or Surface Air Temperature (tas) bias of a CMIP5- or CMIP6-era model relative to a reference data product. When you are finished, click the ‘complete’ button to see how well your choices align with our current methodologies.
It has been a while since our last release, but ILAMB continues to evolve. Many of the changes are ‘under the hood’ or bug fixes that are not readily seen. In the following, we present a few key changes and draw particular attention to those that will change scores. We have also worked to make ILAMB ready to integrate with tools being developed as part of the Coordinated Model Evaluation Capabilities (CMEC).
We are pleased to announce a new version of the ILAMB package. Along with this release, the ILAMB repository is now hosted at:
The report from the second ILAMB Workshop in the U.S. has been published. We have generated both a 4-page flyer and the full report.