
Running WRF-Chem

Traceability

Doug Lowe (8/6/2011): We have to come up with a method of ensuring that we know (or can relatively quickly determine) what settings and model version were used to generate each model output file that we store. So I think we need a checklist of the pieces of information that have to be stored with each output file.

The information that we need to store is:

- the namelist options used for the run
- the version of the model code (and the architecture it was built on)
- the input data files used

Storing the namelist options should be straightforward: we simply copy the namelist.input file used for the run to a safe location.
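
A minimal sketch of this (the archive path and the $RUNID run identifier are illustrative, not an agreed convention):

    # copy the namelist used for this run to the archive,
    # tagged with a run identifier so it can be matched to the output files
    cp namelist.input /archive/wrfchem_runs/namelist.input.${RUNID}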

To record information on the model code I think we can use the svn server. As long as we only use code that has been stored on the svn server, we can create an info file containing the svn revision number and some architecture information, and keep that alongside the output.
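
A sketch of generating such an info file (the file name is illustrative):

    # record the code revision and the build/run architecture
    svn info | grep '^Revision:' > code_info.${RUNID}.txt
    uname -a >> code_info.${RUNID}.txt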

The main information I don't know how to store is the input file info. Could someone (Steve?) suggest a manner in which we can record this information with traceability (and without using large amounts of disk space)?
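
One possible approach (a suggestion only, not something agreed on this page) would be to store checksums of the input files rather than the files themselves, since the checksums take almost no disk space (the file list here is illustrative):

    # fingerprint the input files used for this run
    md5sum wrfinput_d01 wrfbdy_d01 wrfchemi_* > input_checksums.${RUNID}.txt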

Requirements

The emission data files contain hourly emissions data, split into two 12-hour files (although these can also be stored as a single data file).
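
If I have understood the standard WRF-Chem convention correctly, the split is controlled by io_style_emissions in the &chem namelist section (treat this as an assumption and check against your WRF-Chem version):

    &chem
     io_style_emissions = 1,  ! 1: two 12-hour files (wrfchemi_00z_d01 and wrfchemi_12z_d01)
                              ! 2: date-stamped emission files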

Running WRF

Single/Initial Runs

When first running WRF-Chem you must:

Using Restart Files

To use a restart file you must:

- set restart = .true. in the &time_control section of namelist.input
- set the model start time (start_year, start_month, etc.) to the valid time of the restart file
- make sure the matching wrfrst_d<domain> files are present in the run directory

You can change the values of history_interval and restart_interval when restarting; they do not have to match the values used in the original run.
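
A minimal &time_control sketch for a restarted run (the interval values are illustrative):

    &time_control
     restart          = .true.,
     restart_interval = 1440,   ! write a restart file every 24 hours (value in minutes)
     history_interval = 60,     ! hourly history output (value in minutes)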

Issues with restart files:

Solution (DL 18/7/2011):

Reinitialising Meteorology with Previous Chemistry

When running large domains, it is recommended to reinitialise the meteorology roughly every 3-7 days (depending on the size of the domain and the meteorological conditions), as the meteorology within the domain will diverge from the operational/reanalysis data over time. There are options in WRF-Chem to reinitialise the meteorology while keeping the chemistry data from the previous day's WRF-Chem run:
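
To the best of my knowledge the key setting is chem_in_opt in the &chem section, which tells real.exe to take its chemistry initial conditions from a previous simulation rather than from idealised profiles; exactly how the previous run's output is supplied to real.exe varies between WRF-Chem versions, so check the users' guide for yours. A sketch:

    &chem
     chem_in_opt = 1,  ! 1: initialise chemistry from a previous simulation
                       ! 0: initialise chemistry from idealised profiles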

Once you have run real.exe with these settings, the wrfinput_d01 file created by this process will contain the chemistry information from your previous model run. Make sure that you take a copy of this file before running the mozbc script, as mozbc will overwrite the chemistry data in that file with data from MOZART (or MACC).
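
A sketch of that precaution (the backup name and the mozbc input file name are illustrative; adjust to however you invoke mozbc):

    # keep the chemistry carried over from the previous run before mozbc touches the file
    cp wrfinput_d01 wrfinput_d01.chem_from_previous_run
    ./mozbc < mozbc.inp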

Nesting using ndown

The process of nesting using ndown is covered in the WRF users' guide, so the instructions here are supplemental to that documentation.

For running off-line nesting using ndown it's best to use separate folders for each domain. Within each of these folders you should link to the met_em and wrfchemi input files for the relevant domain as if they are domain 1 (e.g. ln -s /src_fldr/met_em.d02.2010-07-10_00:00:00.nc met_em.d01.2010-07-10_00:00:00.nc).
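
A sketch of relinking a whole set of domain-2 met_em files under domain-1 names (the source path is illustrative):

    # in the domain-2 run folder: expose the d02 inputs under d01 names
    for f in /src_fldr/met_em.d02.*; do
        ln -s "$f" "met_em.d01.${f##*met_em.d02.}"
    done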

The procedural summary is:

1. run real.exe for the full (multi-domain) setup, to generate the wrfinput files for both domains
2. run wrf.exe for the coarse domain only, writing history output frequently enough to drive the nest boundaries
3. rename the fine-domain wrfinput file to wrfndi_d02 and run ndown.exe to create the fine-domain wrfinput and wrfbdy files
4. rename the resulting d02 files to d01 and run wrf.exe for the fine domain in its own folder

Key namelist.input requirements are sketched below (to the best of my knowledge; check the users' guide for your WRF version):
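
A sketch of the settings I believe matter for the ndown step (values illustrative):

    &time_control
     interval_seconds  = 3600,  ! must equal the history interval of the coarse-domain wrfout files
     io_form_auxinput2 = 2,     ! required so ndown can read the coarse-domain wrfout (netCDF) files

    &domains
     max_dom = 2,               ! ndown itself is run with both domains defined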

Computational Costs

WRF writes the time taken for each model timestep to the rsl.error.0000 file. These timings can provide guidance on the speed-up of the model on different architectures, and with different numbers of nodes and cores.
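
A sketch of pulling the average step time out of that file (this assumes the standard "Timing for main" line format that WRF prints):

    # average the elapsed seconds over all reported timesteps
    grep 'Timing for main' rsl.error.0000 | \
        awk '{ sum += $(NF-2); n++ } END { print sum/n, "s per step over", n, "steps" }'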

My educated guess (DL) is that:

From the timings for the specific model run given below we see that:

HECToR phase2b, 64 nodes (24 cores per node) - organic aerosol model, 400x380x27 UK domain (9/6/2011)

HECToR phase2b, 32 nodes (24 cores per node) - organic aerosol model, 400x380x27 UK domain (9/6/2011)

HECToR phase2b, 16 nodes (24 cores per node) - organic aerosol model, 400x380x27 UK domain (9/6/2011)