Computational Science Community Wiki

Operating WRF on the CSF - Version 3.4.1, May 2013

As of May 2014, the following modules are available on the CSF

These WPS and WRF modules (version 3.4.1) were built with the PGI compiler (v12.10) and the InfiniBand-enabled OpenMPI implementation of MPI for the AMD "Bulldozer" nodes. They will not run on nodes that use the standard (non-InfiniBand) interconnect and will only run on AMD nodes.

WRF was built against hdf/5 (v1.8.11) and netcdf (v4.3).

(Full build details)

Running WRF

  1. Load the relevant modules, e.g.:

     module load compilers/PGI/12.10  mpi/pgi-12.10/openmpi/1.6-ib-amd-bd  libs/pgi-12.10/hdf/5/1.8.11-ib-amd-bd  libs/pgi-12.10/netcdf/4.3-ib-amd-bd
  2. Change directory to where your namelist file exists.
  3. Note that all output files will be overwritten.
  4. Submit a job to the batch scheduler, requesting the -l bulldozer resource, e.g.:

     qsub -l bulldozer -l short -V -cwd -b y wrf.exe
  5. Use qstat to monitor the job via the batch scheduler; view the wrf.rsl output files (NB: these are buffered) during the run to get a feel for progress.

Operating WRF on the CSF - Version 3.6, June 2014

This section describes the procedure for running WRF 3.6 from the module built on the CSF. It works through an example test case (similar to the ManUniCast setup: a 54-hour simulation over two nested grids for a wind event in February 2014) and is intended to provide everything necessary for a successful WRF simulation.

Before You Start

1. WRF is available on the CSF by loading this module:

This will load all of the specific modules that WRF requires (PGI, MPI, zlib, HDF5, NetCDF) as well as set several system variables:

2. After loading the module, create a WPS run directory (hereafter referred to as $WPS_RUN_DIR) and a WRF run directory (hereafter $WRF_RUN_DIR). These allow the WPS and WRF output files to be created separately from the installed software, in a directory to which you have read/write access. For performance and capacity reasons, both of these directories should be in your scratch area. For example:
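A minimal sketch (the directory names and the ~/scratch location below are only suggestions; adjust the paths to match your own setup):

    export WPS_RUN_DIR=~/scratch/wps_run     # WPS working directory
    export WRF_RUN_DIR=~/scratch/wrf_run     # WRF working directory
    mkdir -p $WPS_RUN_DIR $WRF_RUN_DIR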

Step 1: Boundary Conditions

There are several sources of boundary conditions that can be used to drive WRF. This example uses GFS analysis files (Global Forecast System, http://www.emc.ncep.noaa.gov/index.php?branch=GFS).

1. For this example, all of the required files have already been downloaded to the CSF. You can copy them to your own scratch directory using the command:
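Something along these lines (the location of the shared example files is not reproduced here; substitute the path given on the CSF, and choose any destination directory you like):

    mkdir -p ~/scratch/gfs_data
    cp <path-to-shared-example-GFS-files>/* ~/scratch/gfs_data/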

Once you've finished this tutorial and are running your own simulations, you should download the boundary-condition files needed for your run - you will need them covering the period from the model start time to the end time. Download them from the NOAA web servers. You can do this directly on the CSF by first setting your HTTP_PROXY variable with this command:
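The general form is below (the proxy address is site-specific and deliberately left as a placeholder; use the current CSF proxy details):

    export HTTP_PROXY=http://<csf-proxy-address>:<port>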

In practice, it takes less time to download the files on your own local machine and then upload them to the CSF.

GFS Analysis files are available from this address:

For example, to download a specific file:
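An illustrative command only (the server path is a placeholder and the file name merely shows a typical gfsanl naming pattern; use the address above and files matching your simulation dates):

    wget http://<gfs-analysis-server>/<path>/gfsanl_4_20140212_0000_000.grb2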

Step 2: WPS Pre-Processing

1. Go into your $WPS_RUN_DIR:
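    cd $WPS_RUN_DIR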

2. If this is the first time you are running WPS on the CSF, you will need to follow these steps:
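A sketch of the kind of linking required (it assumes the modulefile sets $WPS_DIR to the central WPS installation; the variable name and exact file list on the CSF may differ):

    # Link the WPS executables and helper script from the central install
    ln -sf $WPS_DIR/geogrid.exe $WPS_DIR/ungrib.exe $WPS_DIR/metgrid.exe .
    ln -sf $WPS_DIR/link_grib.csh .
    # Link the table files into the sub-directories WPS expects
    mkdir -p geogrid metgrid
    ln -sf $WPS_DIR/geogrid/GEOGRID.TBL geogrid/
    ln -sf $WPS_DIR/metgrid/METGRID.TBL metgrid/
    # Use the GFS variable table for ungrib
    ln -sf $WPS_DIR/ungrib/Variable_Tables/Vtable.GFS Vtable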

This sets up the WPS directory with the files we need for local operation of WPS from the preinstalled directory. Once the directory is set up, the files do not need to be linked again.

3. Create a namelist.wps file; an example follows below:

4. IMPORTANT: You must expand $WPS_GEOG in the above file to its actual path, rather than leaving the literal string $WPS_GEOG. If you've copied the above text literally into a file, you can expand the variable using the command:
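One way of doing this (assuming the modulefile has set $WPS_GEOG in your environment) is with sed:

    # Replace the literal string $WPS_GEOG with the value of the environment variable
    sed -i "s|\$WPS_GEOG|$WPS_GEOG|g" namelist.wps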

5. Link the GRIB files into the file names that WPS expects using this command:
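WPS supplies the link_grib.csh script for this. Assuming the GFS files were copied to ~/scratch/gfs_data (adjust the path to wherever your GRIB files are):

    ./link_grib.csh ~/scratch/gfs_data/*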

6. Create a qsub script (e.g. qsub_wps.sh; an example follows) to perform the rest of the pre-processing. NB: Do not indent lines in the jobscript.
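A minimal sketch of such a jobscript (shown indented here for display only - the lines in the file itself must start in column one; any CSF-specific resource requests are omitted):

    #!/bin/bash
    #$ -cwd
    #$ -V
    ./geogrid.exe
    ./ungrib.exe
    ./metgrid.exe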

7. Submit the jobscript using qsub qsub_wps.sh (assuming you named your file qsub_wps.sh). This will create the met_em files that are needed to run WRF.

8. When the job has finished (use qstat to check) you can check for successful execution using:
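For example (a quick sanity check; log file names may vary slightly depending on how WPS was run):

    ls -l met_em.d0*       # one file per domain per met data interval
    tail metgrid.log*      # should report successful completion of metgrid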

Step 3: Running WRF

If you're coming back to this tutorial after completing the WPS section (above) but have logged out and back in to the CSF, ensure you do the following on the CSF login node before proceeding:
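That is, reload the WRF modulefile from "Before You Start" and re-set the run-directory variables, for example (the module name is not repeated here; use the one loaded earlier, together with your own directory paths):

    module load <the WRF 3.6 modulefile loaded in "Before You Start">
    export WPS_RUN_DIR=~/scratch/wps_run
    export WRF_RUN_DIR=~/scratch/wrf_run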

1. Go into your $WRF_RUN_DIR:
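    cd $WRF_RUN_DIR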

2. Copy all of your met_em files to your $WRF_RUN_DIR:
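For example:

    cp $WPS_RUN_DIR/met_em* $WRF_RUN_DIR/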

3. WRF requires several data files to be present in its working directory for quick file I/O. These are look-up tables and various flat binary files, without which WRF will not run; they are located in $WRF_DIR/run (the run directory of the installation). To make them available in your run directory, run the following steps, which only need to be done once for a new $WRF_RUN_DIR:
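A sketch of those steps (it assumes $WRF_DIR is set by the modulefile as described above and that real.exe and wrf.exe live in $WRF_DIR/run; adjust if the installation is laid out differently):

    # Copy the look-up tables and data files from the central run directory
    cp $WRF_DIR/run/* .
    # Replace the copied executables with links so that rebuilds of the
    # central installation carry through to this run directory
    rm -f real.exe wrf.exe
    ln -sf $WRF_DIR/run/real.exe $WRF_DIR/run/wrf.exe .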

Because the executables are linked (rather than copied) after the I/O data has been copied over, any changes to the WRF installation resulting from recompilation will carry through automatically.

4. Remove any existing namelist.input file using:

    then create a new namelist.input file as follows:

5. Create a qsub script (e.g. qsub_real_wrf.sh) which runs real.exe and then wrf.exe in parallel. NB: Do not indent lines in the jobscript.
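A sketch of such a jobscript (shown indented here for display only; the parallel environment name is a placeholder - request whatever the CSF documentation specifies for 256 Bulldozer cores):

    #!/bin/bash
    #$ -cwd
    #$ -V
    #$ -pe <bulldozer-parallel-environment> 256
    mpirun -n $NSLOTS ./real.exe
    mpirun -n $NSLOTS ./wrf.exe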

6. Submit the qsub script using qsub qsub_real_wrf.sh

These processes should lead to a WRF simulation of a 54-hour period. Using 256 AMD Bulldozer cores on the CSF, it takes approximately 1 hour to complete.

When the simulation has finished you should have wrfout_* files in your $WRF_RUN_DIR (110 files in this example):
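For example:

    ls $WRF_RUN_DIR/wrfout_* | wc -l     # reports 110 for this example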

Step 4: NCL Post-Processing

We now check the results of the simulation by plotting some of the time steps to a PDF file.

1. On the CSF the NCL utilities are available via a modulefile:

2. Copy our example NCL script that will read the precipitation variables from the third time step:

3. Run ncl to generate a PDF file. We do this as a batch job on the CSF because ncl can consume a lot of memory; running it directly on the login node could cause problems for other users (you should not run applications on the login node). A quick one-liner to submit the batch job is as follows:
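Something like the following, based on the qsub pattern used earlier (the script name is a placeholder for whichever .ncl file you copied in step 2):

    qsub -l short -V -cwd -b y ncl <your-script>.ncl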

4. Finally, if you logged in to the CSF with remote X11 enabled, view the .pdf file using:
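For example (the viewer and file name are illustrative - use whichever PDF viewer is available and the name of the file that ncl produced):

    evince <output>.pdf &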

Note that the example .ncl file can be edited to process all of the files (read the comments near the top of the file). It also requires that you have set the $WRF_RUN_DIR environment variable; you can remove this requirement from the .ncl file if using it for other simulations.