Operating WRF on the CSF - Version 3.4.1, May 2013
WPS and WRF (version 3.4.1) were built with the PGI compiler (v12.10) and the InfiniBand-enabled OpenMPI implementation of MPI for the AMD "Bulldozer" nodes. They will only run on AMD nodes and will not run on nodes with the standard (non-InfiniBand) interconnect.
WRF was built against HDF5 (v1.8.11) and NetCDF (v4.3).
Load the relevant modules, e.g.
module load compilers/PGI/12.10 mpi/pgi-12.10/openmpi/1.6-ib-amd-bd libs/pgi-12.10/hdf/5/1.8.11-ib-amd-bd libs/pgi-12.10/netcdf/4.3-ib-amd-bd
- change directory to where your namelist file exists
- note that all output files will be overwritten
Submit a job to the batch scheduler requesting the -l bulldozer resource, e.g.
qsub -l bulldozer -l short -V -cwd -b y wrf.exe
Use qstat to monitor the job via the batch scheduler; view the rsl output files written by wrf.exe (NB these are buffered) during the run to get a feel for progress.
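For example, assuming WRF's usual rsl.out.* / rsl.error.* log naming, you could check progress from the job directory with:
qstat                                    # check the job state
tail rsl.out.0000                        # most recent output from MPI rank 0
grep -c "Timing for main" rsl.out.0000   # number of model timesteps completed so far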
Operating WRF on the CSF - Version 3.6, June 2014
This section describes the procedure for running WRF 3.6 from the module built on the CSF. It works through an example test case (similar to the ManUniCast setup): a 54-hour simulation over two nested grids for a wind event in February 2014. This setup information should provide everything necessary for a successful WRF simulation.
Before You Start
1. WRF is available on the CSF by loading this module:
module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
This will load all of the specific modules that WRF requires (PGI, MPI, zlib, HDF5, NetCDF) as well as set several environment variables:
- $WRF_DIR - the main directory of the WRF model files
- $WPS_DIR - the main directory of the WPS pre-processor
- $WPS_GEOG - the main directory of the WPS geography files
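To confirm the module has loaded correctly, you can echo these variables, e.g.:
echo $WRF_DIR $WPS_DIR $WPS_GEOG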
2. After loading the module, create a WPS run directory (hereafter referred to as $WPS_RUN_DIR) and a WRF run directory (hereafter $WRF_RUN_DIR). These hold the WPS and WRF output files, separately from the centrally installed software, in a directory to which you have read/write access. For performance and capacity reasons, both directories should be in your scratch area. For example:
cd ~/scratch
mkdir wrf_test
cd wrf_test
mkdir my_wps_run my_wrf_run
#
# choose more meaningful names to reflect your simulation
export WPS_RUN_DIR=/scratch/$USER/wrf_test/my_wps_run
export WRF_RUN_DIR=/scratch/$USER/wrf_test/my_wrf_run
# We'll also need this directory later on - create it now!!
mkdir $WPS_RUN_DIR/gribfiles
Step 1: Boundary Conditions
Several sources of boundary conditions can be used to drive WRF. This example uses GFS analysis files (Global Forecast System, http://www.emc.ncep.noaa.gov/index.php?branch=GFS).
1. For this example on the CSF we have downloaded all of the required files. You can copy them to your own scratch directory using the command:
cp $WRF_BASE/example/gribfiles/* $WPS_RUN_DIR/gribfiles/
Once you've finished this tutorial and are running your own simulations, you should download the boundary conditions needed for your simulation - you will need files covering the period from the start time to the end time of the model run. Download them from the NOAA web servers. You can do this directly on the CSF by first setting your HTTP_PROXY variable with the following commands:
HTTP_PROXY="vm-webproxy1.its.manchester.ac.uk:80"
export HTTP_PROXY
In practice, it takes less time to download the files on your own local machine and then upload them to the CSF.
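For example, from your local machine (the hostname and paths below are placeholders - use your own CSF username and the run directory you created above):
scp gfsanl_4_*.grb2 <username>@<csf-login-node>:/scratch/<username>/wrf_test/my_wps_run/gribfiles/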
GFS analysis files are available from http://nomads.ncdc.noaa.gov/data/gfsanl/.
For example, to download a specific file:
filename='gfsanl_4_20140223_0000_000.grb2'
address=http://nomads.ncdc.noaa.gov/data/gfsanl/201402/20140223/$filename
# Download the file
wget -nd -N $address
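If you need every analysis covering the simulation period, a simple loop can fetch them. This is a sketch that assumes the same gfsanl_4 filename pattern as above, the standard 00/06/12/18 UTC analysis times, and that it is run from inside $WPS_RUN_DIR/gribfiles (with HTTP_PROXY set as above if downloading on the CSF):
base=http://nomads.ncdc.noaa.gov/data/gfsanl
for day in 20140223 20140224 20140225 20140226; do
    for cycle in 0000 0600 1200 1800; do
        # ${day:0:6} gives the YYYYMM part of the date used in the directory path
        wget -nd -N $base/${day:0:6}/$day/gfsanl_4_${day}_${cycle}_000.grb2
    done
done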
Step 2 : WPS Pre-Processing
1. Go into your $WPS_RUN_DIR:
cd $WPS_RUN_DIR
2. If this is the first time you are running WPS from this directory, set it up with the following steps:
cp $WPS_DIR/link_grib.csh .
ln -s $WPS_DIR/geogrid .
ln -s $WPS_DIR/geogrid.exe .
ln -s $WPS_DIR/ungrib .
ln -s $WPS_DIR/ungrib.exe .
ln -s $WPS_DIR/metgrid .
ln -s $WPS_DIR/metgrid.exe .
ln -s $WPS_DIR/ungrib/Variable_Tables/Vtable.GFS Vtable
This sets up the WPS run directory with the files needed to run WPS locally from the preinstalled copy. Once the directory is set up, the files do not need to be linked again.
3. Create a namelist.wps file; an example follows below:
&share
 wrf_core = 'ARW',
 max_dom = 2,
 start_date = '2014-02-23_18:00:00', '2014-02-23_18:00:00',
 end_date   = '2014-02-26_00:00:00', '2014-02-26_00:00:00',
 interval_seconds = 10800,
 io_form_geogrid = 2,
 debug_level = 0,
/
&geogrid
 parent_id = 1,1,
 parent_grid_ratio = 1,5,
 i_parent_start = 1,112,
 j_parent_start = 1,75,
 e_we = 261,241,
 e_sn = 221,301,
 geog_data_res = '2m','30s',
 dx = 20000,
 dy = 20000,
 map_proj = 'lambert',
 ref_lat = 55.00,
 ref_lon = -6.0,
 truelat1 = 30.0,
 truelat2 = 60.0,
 stand_lon = 0.0,
 geog_data_path = '$WPS_GEOG',
/
&ungrib
 out_format = 'WPS',
 prefix = 'WUK',
/
&metgrid
 fg_name = 'WUK',
 io_form_metgrid = 2,
/
4. IMPORTANT: You must expand $WPS_GEOG in the above file to the actual path rather than leaving it as the literal string $WPS_GEOG. If you've copied the above text literally into a file, you can expand the variable using the command:
sed -i "s@\$WPS_GEOG@$WPS_GEOG@g" namelist.wps
or simply run the following command to find out what the value is:
echo $WPS_GEOG
and re-edit your file to insert the actual path.
5. Link the GRIB files into the filenames that WPS expects using the link_grib.csh script, as shown below.
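A typical invocation, assuming the GRIB files were copied into $WPS_RUN_DIR/gribfiles in Step 1, is:
./link_grib.csh gribfiles/gfsanl_4_*
# This creates the GRIBFILE.AAA, GRIBFILE.AAB, ... links that ungrib.exe reads.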
6. Create a qsub script (e.g. qsub_wps.sh, example follows) to perform the rest of the pre-processing. NB: Do not indent lines in the jobscript.
#!/bin/bash
#$ -S bash
#$ -cwd                # Job will run from the current directory
#$ -V                  # Job will inherit current environment settings
#$ -N WPS_Running
##### Single-node MPI or OpenMP #####
#$ -pe smp-64bd.pe 4   # 64 cores or fewer (a low number is OK for WPS)
# Optional if you prefer not to load the wrf modulefile on the login node first
# source /etc/profile.d/modules.sh
# module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
./ungrib.exe
mpirun -n $NSLOTS ./geogrid.exe
mpirun -n $NSLOTS ./metgrid.exe
#
# $NSLOTS is automatically set to the number of cores (see -pe above)
7. Submit the jobscript using qsub qsub_wps.sh (assuming you named your file qsub_wps.sh). This will create the met_em files that WRF needs.
8. When the job has finished (use qstat to check) you can check for successful execution using:
grep Success WPS_Running.oNNNNNN
#
# Replace NNNNNN with the unique job number returned by qsub
# This should display the following:
! Successful completion of ungrib. !
! Successful completion of geogrid. !
! Successful completion of metgrid. !
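You should also now see the met_em files in $WPS_RUN_DIR; with io_form_metgrid = 2 these are netCDF files, named along the lines of met_em.d01.2014-02-23_18:00:00.nc:
ls $WPS_RUN_DIR/met_em.d0*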
Step 3: Running WRF
If you're coming back to this tutorial after completing the WPS section (above) but have logged out and back in to the CSF, ensure you do the following on the CSF login node before proceeding:
module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
export WPS_RUN_DIR=/scratch/$USER/wrf_test/my_wps_run
export WRF_RUN_DIR=/scratch/$USER/wrf_test/my_wrf_run
#
# Use the directory names you created when you first started the tutorial
1. Go into your $WRF_RUN_DIR:
cd $WRF_RUN_DIR
2. Copy all of your met_em files to your $WRF_RUN_DIR:
cp $WPS_RUN_DIR/met_em.d0* $WRF_RUN_DIR
3. WRF requires several data files to be present in the directory in which it runs, for quick file I/O. These are look-up tables and various flat binary files without which WRF will not operate, and they are located in $WRF_DIR/run (the run directory of the installation). To make them available in your run directory, carry out the following steps, which only need to be done once for a new $WRF_RUN_DIR:
cp $WRF_DIR/run/* $WRF_RUN_DIR
# Replace copied exe's with symlinks in case the central install is re-compiled.
rm -f real.exe
ln -s $WRF_DIR/run/real.exe real.exe
rm -f wrf.exe
ln -s $WRF_DIR/run/wrf.exe wrf.exe
rm -f ndown.exe
ln -s $WRF_DIR/run/ndown.exe ndown.exe
rm -f nup.exe
ln -s $WRF_DIR/run/nup.exe nup.exe
rm -f tc.exe
ln -s $WRF_DIR/run/tc.exe tc.exe
Because the executables are symlinked after copying over the I/O data, any re-compilation of the central WRF installation will be picked up automatically.
4. Remove any existing namelist.input file using:
rm -f namelist.input
then create a new namelist.input file as follows:
&time_control
 run_days = 2,
 run_hours = 6,
 run_minutes = 0,
 run_seconds = 0,
 start_year = 2014, 2014,
 start_month = 02, 02,
 start_day = 23, 23,
 start_hour = 18, 18,
 start_minute = 00, 00,
 start_second = 00, 00,
 end_year = 2014, 2014,
 end_month = 02, 02,
 end_day = 26, 26,
 end_hour = 00, 00,
 end_minute = 00, 00,
 end_second = 00, 00,
 interval_seconds = 10800
 input_from_file = .true.,.true.,
 history_interval = 60, 60,
 frames_per_outfile = 1, 1,
 restart = .false.,
 restart_interval = 5000,
 io_form_history = 2
 io_form_restart = 2
 io_form_input = 2
 io_form_auxinput2 = 2
 io_form_boundary = 2
 debug_level = 0
/
&domains
 time_step = 120,
 time_step_fract_num = 0,
 time_step_fract_den = 1,
 max_dom = 2,
 e_we = 261, 241,
 e_sn = 221, 301,
 e_vert = 45, 45,
 p_top_requested = 5000,
 num_metgrid_levels = 27,
 num_metgrid_soil_levels = 4,
 dx = 20000, 4000,
 dy = 20000, 4000,
 grid_id = 1, 2,
 parent_id = 0, 1,
 i_parent_start = 1, 112,
 j_parent_start = 1, 75,
 parent_grid_ratio = 1, 5,
 parent_time_step_ratio = 1, 5,
 feedback = 0,
 smooth_option = 0
/
&physics
 mp_physics = 8, 8,
 ra_lw_physics = 1, 1,
 ra_sw_physics = 1, 1,
 radt = 20, 20,
 sf_sfclay_physics = 1, 1,
 sf_surface_physics = 2, 2,
 bl_pbl_physics = 1, 1,
 bldt = 0, 0,
 cu_physics = 1, 0,
 cudt = 5, 5,
 isfflx = 1,
 ifsnow = 1,
 icloud = 1,
 surface_input_source = 1,
 num_soil_layers = 4,
 sf_urban_physics = 0, 0,
 do_radar_ref = 1
/
&fdda
/
&dynamics
 w_damping = 0,
 diff_opt = 1,
 km_opt = 4,
 diff_6th_opt = 0, 0,
 diff_6th_factor = 0.12, 0.12,
 base_temp = 290.
 damp_opt = 0,
 zdamp = 5000., 5000.,
 dampcoef = 0.2, 0.2,
 khdif = 0, 0,
 kvdif = 0, 0,
 non_hydrostatic = .true., .true.,
 moist_adv_opt = 1, 1,
 scalar_adv_opt = 1, 1,
/
&bdy_control
 spec_bdy_width = 5,
 spec_zone = 1,
 relax_zone = 4,
 specified = .true., .false.,
 nested = .false., .true.,
/
&grib2
/
&namelist_quilt
 nio_tasks_per_group = 0,
 nio_groups = 1,
/
5. Create a qsub script (e.g. qsub_real_wrf.sh) which runs real.exe and wrf.exe in parallel. NB: Do not indent lines in the jobscript.
#!/bin/bash
#$ -S bash
#$ -cwd                      # Job will run from the current directory
#$ -V                        # Job will inherit current environment settings
#$ -N WRF_Running
##### Multi-node MPI #####
#$ -pe orte-64bd-ib.pe 128   # Use a high core-count for WRF
# Optional if you prefer not to load the wrf modulefile on the login node first
# source /etc/profile.d/modules.sh
# module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
mpirun -n $NSLOTS ./real.exe
mpirun -n $NSLOTS ./wrf.exe
#
# $NSLOTS is automatically set to the number of cores (see -pe above)
6. Submit the qsub script using qsub qsub_real_wrf.sh
These steps should produce a WRF simulation of a 54-hour period. Using 256 AMD Bulldozer cores on the CSF, it takes approximately 1 hour to complete.
When the simulation has finished you should have wrfout_* files in your $WRF_RUN_DIR (110 files in this example):
wrfout_d01_2014-02-23_18:00:00 wrfout_d01_2014-02-23_19:00:00 ... wrfout_d02_2014-02-26_00:00:00
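A quick sanity check is to count the output files; with hourly output (history_interval = 60), one frame per file and two domains over 54 hours, you would expect 110:
ls $WRF_RUN_DIR/wrfout_d0* | wc -l   # should report 110 for this example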
Step 4 : NCL Post-Processing
We now check the results of the simulation by plotting some of the time steps to a PDF file.
1. On the CSF the NCL utilities are available via a modulefile:
module load apps/binapps/ncl/6.2.0
2. Copy our example NCL script that will read the precipitation variables from the third time step:
cp $WRF_BASE/example/nclfiles/wrf_Precip_multi_files.ncl $WRF_RUN_DIR
3. Run ncl to generate a PDF file. We do this as a batch job on the CSF because ncl can consume a lot of memory, and running it directly on the login node could cause problems for other users (you should not run applications on the login node). A quick one-liner to submit the batch job is as follows:
qsub -b y -V -cwd ncl wrf_Precip_multi_files.ncl
#
# The job writes a file named plt_Precip_multi_files.pdf in your $WRF_RUN_DIR directory.
4. Finally, if you logged in to the CSF with remote X11 enabled, view the .pdf file with a PDF viewer of your choice (e.g. evince plt_Precip_multi_files.pdf).
Note that the example .ncl file can be edited to process all of the output files (read the comments near the top of the file). It also requires that the $WRF_RUN_DIR environment variable is set; you can remove this requirement from the .ncl file if using it for other simulations.