Compiling reorder on Mac OS X 10.3

Instructions based on reorder 1.36 from NCAR

Prerequisites

These instructions work with g77, but haven't been tested with the xlf compiler.

g77 and netCDF can be installed via Fink if desired.

Numerical precision issues

Reorder (and possibly the conversion process to UF, if you're doing that) rounds its output values to the nearest 1/64 = 0.015625. This is true for both netCDF and CEDRIC output.

While this is plenty of resolution for fields like reflectivity, some polarimetric data have a much smaller range of values, and resolution in the hundredths place becomes important. This is especially true for the correlation coefficient (Rhv), where important (or at least physically valid) variations of 0.001 appear in the output fields. Similar reasoning may apply to Kdp; check with your local polarimetric expert to be sure you're retaining enough resolution in each field. Since it never hurts to retain as much detail as possible (in the computational-numerics sense), I use 1/10000 for Rhv, and 1/1000 for Zdr and Kdp.
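As a concrete illustration, here is a minimal standalone sketch (not part of reorder) of what the 1/64 quantization does to two physically distinct Rhv values, using the same nint(value*scale) packing that reorder uses. Compiled with g77, it prints identical integers at scale 64 but distinct ones at scale 10000:

      program rhvres
c   Two physically distinct Rhv values, quantized the way reorder
c   quantizes output (nint of value times scale).
      real a, b
      a = 0.995
      b = 0.998
      write(*,*) 'scale 64:    ', nint(a*64.), nint(b*64.)
      write(*,*) 'scale 10000: ', nint(a*10000.), nint(b*10000.)
      end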

Increasing the resolution of Rhv (or any other field) is possible by editing qreo2.F, near line 665. Add a three-line if statement involving RH to the code:

c   and the scale value
            call put16(hed,i16w+4,nint(scale))
            if( mnames(mp:mp+3) .eq. 'TIME' ) then
               call put16( hed, i16w+4, 10 )
            endif
            if( mnames(mp:mp+1) .eq. 'RH' ) then
               call put16( hed, i16w+4, 10000)
            endif
The third argument in the put16 call is the scaling factor; Rhv will have a resolution of 1/10000 in this case. You can specify scaling for other fields by adding another if statement for each field.
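For example, to get the 1/1000 resolution suggested above for Zdr and Kdp, two more blocks of the same shape should work (the 'ZD' and 'KD' prefixes are my assumption; match them to the field names actually present in your data):

            if( mnames(mp:mp+1) .eq. 'ZD' ) then
               call put16( hed, i16w+4, 1000 )
            endif
            if( mnames(mp:mp+1) .eq. 'KD' ) then
               call put16( hed, i16w+4, 1000 )
            endif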

The default scaling is 64; that can be changed on line ~399:

      parameter(scale = 64.0)
If you change the default scaling, make sure that the expected maximum data value (say, ~100 dBZ for reflectivity) times the scaling factor does not exceed 32767 (2^16 / 2 - 1, the largest value a signed 16-bit integer can hold).
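A quick way to sanity-check a candidate scale factor is a throwaway program along these lines (again, not part of reorder):

      program chkscl
c   Check that the largest expected field value survives 16-bit
c   packing at a candidate scale factor.
      real scale, vmax
      scale = 64.0
      vmax  = 100.0
      if( scale*vmax .gt. 32767.0 ) then
         write(*,*) 'overflow: ', scale*vmax, ' exceeds 32767'
      else
         write(*,*) 'ok: ', nint(scale*vmax), ' of 32767'
      endif
      end

At a scale of 1000, for instance, any value above 32.767 overflows (fine for Zdr or Kdp, fatal for reflectivity), which is why the per-field if statements above beat raising the default scale.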

Special OS X instructions

To let autoconf detect that it's on a Mac, add

   elif eval "echo -n \"$ac_uname_stuff\" | grep \"Darwin\" >/dev/null"; then
      ac_os_type="Darwin"
      AC_MSG_RESULT(IT'S $ac_os_type)
to line 35 in configure.ac, just below where it does the same thing for Linux.

Because Mac OS X returns a 64-bit integer for the current file position, some of the routines that read files don't work; they expect a 32-bit integer (presumably because, without the prototypes in unistd.h, the compiler assumes those routines return int and truncates the 64-bit result). To fix this, add the line #include <unistd.h> along with the other includes in fofclibPlus.c, xcedcio.c, and fofclib.c.

In qreo2.F, the minimum grid x, y, z locations are set the first time filter() is called, but they are not saved for subsequent calls of filter(). xmin, ymin, and zmin all need to be added to the save statement after the common blocks at the top of filter() (roughly line 2250 in qreo2.F):

      save ikount, first, tcount
     +, copln, nxval, nyval, nzval, rcpdx, rcpdy, rcpdz
     +, xrad, yrad, zrad, xval, yval, zval, dx, dy, dz
     +, xmin, ymin, zmin
Apparently other compilers allocate local variables statically by default, saving their values between calls and allowing this code to work in the past on other architectures. (g77's -fno-automatic flag produces the same behavior, though adding the variables to the save statement is the portable fix.)

Instructions for Fink users

If netCDF is installed through Fink, you'll need to edit the configure script in two places so it will detect netCDF properly; Fink installs under /sw, so the netCDF header and library search paths must point at /sw/include and /sw/lib.

Before configuring and compiling, you also need to set environment variables as described in Q8.3 in the Fink FAQ.

Building

Then configure: ./configure --prefix=/usr/local --exec_prefix=/usr/local/reorder --bindir=/usr/local/reorder/bin --with-soloii (note the dependency on soloii for dorade reading). In the resulting makefile, three more edits are needed:

Then, make install.

This should build successfully. qreou will read in and process .uf data; qreod had not yet been tested on the Mac as of 2005/03/08.

If any of the manual edits I list above are fixable through other means, let me know and I'll add those changes to this page.

Running a Barnes analysis: choosing filter parameters

Trapp and Doswell (2000, J. Atmos. Ocean. Tech., hereafter TD2000) and Koch et al. (1983, J. Clim. and Appl. Met., hereafter K83) describe the theory behind the following prescription for choosing weights. The reader is advised to consult these papers. The information provided here is by no means complete, and is mainly intended to aid in the subtleties of making reorder work.

Let's say we desire to analyze a storm within a domain whose maximum distance from the radar is 50 km. Assume the radar has elevation and azimuth spacing of 1 degree, with a 1 degree beamwidth (e.g., KOUN), such that the data spacing is at most 1 degree. (The data spacing is finer along the radials.) TD2000 advise choosing a modified weight function such that

k = (k*) (2*DLT)^2
where DLT is the maximum data spacing. In Cartesian space, this is
DLT = (50km) * tan (1 degree)  = 0.875 km
The value of k* controls the filter response function. K83, fig. 1, shows the response for various k*. From a sampling-theory perspective, it is desirable to filter out all wavelengths below the Nyquist, i.e., all wavelengths less than twice the data spacing. (The Nyquist has a nondimensional wavelength of 1 in the language of TD2000 and K83.) For this reason, TD2000 use k* = 0.5. However, this also attenuates signals at four times the data spacing (twice the Nyquist) by about 70%. For our data, this means that a line of idealized reflectivity cores spaced 4*0.875 = 3.5 km apart would be diminished in their maximum intensity by 70%. This is rather undesirable in the case of a multicell complex, for instance, where features of interest may be of that scale. To ameliorate this problem, one may choose a smaller k* at the expense of passing features below the Nyquist. For k* = 0.1, there is about 70% attenuation at the Nyquist and only 20% attenuation at twice the Nyquist. We choose this as an acceptable compromise; experimentation with actual data is necessary to assess the tradeoffs being made. Thus,
k = (0.1) (2*0.875 km)^2 = 0.306 km^2

As if that wasn't enough thought, we now have to make reorder understand what we want it to do. Reorder formulates its exponential weighting mode as

w=exp(a * r^2 / R^2)
while the Barnes is
w=exp(- r^2 / k)
and thus,
a = -R^2 / k
a is the attenuation parameter, and may be specified directly, while R^2 is calculated by reorder from the XRADIUS, YRADIUS, and ZRADIUS parameters in the input command file. Denoting these spacings as DX, DY, DZ,
R^2 = DX^2 + DY^2 + DZ^2
While a true Barnes analysis incorporates all data out to a distance of infinity, it is only practical to include data out to some maximum distance, which is conveniently chosen to be the distance d where the weight w reaches some small value, say w' = 0.01. So,
0.01 = exp( -d^2 / 0.306 )   =>   d = sqrt( 0.306 * ln(100) ) = 1.187 km
Reorder excludes source data from its calculations if the distance from a gridpoint to the source data exceeds DX in the x direction, DY in y, or DZ in z. Thus, for some source data value directly to the east (+x) of the grid point, we want w = 0.01, which requires DY = DZ = 0 and DX = 1.187 km. Since symmetry is desirable in the analysis, we set DX = DY = DZ = 1.187, implying that reorder will calculate
R^2 = 3 * DX^2 = 4.227
which then requires that
a = -4.227 / 0.306 = -13.815
to achieve k*=0.1 for our data spacing of 1 degree at 50 km.
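Since the same chain of arithmetic has to be redone for any other radar or domain, here is a small standalone sketch (not part of reorder) that reproduces it from the raw inputs:

      program barnes
c   Reproduce the Barnes parameter chain above: 1 degree data
c   spacing at 50 km range, kstar = 0.1, cutoff weight wp = 0.01.
      real dlt, rk, wp, d, rsq, a
      dlt = 50.0 * tan( 1.0 * 3.14159265 / 180.0 )
      rk  = 0.1 * ( 2.0 * dlt )**2
      wp  = 0.01
      d   = sqrt( -rk * log( wp ) )
      rsq = 3.0 * d**2
      a   = -rsq / rk
      write(*,*) 'DLT =', dlt, '  k =', rk
      write(*,*) 'd =', d, '  (use for XRADIUS, YRADIUS, ZRADIUS)'
      write(*,*) 'atten =', a
      end

Note that once DX = DY = DZ = d is chosen so that w(d) = w' = 0.01, the algebra collapses to a = -3 d^2 / k = 3 ln(0.01) = -13.8155 no matter what the data spacing is; only DX, DY, and DZ change from case to case. (The sketch prints DLT = 0.873 km rather than the rounded 0.875 km used above; the difference is immaterial.)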

Contributors: Curtis Alexander, Eric Bruning, David Dowell, and Conrad Ziegler.

Changes to fix the Barnes analysis (deprecated 31 May 2005)

Equivalent results also can be obtained by carefully tweaking the parameters in the reorder command file, which doesn't require as much modification of the code. I'll update these instructions once I've figured that out.

Testing of reorder indicates that its "EXPONENTIAL" weighting scheme, which should be Barnes-like, does not compute weights out to a large enough distance from the source data. Several changes are needed in the filter routine to accommodate this. First, declare

real dxyzmult
at the beginning of filter(). Then, the following change should be made where the x, y, and z indices are calculated:
c   The value of dxyzmult is used to increase the cutoff distance
c     for the barnes weight function. At this distance, the weight
c     (w, below) should be some number approaching zero (e.g., 0.01)
      dxyzmult=3.0
c
c  determine the range of the x indices
      do 303 i = 1, ngts
        ixlo(i) = ( x(i)-dxyzmult*dx(i)-xmin )*rcpdx +1.
 303  continue
c
c
      do 305 i = 1, ngts
        ixhi(i) = ( x(i)+dxyzmult*dx(i)-xmin )*rcpdx +1.5
 305  continue
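The y and z index calculations need the same treatment; for instance, the y section would become something like this (the loop labels are illustrative, and the y-named variables are assumed to follow the x pattern; check them against the actual source):

c
c  determine the range of the y indices
      do 307 i = 1, ngts
        iylo(i) = ( y(i)-dxyzmult*dy(i)-ymin )*rcpdy +1.
 307  continue
c
      do 309 i = 1, ngts
        iyhi(i) = ( y(i)+dxyzmult*dy(i)-ymin )*rcpdy +1.5
 309  continue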
Note that you should print out values of w to make sure that w becomes small enough. A bit further down, in the "filter loop," add dxyzmult to the following lines as shown:
      do 503 l3=izlo(igt),izhi(igt)
        if( abs( zval(l3)-z(igt) ).gt. dxyzmult*dz(igt) ) then
          go to 503
        endif
Once again, repeat for x and y as well.

One more change will make it easier to figure out what the Barnes scheme is doing to the data. When calculating the weights, change the exponential section to:

c   exponential weighting
c       w = exp(atten*rsq/bigrsq)

c   Modified so that bigrsq plays no role. Simply choose
c       atten (positive) as the desired value of the smoothing
c       parameter (kappa in Trapp and Doswell, 2000)
        w = exp( -rsq/atten )
The value of atten is specified in the reorder input commands as: WEIGHTING FUNCTION: EXPONENTIAL, 0.5; where 0.5 is the value for atten.
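With this modification, atten is exactly the Barnes smoothing parameter k, so the worked example above would be requested as

WEIGHTING FUNCTION: EXPONENTIAL, 0.306

with XRADIUS, YRADIUS, and ZRADIUS set to 1.187 km each.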