Heterodyne Instrument Data Reduction Tutorial 2

Note: The following tutorial instructions assume that the user has already completed HARP DR Tutorial 1, and has the associated raw data and basic directory structure already in place.

This tutorial also assumes that the computer to be used already has a functioning installation of a recent version of the Starlink software suite (the latest release is available for download here).

Additional reduction/analysis instructions

This tutorial demonstrates how to determine which reduction recipe was run on your data when it is sent through the default reduction, how to specify a different reduction recipe, and how to apply an efficiency correction to your data (converting it to TMB or TR*):

  1. Each ACSIS science observation includes the specification of a default data reduction recipe in the header information. This is originally set by selecting a RECIPE option in the Observing Tool (OT) during MSB preparation. To determine the default recipe for a given dataset, switch to the raw data directory:

    cd data/

    Use either of the following to examine the relevant part of the FITS header component of the data:

    fitsval a20110103_00025_01_0001.sdf RECIPE

    or

    fitslist a20110103_00025_01_0001.sdf | grep RECIPE
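     The grep step above can be mimicked on a mock header dump with standard shell tools; the header lines below are made up for illustration (only the RECIPE keyword matters):

```shell
#!/bin/sh
# Mock fitslist-style header output; the OBJECT and RECIPE values
# here are hypothetical stand-ins for a real ACSIS header.
printf 'OBJECT  = OMC-1\nRECIPE  = REDUCE_SCIENCE_NARROWLINE\n' | grep RECIPE
```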

  2. It is also possible to re-run the pipeline reduction process using a different recipe from the default specified in the FITS header by simply appending the name of the recipe to the command line instruction. To try this, switch to the reduced data directory and try re-running the pipeline reduction with a different recipe:

    cd ../reduced/
    oracdr -files mylist -nodisplay -log sf REDUCE_SCIENCE_NARROWLINE

  3. Try doing the same with the other data. Note that the wide-band observation has two raw data files.

    Relevant recipe names are:

    REDUCE_SCIENCE_NARROWLINE
    REDUCE_SCIENCE_BROADLINE
    REDUCE_SCIENCE_GRADIENT
    REDUCE_SCIENCE_LINEFOREST
    REDUCE_SCIENCE_CONTINUUM

    Experiment to see how the reduction results for a given data file vary with the different recipes.

  4. It is also possible to “tweak” a reduction by supplying your own recipe parameters to the pipeline for use with the specified recipe. For example, you could create the following recipe-parameter file (called here myparams.ini) containing:

    [REDUCE_SCIENCE_NARROWLINE]
    PIXEL_SCALE=4
    REBIN=1

    Then run the pipeline as follows:

    oracdr -files mylist -nodisplay -log sf -recpars myparams.ini REDUCE_SCIENCE_NARROWLINE
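     If you prefer to create the parameter file from the shell, a here-document works; this sketch simply reproduces the file contents shown above:

```shell
#!/bin/sh
# Write the recipe-parameter file used by the -recpars option above.
# The section name and values are exactly those from the example.
cat > myparams.ini <<'EOF'
[REDUCE_SCIENCE_NARROWLINE]
PIXEL_SCALE=4
REBIN=1
EOF
# Quick check: the file should contain two parameter assignments.
grep -c '=' myparams.ini
```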

  5. Once your data have been reduced, it is possible to convert them to TMB or TR*. To do so you will need to take into account the appropriate efficiency, ηMB (for TMB) or ηfss (for TR*), e.g.:

    cdiv in=file.sdf scalar=0.63 out=file_Tmb
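     The cdiv call above simply divides every pixel by ηMB. The arithmetic can be sanity-checked with standard shell tools; the antenna temperature below is a made-up example value:

```shell
#!/bin/sh
# Main-beam efficiency correction: T_mb = T_A* / eta_mb.
# eta_mb = 0.63 matches the cdiv scalar above; T_A* is hypothetical.
eta_mb=0.63
ta_star=1.26
awk -v t="$ta_star" -v e="$eta_mb" 'BEGIN { printf "%.2f\n", t / e }'
```

so a 1.26 K line on the TA* scale would come out at 2.00 K on the TMB scale.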

  6. It is possible to create channel maps of your data using the chanmap command (note that this makes little sense for stare observations):

    chanmap in=cube axis=3 low=-10 high=5 nchan=6 shape=3 estimator=mean

     It is also possible to produce channel maps using GAIA. You might also wish to look at a smaller velocity range than is contained within the cube when making such maps (the following assumes that the line lies between pixels -5 and 15 on the velocity axis):

    chanmap in=\"cube\(::-5:15\)\" axis=3 low=-10 high=5 nchan=6 shape=3 estimator=mean
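     For reference, the low, high, and nchan values in the calls above imply evenly spaced channel boundaries; a quick sketch of that arithmetic:

```shell
#!/bin/sh
# Channel boundaries implied by low=-10, high=5, nchan=6 (as above):
# each channel spans (high - low) / nchan = 2.5 km/s.
awk 'BEGIN {
  low = -10; high = 5; nchan = 6
  width = (high - low) / nchan
  for (i = 0; i < nchan; i++)
    printf "channel %d: %.1f to %.1f km/s\n", i + 1, low + i * width, low + (i + 1) * width
}'
```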

  7. It is possible to create position-velocity diagrams from your data using the collapse command:

    collapse in=reduced.sdf axis=skylat estimator=sum out=pv

     Again, this can also be done in GAIA.
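     What collapse with estimator=sum does along one axis can be sketched on a toy grid (rows standing in for skylat, columns for velocity; the numbers are made up):

```shell
#!/bin/sh
# Sum a toy 3x2 grid along the "row" axis, mimicking
# collapse axis=skylat estimator=sum on a tiny array.
printf '1 2\n3 4\n5 6\n' | awk '
  { for (i = 1; i <= NF; i++) sum[i] += $i }
  END { for (i = 1; i <= NF; i++) print sum[i] }'
```

Each output value is the sum down one column, i.e. one pixel of the position-velocity image.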

  8. Clump finding has many uses (and can form a topic for discussion on its own; see the Master's thesis by M. Watson for further discussion). The two popular clump-finding algorithms in the Starlink software suite are clumpfind and fellwalker. These are part of the CUPID package, so you will need to start up CUPID before you begin:

    cupid

    Then run the findclumps command:

    findclumps in=file.sdf out=clump-file.sdf outcat=clump-cat.FIT

     To run with bespoke parameters (as outlined in the findclumps documentation) you will need to create a configuration file such as the following config.lis:

    ClumpFind.AllowEdge=0
    ClumpFind.Tlow=5*RMS
    ClumpFind.MinPix=16
    ClumpFind.VeloRes=1
    ClumpFind.DeltaT=3*RMS

     Then run findclumps with:

    findclumps in=file.sdf out=clump-file.sdf outcat=clump-cat.FIT config=^config.lis

     To examine the clumps identified, open clump-file.sdf in GAIA. To examine the properties of the clumps as reported in clump-cat.FIT, you can open the file in TOPCAT:

    topcat -f fits clump-cat.FIT
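     The thresholds in the config above translate directly into contour levels: ClumpFind contours the data upwards from Tlow in steps of DeltaT. A sketch of the resulting levels for a hypothetical noise level and peak value:

```shell
#!/bin/sh
# Contour levels for Tlow = 5*RMS, DeltaT = 3*RMS (as in config.lis),
# assuming an illustrative RMS of 0.2 and a peak value of 3.0.
awk 'BEGIN {
  rms = 0.2; tlow = 5 * rms; deltat = 3 * rms; peak = 3.0
  for (level = tlow; level <= peak; level += deltat)
    printf "%.1f\n", level
}'
```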


Other JCMT data reduction/analysis tutorials are available here.
