Summit/EAO Computing Guide

This is a quick guide to accessing and reducing data obtained at the JCMT as an observer.

As outlined on the Summit Duties page, two of the tasks that should be undertaken by an observer are: i) to monitor the scientific quality of the data obtained for your own project when conditions are appropriate for it to be observed, and ii) to monitor and report back on the scientific quality of the data obtained on behalf of other JCMT users. This page has been created to assist with these tasks.

Please note – currently the summit pipeline does not handle POL-2 data. Any analysis of POL-2 will require running the POL-2 data reduction commands yourself. For a POL-2 reduction guide see here.

Account information

JCMT guest accounts (jcmtXX) are valid for 21 days.

Note that it is strongly advised that all visiting observers first log in to their guest account from Hilo (ideally, just after the guest account has been assigned to them). The guest account settings are reset for a new user when that user first logs in from Hilo. If you log in to a guest account at the summit first and only later log in from Hilo for the first time, any work saved while at the summit will be erased.

Observing Software – jcmtObs

Start the GUI either by clicking on the jcmtObs icon or by typing ‘jcmtObs’ in a terminal, and then click on ‘observer up’. For instructions on getting started and a review of the various observer screens initiated by the command ‘observer up’ click here.

Where to Work

When doing any data reduction, or when downloading a large number of files either at the summit or in Hilo, use sc2dr5. We encourage all users to simply log into this computer at the start of a night. Please note that heavy processing on ulili or pueo (the two observer computers at the summit) can inadvertently slow down our observing software! Be aware and always log into and work on sc2dr5:

>> ssh jcmtXX@sc2dr5 -X

You might be asked to approve this connection to sc2dr5 – if so simply type yes at the command line prompt.

You must also work in your guest account user area (this is independent of the computer you are on; we ask that work be undertaken at this location because home space is limited):

>> cd /export/data/visitors/jcmtXX/

Observers tend to find it useful to make a separate folder for each UT night (denoted on this page as <utdate>) of observing:

>> mkdir <utdate>
>> cd <utdate>

Data reduction software

To have the JCMT’s STARLINK software running and available for use you should run:

>> starlink
>> kappa
>> smurf

The starlink command sets up the Starlink software; kappa loads commands for image processing, data visualisation, and manipulation of the standard Starlink data format (the NDF), and smurf loads commands useful for reducing SCUBA-2 data. This software is not needed for running the data reduction pipeline, but it is useful for post-reduction steps and for inspecting files.

Links to manuals of the most often used Starlink programs can be found here.

SCUBA-2 Pipelines at the Summit

Which data

To see which data have been taken by SCUBA-2 on a particular night you can look at the tonight page or simply run:

>> scuba2_index <utdate>

Please replace <utdate> with the date of interest in the form: YYYYMMDD
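If you are unsure of the current UT date in this form, it can be generated directly at the shell prompt (a small convenience sketch, not part of the observatory software):

```shell
# Print the current UT date in the YYYYMMDD form expected by scuba2_index.
utdate=$(date -u +%Y%m%d)
echo "$utdate"
```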

Pipeline Summary

There are currently four pipelines running nightly at the summit: the ‘quick look’ (QL) pipelines, which check data quality, and the ‘summit’ pipelines, which reduce the data and produce science maps. These pipelines run independently at 450 and 850 microns.

  • QL pipeline: For a full summary see: QL_pipeline_summary
    – Produces statistics on each 30s chunk of data.
    – Gives a rapid look at data quality – for science observations no map is produced
    – Produces pointing and focus results for telescope adjustments
    – *mos.sdf files are pointing observations
  • Summit pipeline: For a full summary see: SUMMIT_pipeline_summary
    – Quick reduction on all data from an observation.
    – Science data reduced using dimmconfig.lis with 3 iterations
    – Calibrators reduced using dimmconfig_bright_compact.lis with up to 10 iterations
    – Co-adds of combined observations are also produced
    – Maps produced have default FCF values applied so maps are calibrated in mJy/beam
    – *mos.sdf files are science observations, *reduced.sdf files are from pointings

Monitoring the pipeline output

You can monitor the messages and results produced by the pipeline using the ORAC-DR monitor program. The easiest way to do this is to run the setup script for the pipeline of interest and then start oracdr_monitor with the --uselocation option. For example for the summit pipeline:

oracdr_scuba2_850_summit
oracdr_monitor --useloc --nodisp

or for the quick-look pipeline:

oracdr_scuba2_850_ql
oracdr_monitor --useloc --nodisp

Where to find the pipeline files

Typically on a night the pipelines are run on the following machines:

QL pipeline:
  dr2 : 450µm
  dr4 : 850µm

Summit pipeline:
  dr1 : 450µm
  dr3 : 850µm

You can find the results from the summit pipeline for 450µm and 850µm here:

>> ls /jcmtdata/reduced/dr1/scuba2_450/<utdate>/*
>> ls /jcmtdata/reduced/dr3/scuba2_850/<utdate>/*
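The two paths differ only in machine and wavelength, so they can be built with a small helper function. This is purely a convenience sketch: the dr1/dr3 machine assignment follows the table above and may change from night to night, so check with your Telescope Operator.

```shell
# Sketch: build the summit-pipeline output directory for a given
# wavelength (450 or 850) and UT date. The dr1/dr3 mapping is an
# assumption based on the usual nightly setup.
reduced_dir() {
  wavelength=$1
  utdate=$2
  case $wavelength in
    450) echo "/jcmtdata/reduced/dr1/scuba2_450/$utdate" ;;
    850) echo "/jcmtdata/reduced/dr3/scuba2_850/$utdate" ;;
    *)   echo "unknown wavelength: $wavelength" >&2; return 1 ;;
  esac
}

reduced_dir 850 20130408   # prints /jcmtdata/reduced/dr3/scuba2_850/20130408
```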

If you are unable to find the data ask your Telescope Operator.

Main files of interest

The main files to inspect are the individual or mosaic files from the summit pipeline. The individual reduced files have the format s<UTDATE>_<OBSNUM>_<WAVELENGTH>_reduced.sdf (for example s20130408_38_850_reduced.sdf) and can be listed by doing:

>> ls /jcmtdata/reduced/dr3/scuba2_850/<utdate>/s20*_reduced.sdf

These are individual files only. The mosaic files are created one per target per frequency. These are the science images and will be single observations (if only a single target per frequency was observed) or co-adds if there were multiple repeats of an MSB during the night. These have the format gs<UTDATE>_<OBSNUM>_<WAVELENGTH>_mos.sdf (for example, gs20130408_38_850_mos.sdf) and can be listed by doing:

>> ls /jcmtdata/reduced/dr3/scuba2_850/<utdate>/g*mos.sdf
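The naming convention above can be unpacked with plain shell parameter expansion if you want to pull out the date, observation number, or wavelength from a filename. This is only an illustrative sketch of the gs<UTDATE>_<OBSNUM>_<WAVELENGTH>_mos.sdf pattern, not an official tool:

```shell
# Split a mosaic filename of the form gs<UTDATE>_<OBSNUM>_<WAVELENGTH>_mos.sdf
# into its components; purely illustrative.
f=gs20130408_38_850_mos.sdf
base=${f%_mos.sdf}      # strip the trailing  _mos.sdf  -> gs20130408_38_850
base=${base#gs}         # strip the leading   gs        -> 20130408_38_850
IFS=_ read -r utdate obsnum wavelength <<EOF
$base
EOF
echo "UT date: $utdate, obs: $obsnum, wavelength: $wavelength um"
```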

To ensure all processing is done on the sc2dr5 computer it may be helpful to first copy over the files of interest to your working area on sc2dr5:

>> cp /jcmtdata/reduced/dr3/scuba2_850/<utdate>/s20*_reduced.sdf .

or

>> cp /jcmtdata/reduced/dr3/scuba2_850/<utdate>/g*mos.sdf .

When examining mosaic files you can find out how many observations went into a single group file by running:

>> provshow file.sdf roots | grep reduced

This will display a list of all the files that were combined to make up the group reduction.
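To turn that listing into a simple count, pipe it through grep -c. Because provshow is only available once Starlink is set up, the sketch below counts lines in a saved copy of its output instead; the file prov.txt and the filenames in it are made up for illustration, following the reduced-file naming convention above.

```shell
# Count how many reduced files contributed to a group mosaic.
# 'prov.txt' stands in for saved 'provshow gs..._mos.sdf roots' output.
cat > prov.txt <<EOF
s20130408_37_850_reduced
s20130408_38_850_reduced
s20130408_41_850_reduced
EOF
n=$(grep -c reduced prov.txt)
echo "observations combined: $n"
```

With a live Starlink session the equivalent would be piping provshow itself into grep -c.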

Visualizing the data

To view the file you are interested in you can use GAIA:

>> gaia file.sdf

GAIA allows you to visually inspect the data and has some tools to examine the noise and any visible signal. You can use the “select NDF in container file” window to look at the data, the error, the variance, the exposure time etc. At this point you should be noting: is there emission? is it bright? compact? are there structures within the map? is the map empty? – this feedback can go into a log comment for a particular observation.

Estimating the noise

Although the summit pipeline output is not good enough to use for science, it provides a useful noise estimate. You can estimate the rms within an observation/file in one of two ways:

  1. Produce a copy of the file containing the error component of the map and open up this file in GAIA:
    >> ndfcopy in=file.sdf comp=err out=file_error
    >> gaia file_error.sdf &

    Once the map is open in GAIA, select a region of noise using the ‘Image region’ tool under the ‘Image-Analysis’ menu and choose ‘Selected Stats’ to return the mean of your selected area. When doing this you should be aware of the size of the observation (see the SCUBA-2 observing modes sections here).

  2. Use the PICARD recipe named SCUBA2_MAPSTATS, which will calculate various properties of the observation, its noise, and its average NEFD given a single reduced SCUBA-2 observation. This command will produce an output file named ‘log.mapstats’ in the directory specified by the environment variable ‘ORAC_DATA_OUT’ (if set), or in the current directory otherwise. You can run the command as follows:

    >> picard -log sf SCUBA2_MAPSTATS file.sdf

    Be cautious – this method only examines the rms in the central 120″-radius region of the map for daisy observations and the central 900″-radius region for PONG1800 observations.

    Although the output file ‘log.mapstats’ can be viewed with any text editor, it is useful to read this file with TOPCAT:

    >> topcat -f ascii log.mapstats

    After running the command above, click on “Views -> Table Data” to display the table.

    The log.mapstats file contains useful information such as the observation UT, HST, scan number, elevation, airmass, opacity, noise equivalent flux density, etc. The main columns of interest, however, are ‘rms’ and ‘rms_units’, which report the rms noise in the map and its units.

Whichever method you use for estimating the rms in a map, be sure to be clear in the log comments for a particular observation:

i) Which data did you look at: a single observation or a mosaicked observation (see the “provshow” command, above)? Include the filename to be clear.

ii) How did you estimate the rms – in GAIA on the error component? Within the central 90″ radius? Using MAPSTATS?

iii) What are the units of the rms? This can be found by using the kappa command ndftrace, looking at the FITS header (visualise your data using GAIA -> select ‘View’ from the headings -> select ‘Fits header…’), or as a column in the log.mapstats file if you estimated the noise using the second method presented above.

ACSIS Pipeline at the Summit

Which data

To see which data have been taken by ACSIS on a particular night you can look at the tonight page or simply run:

>> acsis_index <utdate>

Pipeline Summary

There is a single ACSIS pipeline which runs at the summit, usually on dr1. It provides both the QA check for the TSS and the pipeline images for observers to view. The summit pipeline reduces data with the heterodyne pipeline; links to the specific summit reductions can be found in Appendix D – Summit Recipes.

Please note these are almost (but not quite) as good as the offline ACSIS pipeline recipes with which all data are reduced at EAO prior to being sent to and stored in the archive at CADC. For more information about these recipes you are recommended to read the ACSIS pipeline chapter of the Heterodyne Data Reduction Cookbook.

Monitoring the pipeline output

You can monitor the messages and results produced by the pipeline using the ORAC-DR monitor program. The easiest way to do this is to run the setup script for the pipeline of interest and then start oracdr_monitor with the --uselocation option. For example:

oracdr_acsis_summit
oracdr_monitor --useloc --nodisp

Where to find pipeline files

Reduced ACSIS data can be found here:

>> ls /jcmtdata/reduced/dr1/acsis/<UTdate>/*

This directory will contain files prefixed by ‘a’ and ‘ga’. The ‘a’ files are individual observations while the ‘ga’ files are the group reductions (note the group file may only contain a single member).

Main files of interest

For observers wishing to quickly assess their data, the main files of interest are:

ga…reduced001.sdf The final cube. This has been thresholded and baseline-subtracted. If there are multiple *reduced00* files associated with one raster map, use GAIA to visualise each one (see below) and determine which is the main data file (the file containing the raster map). Discontinuities in the data reduction sometimes create small sections of the edge of the main map; these additional files do not contain useful information.
ga…integ.sdf Integrated intensity image. Essentially the baselined cube collapsed along its frequency axis, but with regions without emission masked out. This is identical to ga…rimg.sdf.
ga…iwc.sdf Intensity-weighted velocity/frequency map. As ga…integ.sdf, but deriving the intensity-weighted “average” velocity along each line of sight (i.e. the first moment).

For a full description of all the files written out by the ACSIS pipeline, see here.

Visualising the data

It is typically recommended that you copy the files you are interested in to your own working area on sc2dr5:

>> cp /jcmtdata/reduced/dr1/acsis/<utdate>/files.sdf .

To view the file you are interested in you can use GAIA:

>> gaia file.sdf

GAIA allows you to visually inspect the data and has some tools to examine the noise and any visible signal. You can use the “select NDF in container file” window to look at the data, the error, the variance, the exposure time etc. You can also visualize spectra in GAIA, or use “Send: replace” to send a spectrum from GAIA into SPLAT.

To open up a spectrum directly in SPLAT simply run:

>> splat file.sdf

Estimating the Noise

For a quick check of the noise for ACSIS data (not individual spectra), you have three options.

  1. Open your map with GAIA. Select a region of noise using the ‘Image region’ tool under the ‘Image-Analysis’ menu and choose ‘Selected Stats’ to return the standard deviation of your selected area.
  2. Use the KAPPA command stats. You may choose to use the comp=err option, which will report back the statistics of the error component of the map and thus not be contaminated by any strong sources.
    >> stats map.sdf comp=err
  3. Open the representative spectrum in SPLAT and use the stats tool within – before or after binning the data. The representative spectrum can be found in the same /jcmtdata/reduced/dr1/acsis/<UTdate>/ directory as the *reduced001.sdf file; it has the suffix *rsp.sdf, which stands for “representative spectrum”.

For details on the error within a map see sc20, The Heterodyne Data Reduction Cookbook.

Whichever method you use for estimating the rms in a map, be sure to be clear in the log comments for a particular observation:

i) Which data did you look at: a single observation or a mosaicked observation (check which observations are included in your file by typing ‘provshow file.sdf’ at the prompt)? Include the filename to be clear.

ii) How did you estimate the rms? Be specific.

iii) What are the units of the rms? This can be found by using the kappa command ndftrace, or by looking at the FITS header (visualise your data using GAIA -> select ‘View’ from the headings -> select ‘Fits header…’).

Other useful DR commands

There are specialized post-processing applications in PICARD. These include:

MOSAIC_JCMT_IMAGES An alternative to wcsmosaic; it preserves all the HDS components of the individual observations, allowing you to inspect the exposure time/error maps and weights in your final co-added maps.
CROP_JCMT_IMAGES Use this to remove the rough edges of your data, especially useful for SCUBA-2 maps. This will default to the map size defined in the MSB.

To run the PICARD recipes you can do:

>> picard -log sf MOSAIC_JCMT_IMAGES file1.sdf file2.sdf

Other commands

Find your pixel size and axes:

>> ndftrace file.sdf 

Find the individual observations that went into your map:

>> provshow file.sdf 

Find out what commands have previously been run on your data:

>> hislist file.sdf 

To see the header information of your file:

>> fitslist file.sdf

If you know the keyword you want to extract, this can be explicitly specified:

>> fitsval file.sdf OBJECT
>> fitsval file.sdf OBSNUM

It is possible to retrieve your data from sc2dr5 from anywhere in the world as follows:

>> scp jcmtXX@ssh.eao.hawaii.edu:/net/sc2dr5/export/data/visitors/jcmtXX/... <destination>

It is possible to log into sc2dr5 from anywhere in the world as follows (remember that your account is only valid for 21 days before all your data is purged!):

>> ssh jcmtXX@ssh.eao.hawaii.edu
>> ssh -X sc2dr5

The raw data

Some visitors may wish to access the raw data themselves – particularly if they wish to run their own data reductions.

The raw data – SCUBA-2

SCUBA-2 data can be found here (s4/s8 selects 450 or 850 microns and a/b/c/d the sub-array; wildcards can be used to select all data):

>> ls /jcmtdata/raw/scuba2/s<4/8><a/b/c/d>/<UT date>/<obsnumber>/s*.sdf

e.g.

>> ls /jcmtdata/raw/scuba2/s8a/20150616/00016/s*.sdf

to get the data from the ‘a’ sub-array. To get all 850 micron data:

>> ls /jcmtdata/raw/scuba2/s8?/20150616/00016/s*.sdf

You may add observations to a text file as follows (useful if you want to run your own reductions on the raw data as described here for SCUBA-2):

>> ls /jcmtdata/raw/scuba2/s<4/8><a/b/c/d>/<UT date>/<obsnumber>/s*.sdf > mylist.lis

and append by:

>> ls /jcmtdata/raw/scuba2/s<4/8><a/b/c/d>/<UT date>/<obsnumber>/s*.sdf >> mylist.lis
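The `>` and `>>` redirections above can be combined in a loop to gather several sub-arrays or observations into one list. The sketch below demonstrates the pattern on a scratch directory with fabricated file names, since the real /jcmtdata paths are only available on the summit machines:

```shell
# Build a list of raw files across sub-arrays: '>' truncates the list,
# '>>' appends to it. Demonstrated on a scratch directory with dummy files.
mkdir -p scratch/s8a scratch/s8b
touch scratch/s8a/s8a20150616_00016_0001.sdf \
      scratch/s8b/s8b20150616_00016_0001.sdf

: > mylist.lis                          # start with an empty list
for sub in s8a s8b; do
  ls scratch/$sub/s*.sdf >> mylist.lis  # append each sub-array's files
done
wc -l < mylist.lis                      # 2 files listed
```

On the summit the loop body would point at /jcmtdata/raw/scuba2/... instead of the scratch directory.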

If the data have been removed from the summit computer, they may be accessed via:

>> ls /net/mtserver/export/data/jcmtdata/raw/scuba2/s<4/8><a/b/c/d>/<UT date>/<obsnumber>/s*.sdf

The raw data – ACSIS

ACSIS data can be found here:

>> ls /jcmtdata/raw/acsis/spectra/<UT date>/<obsnumber>/a*.sdf

e.g.

>> ls /jcmtdata/raw/acsis/spectra/20150623/00025/a*.sdf

You may add observations to a text file as follows (useful if you want to run your own reductions on the raw data as described here for ACSIS):

>> ls /jcmtdata/raw/acsis/spectra/<UT date>/<obsnumber>/a*.sdf > mylist.lis

and append by:

>> ls /jcmtdata/raw/acsis/spectra/<UT date>/<obsnumber>/a*.sdf >> mylist.lis

If the data have been removed from the summit computer, they may be accessed via:

>> ls /net/mtserver/export/data/jcmtdata/raw/acsis/spectra/<UT date>/<obsnumber>/a*.sdf

Reducing your data

Please remember to do all reductions on sc2dr5.

SCUBA-2

Data reduction information can be found here for SCUBA-2.

POL-2

Data reduction information can be found here for POL-2.

Note that when running a POL-2 reduction on sc2dr5 you will also need to ensure temporary files are saved on sc2dr5. Once you have logged into sc2dr5 and moved to your working space, it is recommended you create a tmp folder and then run:

>> mkdir tmp

>> setenv STAR_TEMP tmp/    (csh/tcsh syntax; in bash, use: export STAR_TEMP=tmp/)

prior to running pol2map.

ACSIS

Data reduction information can be found here for ACSIS.

Computing resources

Private laptops – wireless

A wireless network is available at the lower sites through EAO (Hilo) or MKSS (HP) for your personal use. Wireless network devices, including Bluetooth devices, are not permitted to be switched on at the summit.

Hilo
Make sure your computer is set up to obtain an address from the network (use DHCP). Boot your computer, start your web browser, and go to any page. You will see a registration page for the EAO wireless network. Enter your EAO guest username and password (NOTE: these are never sent over the network), then read and agree to the EAO Acceptable Use Policy. If the registration page does not appear, make sure the URL is http://192.168.20.1. If you are successful, the next page will tell you to reboot your computer (if not, please contact a member of the Computing Services Group). You can reboot, or, if you know how, just wait 2 minutes and then renew your DHCP lease. At this point, you should be able to get onto the Internet – connecting to JAC systems will require you to use SSH to connect to our public SSH server, first.
HP
Simply make sure you are set up to get an address from the network via DHCP. No login is required.
Summit
You are required to turn off all wireless devices including Wi-Fi, Bluetooth and cellphones at the summit to prevent interference with sensitive instrumentation.

Printing

The printer queues are called hilo, hp, and jcmt as appropriate for your physical location. From a unix system, type

lp -d <queue> filename

where <queue> is one of the aforementioned queues.

Additional Hale Pohaku restrictions

Note that because our HP terminal room cannot be physically secured, we have had to provide very restrictive firewall rules at this site. We regret the inconvenience, but unfortunately we are liable for any criminal acts committed through our equipment, and this site has been compromised in the past. Only the following services are open from the EAO network at HP: No incoming connections, outgoing HTTP (port 80), HTTPS (port 443), SSH (port 22) and FTP (ports 20/21). The MKSS wireless network is more permissive – if you need another service (such as POP) try going through that.

The primary purpose of the HP workstations is to allow access to the main EAO facilities (Hilo and summit) and very little software is installed, but the Observing Tool and standard browsers are available.

  • To log onto Hilo, type:
    ssh -l jcmtXX ssh
    

    You can also ssh to any remote machine which supports ssh; in that case, of course, the full hostname is required. X windows should tunnel through ssh and so there is no need to set up X display parameters.

  • To run the OT, simply start a terminal and type jcmtot as appropriate.
  • If for some reason there is a problem with the OT installation at HP, ssh into one of the summit machines, such as ulili or pueo for JCMT, and go from there.