Using the GDC for SDO


Cleaning up

Your job is successfully running, but what is it doing?

First, the pipeline creates a list of the files to map, together with the mapping parameters. This list is stored in your "Inputs" directory, which sits under the directory you specified with the "lfmaindir" parameter in your params.in file.
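For reference, the relevant line of params.in might look like the fragment below. Only the "lfmaindir" parameter name comes from this page; the path shown is hypothetical and should be replaced with your own working directory.

```
# Hypothetical excerpt from params.in
lfmaindir = /scratch/seismo/your_user_name/trackmap_work   # "Inputs" is created under this directory
```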

Each job you submit has an independent folder named "run0001" (or similar) in both of these directories:
(1) /scratch/seismo/pegasus_data/exec/dags/your_user_name/pegasus/trackmap/
(2) /scratch/seismo/pegasus_data/exec/your_user_name/pegasus/trackmap/
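A quick way to see your active runs is to list the run directories in both locations. Because the cluster paths above are site-specific, the sketch below demonstrates the pattern on a throwaway temporary directory instead:

```shell
#!/bin/sh
# List Pegasus run directories (run0001, run0002, ...) under a base path.
list_runs() {
  ls -d "$1"/run* 2>/dev/null
}

# Demo on a temporary directory standing in for, e.g.,
# /scratch/seismo/pegasus_data/exec/your_user_name/pegasus/trackmap/
BASE=$(mktemp -d)
mkdir -p "$BASE/run0001" "$BASE/run0002"
list_runs "$BASE"
```

On the cluster you would call `list_runs` on each of the two trackmap directories above instead of the demo path.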

The "run" directory in (1) holds the error (".err") logs and the DAG (".dag") files that drive the run. I don't usually need to check these when I get an error, but when I do I need Ray to decipher them.

The "run" directory in (2) is where each mapped frame is stored, so you can check the progress of a run here. When all the frames are mapped, the pipeline bundles them into a datacube and ingests it into the DRMS. This directory also contains output (".out") and error (".err") logs. If something goes wrong, the error files are the first place you should look.
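To check on a run, you can count the mapped frames in its directory and look for non-empty error logs. The file names below (`frame_*.fits`, `job.err`) are purely illustrative, and the demo uses a temporary directory in place of the real run directory under location (2):

```shell
#!/bin/sh
# Demo directory standing in for run0001 under location (2).
RUN_DIR=$(mktemp -d)
touch "$RUN_DIR/frame_0001.fits" "$RUN_DIR/frame_0002.fits"
: > "$RUN_DIR/job.err"   # an empty .err log means no errors so far

# How many frames have been mapped so far?
echo "frames mapped: $(ls "$RUN_DIR" | grep -c '\.fits$')"

# Non-empty .err files are the first place to look if something fails:
find "$RUN_DIR" -name '*.err' -size +0c
```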

If the run is successful, however, you no longer need either of these "run" directories, and they can become quite large. So if everything is successful: PLEASE DELETE THEM!

You can also delete anything in your "Inputs" directory.
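Once a run has succeeded and its datacube is in the DRMS, the cleanup itself is just removing the matching "run" folder in both locations (and the files in your "Inputs" directory). A cautious sketch, demonstrated on a throwaway directory rather than a real run folder:

```shell
#!/bin/sh
# Demo directory standing in for a finished run0001 folder.
RUN_DIR=$(mktemp -d)
touch "$RUN_DIR/frame_0001.fits"

# Double-check you have the right directory before deleting it!
echo "about to delete: $RUN_DIR"
rm -rf "$RUN_DIR"
```

On the cluster, remember that each run has a folder in *both* directories (1) and (2), so run the delete on each.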






web-master: schunker [at] mps.mpg.de