Your job is successfully running, but what is it doing?
First, it creates a list of files to map, together with some parameters; this is stored in your "Inputs" directory, whose location was specified by the "lfmaindir" parameter in your params.in file.
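For orientation, here is a minimal, hypothetical sketch of the relevant line in params.in; the key/value format, the comment syntax, and the example path are all assumptions, and only lfmaindir is covered in this guide:

    # hypothetical excerpt from params.in -- format assumed, only lfmaindir is discussed here
    lfmaindir = /scratch/seismo/pegasus_data/exec/your_user_name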
Each job you submit gets its own folder, named "run0001" (or similar), in each of these directories (a short path-building sketch follows the list):
(1) /scratch/seismo/pegasus_data/exec/your_user_name/Inputs/ (contains the output of running the Java code, i.e. the lists of files for mapping)
(2) /scratch/seismo/pegasus_data/exec/your_user_name/dags/ (contains the code for running the pipeline)
(3) /scratch/seismo/pegasus_data/exec/your_user_name/pegasus/trackmap/ (contains the output of drms_mapping.c, which is the input for drms_cube_care.c)
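If you want to jump straight to a particular run's folders, a small Python sketch along these lines builds the three paths above; the user name and run number are placeholders you would substitute:

    # Sketch: build the three per-run directories described in the list above.
    # "your_user_name" and "run0001" are placeholders, not real values.
    from pathlib import Path

    EXEC_ROOT = Path("/scratch/seismo/pegasus_data/exec")

    def run_dirs(user, run="run0001"):
        base = EXEC_ROOT / user
        return {
            "Inputs":   base / "Inputs" / run,                # (1) lists for mapping, .err/.dag files
            "dags":     base / "dags" / run,                  # (2) pipeline code, mapped frames, logs
            "trackmap": base / "pegasus" / "trackmap" / run,  # (3) output of drms_mapping.c
        }

    if __name__ == "__main__":
        for name, path in run_dirs("your_user_name").items():
            print(f"{name:9s} {path}  exists={path.exists()}")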
In the "run" directory in (1) are error ".err" logs and dag ".dag" files for the code to run. I don't usually need to check these if I get an error, but if I do I need Ray to decipher them.
In the "run" directory in (2) each mapped frame is stored, so you can check the progress of the data here. When all the frames are mapped, it bundles them into a datacube and ingests it into the DRMS. This directory also contains output ".out" logs and errors ".err" logs. If something goes wrong the error files are the first place you should look.
If the run is successful, however, you no longer need either of these "run" directories, and they can become quite large. So if everything is successful, PLEASE DELETE THEM!
You can also delete anything in your "Inputs" directory.
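Once you have confirmed that the datacube was ingested into the DRMS successfully, a cleanup sketch along these lines removes the bulky per-run directories; the user name and run number are placeholders, and it does a dry run by default so you can double-check the paths before anything is deleted:

    # Sketch: delete a finished run's "run" directories in Inputs/ and dags/.
    # ONLY run this after confirming the datacube is safely in the DRMS.
    import shutil
    from pathlib import Path

    def clean_up(user, run="run0001", dry_run=True):
        base = Path("/scratch/seismo/pegasus_data/exec") / user
        for target in (base / "Inputs" / run, base / "dags" / run):
            if not target.is_dir():
                continue
            if dry_run:
                print(f"would delete {target}")
            else:
                shutil.rmtree(target)
                print(f"deleted {target}")

    if __name__ == "__main__":
        clean_up("your_user_name")                    # dry run: only prints what it would delete
        # clean_up("your_user_name", dry_run=False)   # actually delete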