Execution at: 2018-04-26 09:51:18
Pass
DM Illinois dev01
Prepare for release installation
Execution at: 2018-04-25 14:56:05
Pass
Prepare for release installation
The environment is set up.
Set up the stack on the GPFS filesystem accessible at /software on lsst-dev01,
following the instructions at:
https://pipelines.lsst.io/install/newinstall.html.
- download newinstall.sh (curl)
- execute newinstall.sh
This will prepare the environment for the latest version of the stack,
currently 15.0.
Activities performed:
- download of newinstall.sh using curl
- execution of newinstall.sh with option -ct
- setup of the environment
- download of the release using eups
The steps need to be reviewed and made more general.
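For reference, a minimal sketch of the activities above. The newinstall.sh URL and the v15_0 eups tag are assumptions taken from the public install documentation, not from this test run:

    # Download the installer and run it with the -ct options recorded above
    curl -OL https://raw.githubusercontent.com/lsst/lsst/master/scripts/newinstall.sh
    bash newinstall.sh -ct

    # Initialize the new environment and fetch the release with eups
    source loadLSST.bash
    eups distrib install -t v15_0 lsst_distrib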
Load environment
Execution at: 2018-04-25 15:20:08
Pass
Load environment
The corresponding version of lsst_distrib is downloaded in the local filesystem, currently 15.0.
The lsst_distrib top level package will be enabled:
- source /software/lsstsw/stack3/loadLSST.bash
- setup lsst_distrib
This actually requires:
- running of shebangtron
- setup lsst_distrib
The steps need to be reviewed and made more general.
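A minimal sketch of this sequence, assuming the stack path recorded above and the shebangtron invocation from the public install documentation:

    # Activate the installed stack
    source /software/lsstsw/stack3/loadLSST.bash

    # Run shebangtron to fix interpreter paths in installed executables
    curl -sSL https://raw.githubusercontent.com/lsst/shebangtron/master/shebangtron | python

    # Enable the top-level package
    setup lsst_distrib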
Download "LSST Stack Demo"
Execution at: 2018-04-25 15:22:30
Pass
Download "LSST Stack Demo"
The “LSST Stack Demo” package will be downloaded onto the test system and uncompressed.
Note that this has to be consistent with the stack version at points 1 and 2.
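The download source is not recorded above. A sketch assuming the demo is fetched from the lsst_dm_stack_demo repository on GitHub at a tag matching the stack version:

    # Hypothetical archive URL; the tag must match the stack version (15.0)
    curl -LO https://github.com/lsst/lsst_dm_stack_demo/archive/15.0.tar.gz
    tar xzf 15.0.tar.gz
    cd lsst_dm_stack_demo-15.0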
Demo Execution
Execution at: 2018-04-25 18:16:52
Pass
Demo Execution
The string “Ok.” should be returned.
The demo package will be executed by following the instructions in its “README“ file.
This actually requires:
- create a folder
- download the test dataset (using curl)
- setup obs_sdss
- run the demo
- ./bin/compare detected-sources.txt
The result is: Ok.
Test steps need to be reviewed.
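For reference, a sketch of the full sequence. The dataset URL is not recorded here, and the demo driver script name is an assumption based on the demo's README conventions:

    # Create a working folder and download the test dataset
    mkdir demo && cd demo
    curl -LO <test-dataset-url>    # URL not recorded in this report

    # Enable the SDSS obs package and run the demo
    setup obs_sdss
    ./demo.sh                      # driver script name per the demo README (assumption)

    # Compare against the reference output; the expected result is "Ok."
    ./bin/compare detected-sources.txt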
Preparing LSST-VC
Execution at: 2018-04-26 09:50:05
Pass
Preparing LSST-VC
A shell on an LSST-VC compute node will now be obtained by executing:
- $ srun -I --pty bash
This step implies repeating steps 1 to 4 above on a different platform. It is marked here as passed; however, this will require running the same test case twice in different environments.
Demo Execution on LSST-VC
Execution at: 2018-04-26 09:51:18
Pass
Demo Execution on LSST-VC
The same result was obtained.
The demo package will be executed on the compute node.
This step repeats steps 1 to 4 above on a different platform. It is marked here as passed; however, this will require running the same test case twice in different environments.
Execution at: 2018-04-26 10:22:04
Pass
DM Verification Cluster
Setup
Execution at: 2018-04-26 10:13:23
Pass
Setup
- The LSST Science Pipelines and the DESDM Framework, plugins, and integration codes as described in §4.2.4.2 have already been installed. The operator merely sets up the expanded stack using eups (see the sketch after this list).
- Input raw and calibration data must exist in the Data Backbone. If not, the data will be ingested into the Data Backbone.
- The operator tags and blacklists input data as appropriate for the test (§4.2.5).
- Given the LSST Science Pipelines version, the operator will generate the full config files and schema files (§4.2.7), which are then ingested into the Data Backbone.
- Write a DRP pipeline workflow definition file from scratch, or modify an existing file from GitHub, making its operations- and dataset-specific inputs match this test.
- (a) For LDM-503-2, the pipeline workflow definition file is written in the workflow control language (wcl) format used by the DESDM Framework.
- Make special hardware requests (e.g., disk or compute node reservations) if needed.
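A minimal sketch of the eups setup in the first item, assuming the stack location used elsewhere in this report and an already-installed release:

    # Activate the pre-installed stack and set up the expanded release
    source /software/lsstsw/stack3/loadLSST.bash
    setup lsst_distrib

    # Confirm which version is active
    eups list lsst_distrib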
Setup done as requested. The test case should be updated to avoid cross references, so that the step is self-contained in the test run context.
Execution
Execution at: 2018-04-26 10:15:31
Pass
Execution
- If HTCondor processes are not already running, start HTCondor processes on the compute nodes. This step makes the compute nodes join the HTCondor Central Manager to create a working HTCondor pool.
- The execution for each tract of the input data in §4.2.5 will be submitted to the hardware in §4.2.4.1 using the configuration in §4.2.7.
- During execution, the operator will use software to demonstrate the ability to check the processing status (see the sketch after this list).
- (a) For LDM-503-2, the available Batch Production Service monitoring software consists of two commands: one to summarize currently submitted processing, and one to get more details of a single submission.
- If the processing attempt completes, the attempt is marked as completed and tagged as a potential input for the next test steps. These campaign tags are used to make pre-release QA queries simpler.
- If the processing attempt fails, the attempt is marked as failed.
- If the processing attempt fails due to certain infrastructure configuration issues or transient instability (e.g., network blips), the processing of the tract can be tried again after the problem is resolved.
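For the status checks above, a sketch using standard HTCondor tools; the two Batch Production Service commands themselves are not named in this report:

    # Verify the compute nodes have joined the pool
    condor_status

    # Summarize currently submitted processing
    condor_q

    # Get more detail on a single submission (hypothetical job id)
    condor_q -better-analyze 1234.0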
Execution completed without problems.
Basic Data Completeness and Integrity Checks
Execution at: 2018-04-26 10:22:04
Pass
Basic Data Completeness and Integrity Checks
- When the execution finishes, the success of the execution will be verified by checking the existence of the expected output data.
- (a) For each of the expected data product types (listed in §4.3.2) and each of the expected units (visits, patches, etc.), verify the data product is in the Data Backbone and has a file size greater than zero via DB queries (see the sketch after this list).
- (b) Verify that the physical and location information in the Data Backbone DB matches the Data Backbone filesystem and vice versa.
- Check that each data product type has appropriate metadata saved for each file as defined in §4.2.7.
- Check provenance:
- (a) Verify that each file can be linked with the step and processing attempt that created it via the Data Backbone.
- (b) Verify that the information linking input files to each step was saved to the Oracle database.
- Check runtime metrics, such as the number of executions of each code, wallclock per step, wallclock per tract, etc.
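A sketch of the kind of DB query behind check 1(a). The connection details and the desfile table/column names are assumptions modeled on the DESDM schema, not taken from this report:

    # Any row returned here is a zero-length (i.e., failed) output file
    sqlplus -S "$DB_USER/$DB_PASS@$DB_TNS" <<'SQL'
    SELECT filename, filesize
      FROM desfile
     WHERE filesize <= 0;
    SQL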
1. Check the existence of the expected files: PASSED
- (a) Table 1 lists all release data products per tract. For each product, we provide the expected number of files to be generated (where available) and the number of files generated in practice. Each of these files was checked to ensure that it contained some data (i.e., the size of the file was non-zero).
- (b) To verify that the physical location of files on the filesystem matches the location information tracked in the Data Backbone database tables, we used the tool compare_db.py from the DESDM FileMgmt package. Paths, file sizes, and checksums (MD5) were compared. Both the database and the filesystem matched, with 50656 files in tract 8766, 52041 files in tract 8767, and 273375 files in tract 9813.
2. Check existence of the expected metadata: PASSED
The following metadata is expected to have been saved:
- calexp: tract, visit, filter, ccd
- deepCoadd_calexp: tract, patch, filter
It was verified that the above-mentioned metadata had non-NULL values stored for the data products in the Data Backbone database tables.
3. Check existence of the expected provenance: PASSED
- (a) For each file, the provenance system was checked to ensure that
there were no:
- i. Missing direct association of output files with the processing attempt;
- ii. Missing “was-generated-by” association (per the Open Provenance Model, [5]);
- iii. Instances in which the “was-generated-by” association did not belong to the specified processing attempt.
- (b) Via manual spot checks, it was verified that information linking input files to each step was saved to the Data Backbone database tables.
4. Check existence of runtime metrics: PASSED
- (a) Table 2 shows wall-clock time for running the entire pipeline for each tract and total CPU time per execution.
- (b) Table 3 provides details of execution time and memory usage at a per-process level.
Execution at: 2018-04-26 10:32:10
Pass
DM Illinois dev01
Initialize Stack
Execution at: 2018-04-26 10:29:56
Pass
Initialize Stack
The DM Stack shall be initialized using the loadLSST script (as described in DRP-00-00).
DM Stack initialized
Initialize "Data Butler"
Execution at: 2018-04-26 10:30:43
Pass
Initialize "Data Butler"
A “Data Butler” will be initialized to access the repository.
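A minimal sketch of the initialization against the Gen2 Butler API shipped with stack v15.0, assuming a hypothetical repository path:

    python - <<'PY'
    from lsst.daf.persistence import Butler   # Gen2 Butler (stack v15.0)

    # Hypothetical repository path for the reprocessing rerun
    butler = Butler('/datasets/hsc/repo/rerun/RC/w_2017_44/DM-12388')
    PY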
Step marked as passed, but it seems this should be a prerequisite, like the previous step 1.
Execution
Execution at: 2018-04-26 10:32:10
Pass
Execution
For each of the expected data product types (listed in Test Items - §4.3.2) and each of the expected units (PVIs, coadds, etc.), the data product will be retrieved from the Butler and verified to be non-empty.
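A sketch of the per-product check, with hypothetical repository path and dataIds; each product is retrieved through the Butler and verified to be non-empty:

    python - <<'PY'
    from lsst.daf.persistence import Butler

    butler = Butler('/datasets/hsc/repo/rerun/RC/w_2017_44/DM-12388')  # hypothetical path

    # Example dataIds; in practice this is looped over every expected unit
    calexp = butler.get('calexp', visit=1228, ccd=49)
    coadd = butler.get('deepCoadd_calexp', tract=8766, patch='5,5', filter='HSC-I')
    for exp in (calexp, coadd):
        assert exp.getWidth() > 0 and exp.getHeight() > 0   # non-empty image
    PY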
The week 44 reprocessing of the Hyper Suprime-Cam RC1 dataset was used to execute this test case. Details of this reprocessing—including failures, which are acceptable per the test specification—are available at DM-12388. All expected products were found to exist.
Execution at: 2018-04-26 11:05:47
Pass
DM Illinois dev01
Initialize Stack
Execution at: 2018-04-26 11:01:46
Pass
Initialize Stack
The DM Stack shall be initialized using the loadLSST script (as described in DRP-00-00).
Stack Initialized
Initialize "Data Butler"
Execution at: 2018-04-26 11:01:47
Pass
Initialize "Data Butler"
A “Data Butler” will be initialized to access the repository.
Data Butler initialized
Execution
Execution at: 2018-04-26 11:05:47
Pass
Execution
Scripts from the pipe_analysis package will be run on every visit to check for the presence of data products and make plots.
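A sketch of one such run, assuming pipe_analysis's visitAnalysis.py driver and hypothetical repository, rerun, and visit values:

    # Run visit-level QA for one visit; in practice repeated over every visit
    visitAnalysis.py /datasets/hsc/repo --rerun RC/w_2017_44/DM-12388 --id visit=1228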
The week 44 reprocessing of the Hyper Suprime-Cam RC1 dataset was used
to execute this test case. Details of this reprocessing—including
failures, which are acceptable per the test specification—are available
at DM-12388.
Scientific assessment was carried out using the qa-image-browser.ipynb Jupyter notebook, made available from https://github.com/lsst-dm/pipe_analysis/. The version of the notebook from commit 8705ef7 was used.
All plots produced by that notebook were scrutinized by the test team.
It was noted that:
- When comparing aperture corrections across photometry algorithms, some scatter was observed at the bright end, and the narrow-band (NB0921) observations had more outliers than others.
  - This was assessed as falling within normal tolerances, and no further action is required.
- When comparing photometric measurements with the reference catalog, a significant (20 mmag) offset was observed in tracts 8766 and 8767 in the R band.
  - This offset is regarded as falling within normal tolerances, but worthy of further investigation.
  - Ticket DM-13056 has been filed.
- When comparing astrometric measurements with the reference catalog, a small but systematic offset was observed in tracts 8766 and 8767 in the I band, and in tract 9813 in the R band.
  - This offset is regarded as falling within normal tolerances, but worthy of further investigation.
  - Ticket DM-13057 has been filed.
The test team agreed that all measurements fall within acceptable tolerances, and therefore regards the test case as having been passed.
DM-13056 and DM-13057 will be scheduled for further investigation as
part of the regular development cycle.