@brianv0
Last active April 26, 2018 21:30
Test Report Output

DRP-00-00: Installation of the Data Release Production science payload

Execution at: 2018-04-26 09:51:18

Status

Pass

Environment

DM Illinois dev01

Test Script

Description

Prepare for release installation

Execution at: 2018-04-25 14:56:05

Status

Pass

Description

Prepare for release installation

Expected Result

The environment is set up.

Test Data

Set up the GPFS filesystem accessible at /software on lsst-dev01 following the instructions at:
https://pipelines.lsst.io/install/newinstall.html.

  • download newinstall.sh (curl)
  • execute newinstall.sh


This will prepare the environment for the latest version of the stack, currently 15.0.

Comment

Activity done:

  • download of newinstall.sh using curl
  • execution of newinstall.sh with option -ct
  • setup of the environment
  • download of the release using eups


The steps need to be reviewed and made more general.
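
For reference, a minimal shell sketch of the activity above; the newinstall.sh URL and the eups tag name are assumptions based on the pipelines.lsst.io instructions of that era, not details recorded in this report:

# Prepare a stack installation under /software (path as described in the Test Data; illustrative).
mkdir -p /software/lsstsw/stack3 && cd /software/lsstsw/stack3

# Download newinstall.sh with curl and run it with the -ct options noted in the comment above.
curl -OL https://raw.githubusercontent.com/lsst/lsst/master/scripts/newinstall.sh
bash newinstall.sh -ct

# Enable the new environment and download the 15.0 release with eups (tag name assumed).
source loadLSST.bash
eups distrib install -t v15_0 lsst_distrib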

Description

Load environment

Execution at: 2018-04-25 15:20:08

Status

Pass

Description

Load environment

Expected Result

The corresponding version of lsst_distrib is downloaded in the local filesystem, currently 15.0.

Test Data

The lsst_distrib top level package will be enabled:

  • source /software/lsstsw/stack3/loadLSST.bash
  • setup lsst_distrib

Comment

This actually requires:

  • running of shebangtron
  • setup lsst_distrib


The steps need to be reviewed and made more general.
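
A minimal sketch of this step, combining the listed commands with the shebangtron run noted in the comment; the shebangtron URL is an assumption based on the lsst/shebangtron repository, not a command recorded in this report:

# Enable the installed stack.
source /software/lsstsw/stack3/loadLSST.bash

# Fix interpreter paths in the downloaded binaries (shebangtron), then set up lsst_distrib.
curl -sSL https://raw.githubusercontent.com/lsst/shebangtron/master/shebangtron | python
setup lsst_distrib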

Description

Download "LSST Stack Demo"

Execution at: 2018-04-25 15:22:30

Status

Pass

Description

Download "LSST Stack Demo"

Test Data

The “LSST Stack Demo” package will be downloaded onto the test system from

  • https://github.com/lsst/lsst_dm_stack_demo/releases/tag/15.0

and uncompressed.

Note that this has to be consistent with the stack version at points 1 and 2.
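
As a sketch of this step, assuming GitHub's standard archive layout for that release tag (the exact tarball path is not recorded in this report):

# Fetch and unpack the demo package matching the installed stack version (15.0).
curl -OL https://github.com/lsst/lsst_dm_stack_demo/archive/15.0.tar.gz
tar xzf 15.0.tar.gz
cd lsst_dm_stack_demo-15.0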

Description

Demo Execution

Execution at: 2018-04-25 18:16:52

Status

Pass

Description

Demo Execution

Expected Result

The string “Ok.” should be returned.

Test Data

The demo package will be executed by following the instructions in its “README” file.

Comment

This actually requires:

  • create a folder

  • download the test dataset (using curl)

  • setup obs_sdss

  • run the demo

  • ./bin/compare detected-sources.txt


The result is: Ok.


Test steps need to be reviewed.
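
A sketch of the run itself, assuming the demo directory unpacked in the previous step; setup obs_sdss and ./bin/compare are quoted from the comment above, while the ./bin/demo.sh entry point is an assumption based on the package README:

# From inside the unpacked lsst_dm_stack_demo directory:
setup obs_sdss                        # enable the SDSS obs package used by the demo
./bin/demo.sh                         # run the demo (entry-point name assumed)
./bin/compare detected-sources.txt    # should print "Ok."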

Description

Preparing LSST-VC

Execution at: 2018-04-26 09:50:05

Status

Pass

Description

Preparing LSST-VC

Test Data

A shell on an LSST-VC compute node will now be obtained by executing:

  • $ srun -I --pty bash

Comment

This step implies repeating steps 1 to 4 above on a different platform. It is marked here as passed; however, it requires running the same test case twice in different environments.

Description

Demo Execution on LSST-VC

Execution at: 2018-04-26 09:51:18

Status

Pass

Description

Demo Execution on LSST-VC

Expected Result

The same result should be obtained.

Test Data

The demo package will be executed on the compute node.

Comment

This step is a repetition of steps 1 to 4 above on a different platform. It is marked here as passed; however, it requires running the same test case twice in different environments.

DRP-00-05: Execution of the DRP Science Payload by the Batch Production Service

Execution at: 2018-04-26 10:22:04

Status

Pass

Environment

DM Verification Cluster

Test Script

Description

Setup

Execution at: 2018-04-26 10:13:23

Status

Pass

Description

Setup

Test Data

  1. The LSST Science Pipelines and the DESDM Framework, plugins, and integration codes as described in §4.2.4.2 have already been installed. The Operator merely sets up the expanded stack using eups.

  2. Input raw and calibration data must exist in the Data Backbone. If not, the data will be ingested into Data Backbone.

  3. The operator tags and blacklists input data as appropriate for the test (§4.2.5).

  4. Given the LSST Science Pipelines version, the operator will generate the full config files and schema files (§4.2.7), which are then ingested into the Data Backbone.

  5. Write a DRP pipeline workflow definition file from scratch or modify an existing file from GitHub, making its operations- and dataset-specific inputs match this test.

    • (a) For LDM-503-2, the pipeline workflow definition file is written in a workflow control language (wcl) format as used by the DESDM Framework.
  6. Make special hardware requests (e.g., disk or compute node reservations) if needed.

Comment

Setup done as requested. The test case should be updated to avoid cross references; each step should be self-contained in the test run context.
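
As an illustration of item 1 only (setting up the pre-installed stack with eups); the stack path and tag are placeholders, and the DESDM Framework packages would be set up analogously:

# Item 1: the operator sets up the already-installed stack with eups.
source /path/to/stack/loadLSST.bash   # path depends on the verification cluster installation
setup lsst_distrib -t v15_0           # tag is a placeholder for the payload under test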

Description

Execution

Execution at: 2018-04-26 10:15:31

Status

Pass

Description

Execution

Test Data

  1. If HTCondor processes are not already running, start HTCondor processes on compute nodes. This step makes the compute nodes join the HTCondor Central Manager to create a working HTCondor Pool.

  2. The execution for each tract of the input data in §4.2.5 will be submitted to the hardware in §4.2.4.1 using the configuration in §4.2.7.

  3. During execution, the operator will use software to demonstrate the ability to check the processing status.

    • (a) For LDM-503-2, the available Batch Production Service monitoring software consists of two commands: one to summarize currently submitted processing and one to get more details of a single submission.
  4. If the processing attempt completes, the attempt is marked as completed and tagged as potential for the next test steps. These campaign tags are used to make pre-release QA queries simpler.

  5. If the processing attempt fails, the attempt is marked as failed.

  6. If the processing attempt fails due to certain infrastructure configuration or transient instability (e.g., network blips), the processing of the tract can be tried again after the problem is resolved.

Comment

Execution completed without problems.
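
The two Batch Production Service monitoring commands are not named in this report; as a generic illustration of items 1 and 3 above, plain HTCondor tools give an equivalent view of the pool and of submitted processing:

# Confirm that compute nodes have joined the HTCondor pool (item 1).
condor_status

# Summarize currently submitted processing, then inspect a single submission in detail (item 3).
condor_q
condor_q -long 1234    # 1234 is a placeholder cluster id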

Description

Basic Data Completeness and Integrity Checks

Execution at: 2018-04-26 10:22:04

Status

Pass

Description

Basic Data Completeness and Integrity Checks

Test Data

  1. When the execution finishes, the success of the execution will be verified by checking the existence of the expected output data.

    • (a) For each of the expected data product types (listed in §4.3.2) and each of the expected units (visits, patches, etc.), verify the data product is in the Data Backbone and has a file size greater than zero via DB queries.
    • (b) Verify that the physical and location information in the Data Backbone DB matches the Data Backbone filesystem and vice versa.
  2. Check that each data product type has appropriate metadata saved for each file, as defined in §4.2.7.

  3. Check provenance

    • (a) Verify that each file can be linked with the step and processing attempt that created it via the Data Backbone.
    • (b) Verify that the information linking input files to each step was saved to the Oracle database.
  4. Check runtime metrics, such as the number of executions of each code, wallclock per step, wallclock per tract, etc.

Comment

1. Check the existence of the expected files: PASSED

  • (a) Table 1 lists all release data products per tract. For each product, we provide the expected number of files to be generated (where available) and the number of files generated in practice. Each of these files was checked to ensure that it contained some data (i.e., the size of the file was non-zero).
  • (b) To verify that the physical locations of files on the filesystem match the location information tracked in the Data Backbone database tables, we used the tool compare_db.py from the DESDM FileMgmt package. Paths, file sizes, and checksums (MD5) were compared. The database and filesystem matched, with 50656 files in tract 8766, 52041 files in tract 8767, and 273375 files in tract 9813.

2. Check existence of the expected metadata: PASSED

The following metadata is expected to have been saved:

  • calexp: tract, visit, filter, ccd
  • deepCoadd_calexp: tract, patch, filter

It was verified that the above-mentioned metadata had non-NULL values stored for the data products in the Data Backbone database tables.

3. Check existence of the expected provenance: PASSED

  • (a) For each file, the provenance system was checked to ensure that there were no:
    • i. Missing direct association of output files with the processing attempt;
    • ii. Missing “was-generated-by” association (per the Open Provenance Model, [5]);
    • iii. Instances in which the “was-generated-by” association did not belong to the specified processing attempt.
  • (b) Via manual spot checks, it was verified that information linking input files to each step was saved to the Data Backbone database tables.

4. Check existence of runtime metrics: PASSED

  • (a) Table 2 shows wall-clock time for running the entire pipeline for each tract and total CPU time per execution.
  • (b) Table 3 provides details of execution time and memory usage at a per-process level.
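
As an illustration of the filesystem side of check 1 (the Data Backbone DB queries and the compare_db.py runs themselves are not reproduced here), a hedged sketch that flags zero-size outputs and computes the MD5 checksums compared against the database records; the output path is a placeholder:

# Flag any zero-size files among the products written for one tract (path is illustrative).
find /path/to/outputs/tract_8766 -type f -size 0 -print

# Compute MD5 checksums for comparison with the values recorded in the Data Backbone tables.
find /path/to/outputs/tract_8766 -type f -exec md5sum {} + > tract_8766_checksums.txt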

DRP-00-10: Data Release Includes Required Data Products

Execution at: 2018-04-26 10:32:10

Status

Pass

Environment

DM Illinois dev01

Test Script

Description

Initialize Stack

Execution at: 2018-04-26 10:29:56

Status

Pass

Description

Initialize Stack

Test Data

The DM Stack shall be initialized using the loadLSST script (as described in DRP-00-00)

Comment

DM Stack initialized

Description

Initialize "Data Butler"

Execution at: 2018-04-26 10:30:43

Status

Pass

Description

Initialize "Data Butler"

Test Data

A “Data Butler” will be initialized to access the repository.

Comment

Step marked as passed, but it seems this should be a prerequisite, like the previous step 1.

Description

Execution

Execution at: 2018-04-26 10:32:10

Status

Pass

Description

Execution

Test Data

For each of the expected data product types (listed in Test Items - §4.3.2) and each of the expected units (PVIs, coadds, etc.), the data product will be retrieved from the Butler and verified to be non-empty.

Comment

The week 44 reprocessing of the Hyper Suprime-Cam RC1 dataset was used to execute this test case. Details of this reprocessing, including failures (which are acceptable per the test specification), are available at DM-12388. All expected products were found to exist.
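
As an illustration of the per-product check described under Test Data, a sketch using the Gen2 Butler API that shipped with the 15.0-era stack; the repository path, dataset type, and data id are placeholders, not values taken from this report:

# Verify that one expected data product exists and is non-empty via the Data Butler.
python - <<'EOF'
from lsst.daf.persistence import Butler   # Gen2 API of the 15.0-era stack

butler = Butler("/path/to/repo/rerun/w_2017_44")          # placeholder repository path
data_id = dict(tract=8766, patch="5,5", filter="HSC-R")   # placeholder data id

assert butler.datasetExists("deepCoadd_calexp", data_id)
exp = butler.get("deepCoadd_calexp", data_id)
print("retrieved deepCoadd_calexp with dimensions", exp.getDimensions())
EOF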

DRP-00-15: Scientific Verification of Source Catalog

Execution at: 2018-04-26 11:05:47

Status

Pass

Environment

DM Illinois dev01

Test Script

Description

Initialize Stack

Execution at: 2018-04-26 11:01:46

Status

Pass

Description

Initialize Stack

Test Data

The DM Stack shall be initialized using the loadLSST script (as described in DRP-00-00).

Comment

Stack Initialized

Description

Initialize "Data Butler"

Execution at: 2018-04-26 11:01:47

Status

Pass

Description

Initialize "Data Butler"

Test Data

A “Data Butler” will be initialized to access the repository.

Comment

Data Butler initialized

Description

Execution

Execution at: 2018-04-26 11:05:47

Status

Pass

Description

Execution

Test Data

Scripts from the pipe_analysis package will be run on every visit to check for the presence of data products and make plots.
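
A hedged sketch of how such a per-visit script might be invoked; the script name visitAnalysis.py, the rerun, and the data id are assumptions based on pipe_analysis conventions, not commands recorded in this report:

# Run a per-visit QA script from pipe_analysis against the reprocessed repository (placeholders).
setup pipe_analysis
visitAnalysis.py /path/to/repo --rerun w_2017_44 --id visit=1228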

Comment

The week 44 reprocessing of the Hyper Suprime-Cam RC1 dataset was used to execute this test case. Details of this reprocessing—including failures, which are acceptable per the test specification—are available at DM-12388.

Scientific assessment was carried out using the qa-image-browser.ipynb Jupyter notebook, made available from https://github.com/lsst-dm/pipe_analysis/. The version of the notebook from commit 8705ef7 was used.

All plots produced by that notebook were scrutinized by the test team. It was noted that:

  • When comparing aperture corrections across photometry algorithms, some scatter was observed at the bright end, and the narrow-band (NB9021) observations had more outliers than others.
    • This was assessed as falling within normal tolerances, and no further action is required.
  • When comparing photometric measurements with the reference catalog, a significant (20 mmag) offset was observed in tracts 8766 and 8767 in the R band.
    • This offset is regarded as falling within normal tolerances, but worthy of further investigation.
    • Ticket DM-13056 has been filed.
  • When comparing astrometric measurements with the reference catalog, a small but systematic offset was observed in tracts 8766 and 8767 in the I band, and in tract 9813 in the R band.
    • This offset is regarded as falling within normal tolerances, but worthy of further investigation.
    • Ticket DM-13057 has been filed.


The test team agreed that all measurements fall within acceptable tolerances, and therefore regard the test case as having been passed. DM-13056 and DM-13057 will be scheduled for further investigation as part of the regular development cycle.
