LVV-T17 - AG-00-00: Installation of the Alert Generation science payload.
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-139 - DMS-REQ-0308-V-01: Software Architecture to Enable Community Re-Use
None.
- Environmental needs - Software
All prerequisite packages listed at https://pipelines.lsst.io/install/prereqs/centos.html must be available on the test system and on the LSST-VC compute node.
- Input specification
No input data is required for this test case.
- Output specification
The Alert Generation science payload will be made available on a shared filesystem accessible from LSST-VC compute nodes.
Step | Description |
---|---|
1 | Description Release 16.0 of the LSST Science Pipelines will be installed into the GPFS filesystem accessible at /software on lsst-dev01, following the instructions at https://pipelines.lsst.io/install/newinstall.html (a consolidated sketch of steps 1-2 follows this table). |
2 | Description The lsst_distrib top-level package will be enabled:
```bash
source /software/lsstsw/stack3/loadLSST.bash
setup lsst_distrib
``` |
3 | Description The “LSST Stack Demo” package will be downloaded onto the test system from https://github.com/lsst/lsst_dm_stack_demo/releases/tag/16.0 and uncompressed. |
4 | Description The demo package will be executed by following the instructions in its “README” file. The string “Ok.” should be returned. Specifically, we execute:
```bash
setup obs_sdss
./bin/demo.sh
python bin/compare expected/Linux64/detected-sources.txt
``` |
5 | Description A shell on an LSST-VC compute node will now be obtained by executing:
```bash
srun -I --pty bash
``` |
6 | Description The demo package will be executed on the compute node and the same result obtained. |
7 | Description The Alert Production datasets and packages are not yet part of lsst_distrib and so must be installed separately. They will be installed as follows on the GPFS filesystem:
```bash
setup git_lfs
git clone https://github.com/lsst/ap_verify_hits2015.git
export AP_VERIFY_HITS2015_DIR=$PWD/ap_verify_hits2015
cd $AP_VERIFY_HITS2015_DIR
setup -r .
cd -
setup obs_decam
git clone https://github.com/lsst-dm/ap_association
cd ap_association
setup -k -r .
scons
cd -
git clone https://github.com/lsst-dm/ap_pipe
cd ap_pipe
setup -k -r .
scons
cd -
git clone https://github.com/lsst-dm/ap_verify
cd ap_verify
setup -k -r .
scons
cd -
```
and any errors or failures reported. |
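For convenience, a minimal consolidated sketch of steps 1-2 as a single shell session; the newinstall.sh URL, flags, and eups tag follow the conventions of the 16.0-era install documentation and should be checked against https://pipelines.lsst.io/install/newinstall.html before use:
```bash
# Sketch only: paths follow the shared GPFS layout assumed above.
mkdir -p /software/lsstsw/stack3
cd /software/lsstsw/stack3
curl -OL https://raw.githubusercontent.com/lsst/lsst/16.0/scripts/newinstall.sh
bash newinstall.sh -ct        # -c continue without prompting, -t use binary tarballs (assumed flags)
source loadLSST.bash
eups distrib install -t v16_0 lsst_distrib   # assumed release tag for 16.0
setup lsst_distrib
```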
LVV-T18 - AG-00-05: Alert Generation Produces Required Data Products
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-29 - DMS-REQ-0069-V-01: Processed Visit Images
- LVV-7 - DMS-REQ-0010-V-01: Difference Exposures
- LVV-100 - DMS-REQ-0269-V-01: DIASource Catalog
- LVV-102 - DMS-REQ-0271-V-01: DIAObject Catalog
LVV-T17 (AG-00-00)
- Environmental needs - Software
Release 16.0 of the DM Software Stack will be pre-installed (following the procedure described in AG-00-00).
- Input specification
A complete processing of the DECam “HiTS” dataset, as defined at
https://dmtn-039.lsst.io/ and
https://github.com/lsst/ap\_verify\_hits2015, through the Alert
Generation science payload.
This dataset shall be made available in a standard LSST data repository,
accessible via the “Data Butler”.
It is not required that all combinations of visit and CCD have been
processed successfully: a number of failures are expected. However,
documentation to describe processing failures should be provided.
- Output specification
None.
Step | Description |
---|---|
1 | Description The DM Stack and Alert Processing packages shall be initialized as described in LVV-T17 (AG-00-00). |
2 | Description The alert generation processing will be executed using the verification cluster:
```bash
python ap_verify/bin/prepare_demo_slurm_files.py
# At present we must run a single ccd+visit to handle ingestion before
# parallel processing can begin
./ap_verify/bin/exec_demo_run_1ccd.sh 410915 25
ln -s ap_verify/bin/demo_run.sl
ln -s ap_verify/bin/demo_cmds.conf
sbatch demo_run.sl
```
and any errors or failures reported. |
3 | Description A “Data Butler” will be initialized to access the repository. |
4 | Description For each of the expected data product types (listed in §4.2.2) and each of the expected units (PVIs, catalogs, etc.), the data product will be retrieved from the Butler and verified to be non-empty. |
5 | Description DIAObjects are currently stored only in a database, without shims to the Butler, so the existence of the database table and its non-empty contents will be verified by directly accessing it using sqlite3 and executing appropriate SQL queries (a sketch follows this table). |
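A minimal sketch of step 5, assuming the Level 1 association database is an SQLite file; the path and table name (association.db, DiaObject) are illustrative and should be replaced with those produced by the actual run:
```bash
# Sketch only: file and table names are assumptions.
sqlite3 /path/to/output/repo/association.db ".tables"
sqlite3 /path/to/output/repo/association.db "SELECT COUNT(*) FROM DiaObject;"   # expect > 0
```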
LVV-T19 - AG-00-10: Scientific Verification of Processed Visit Images
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-29 - DMS-REQ-0069-V-01: Processed Visit Images
- LVV-158 - DMS-REQ-0327-V-01: Background Model Calculation
- LVV-12 - DMS-REQ-0029-V-01: Generate Photometric Zeropoint for Visit Image
- LVV-30 - DMS-REQ-0070-V-01: Generate PSF for Visit Images
- LVV-13 - DMS-REQ-0030-V-01: Generate WCS for Visit Images
- LVV-31 - DMS-REQ-0072-V-01: Processed Visit Image Content
LVV-T17 (AG-00-00)
LVV-T18 (AG-00-05)
- Environmental needs - Software
Release 16.0 of the DM Software Stack will be pre-installed (following the procedure described in AG-00-00).
- Input specification
A complete processing of the DECam “HiTS” dataset, as defined at
https://dmtn-039.lsst.io/ and
https://github.com/lsst/ap\_verify\_hits2015, through the Alert
Generation science payload.
This dataset shall be made available in a standard LSST data repository,
accessible via the “Data Butler”.
It is not required that all combinations of visit and CCD have been
processed successfully: a number of failures are expected. However,
documentation to describe processing failures should be provided.
- Output specification
None.
Step | Description |
---|---|
1 | Description The DM Stack shall be initialized using the loadLSST script (as described in LVV-T17 - AG-00-00). |
2 | Description A “Data Butler” will be initialized to access the repository. |
3 | Description For each processed CCD, the PVI will be retrieved from the Butler, and the existence of all components described in §4.3.2 will be verified. |
4 | Description Five sensors will be chosen at random from each of two visits and inspected by eye for unmasked artifacts. |
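A minimal sketch of steps 2-3, assuming the Gen2 Butler API shipped with this release; the repository path and dataId are placeholders, and the component checks are indicative rather than exhaustive:
```bash
# Sketch only: repository path and dataId are placeholders.
python - <<'EOF'
from lsst.daf.persistence import Butler  # Gen2 Butler, as shipped with this release

butler = Butler('/path/to/output/repo')                  # placeholder repository path
calexp = butler.get('calexp', visit=410915, ccdnum=25)   # placeholder dataId
# A PVI should carry an image plus PSF, WCS, and photometric calibration.
assert calexp.getPsf() is not None
assert calexp.getWcs() is not None
assert calexp.getCalib() is not None                     # zeropoint via getCalib().getFluxMag0()
print(calexp.getDimensions())
EOF
```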
LVV-T20 - AG-00-15: Scientific Verification of Difference Images
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-7 - DMS-REQ-0010-V-01: Difference Exposures
- LVV-32 - DMS-REQ-0074-V-01: Difference Exposure Attributes
LVV-T17 (AG-00-00)
LVV-T18 (AG-00-05)
- Environmental needs - Software
Release 16.0 of the DM Software Stack will be pre-installed (following the procedure described in AG-00-00).
- Input specification
A complete processing of the DECam “HiTS” dataset, as defined at
https://dmtn-039.lsst.io/ and
https://github.com/lsst/ap_verify_hits2015, through the Alert
Generation science payload.
This dataset shall be made available in a standard LSST data repository,
accessible via the “Data Butler”.
It is not required that all combinations of visit and CCD have been
processed successfully: a number of failures are expected. However,
documentation to describe processing failures should be provided.
- Output specification
None.
Step | Description |
---|---|
1 | Description The DM Stack shall be initialized using the loadLSST script (as described in LVV-T17 - AG-00-00). |
2 | Description A “Data Butler” will be initialized to access the repository. |
3 | Description For each processed CCD, the difference image will be retrieved from the Butler, and the existence of all components described in §4.4.2 will be verified. |
4 | Description Five sensors will be chosen at random from each of two visits and the masks of the input and difference images compared by eye. |
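A minimal sketch of steps 3-4, again assuming the Gen2 Butler; the dataset type name deepDiff_differenceExp is an assumption based on the AP pipeline conventions of this period:
```bash
# Sketch only: dataset type name and dataId are assumptions.
python - <<'EOF'
from lsst.daf.persistence import Butler

butler = Butler('/path/to/output/repo')                  # placeholder repository path
dataId = dict(visit=410915, ccdnum=25)                   # placeholder dataId
diffim = butler.get('deepDiff_differenceExp', dataId)    # assumed dataset type name
calexp = butler.get('calexp', dataId)
# Compare the mask planes defined on the input and difference images.
print(sorted(diffim.getMask().getMaskPlaneDict()))
print(sorted(calexp.getMask().getMaskPlaneDict()))
EOF
```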
LVV-T21 - AG-00-20: Scientific Verification of DIASource Catalog
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-100 - DMS-REQ-0269-V-01: DIASource Catalog
- LVV-101 - DMS-REQ-0270-V-01: Faint DIASource Measurements
- LVV-178 - DMS-REQ-0347-V-01: Measurements in catalogs
- LVV-162 - DMS-REQ-0331-V-01: Computing Derived Quantities
LVV-T17 (AG-00-00)
LVV-T18 (AG-00-05)
- Environmental needs - Software
Release 16.0 of the DM Software Stack will be pre-installed (following the procedure described in AG-00-00).
- Input specification
A complete processing of the DECam “HiTS” dataset, as defined at
https://dmtn-039.lsst.io/ and
https://github.com/lsst/ap_verify_hits2015, through the Alert
Generation science payload.
This dataset shall be made available in a standard LSST data repository,
accessible via the “Data Butler”.
It is not required that all combinations of visit and CCD have been
processed successfully: a number of failures are expected. However,
documentation to describe processing failures should be provided.
- Output specification
None.
Step | Description |
---|---|
1 | Description The DM Stack shall be initialized using the loadLSST script (as described in LVV-T17 - AG-00-00). |
2 | Description A “Data Butler” will be initialized to access the repository. |
3 | Description DIASource records will be accessed by querying the Butler, then examined interactively at a Python prompt. |
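A minimal sketch of step 3, assuming the Gen2 Butler and the dataset type name deepDiff_diaSrc (an assumption):
```bash
# Sketch only: dataset type name and dataId are assumptions.
python - <<'EOF'
from lsst.daf.persistence import Butler

butler = Butler('/path/to/output/repo')                        # placeholder repository path
src = butler.get('deepDiff_diaSrc', visit=410915, ccdnum=25)   # assumed dataset type name
print(len(src), 'DIASource records')
print(src.getSchema())                                         # lists the measured columns
EOF
```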
LVV-T22 - AG-00-25: Scientific Verification of DIAObject Catalog
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Approved | Normal | Test | False | Eric Bellm |
- LVV-116 - DMS-REQ-0285-V-01: Level 1 Source Association
- LVV-102 - DMS-REQ-0271-V-01: DIAObject Catalog
- LVV-103 - DMS-REQ-0272-V-01: DIAObject Attributes
- LVV-178 - DMS-REQ-0347-V-01: Measurements in catalogs
- LVV-162 - DMS-REQ-0331-V-01: Computing Derived Quantities
LVV-T17 (AG-00-00)
LVV-T18 (AG-00-05)
- Environmental needs - Software
Release 16.0 of the DM Software Stack will be pre-installed (following the procedure described in AG-00-00).
- Input specification
A complete processing of the DECam “HiTS” dataset, as defined at
https://dmtn-039.lsst.io/ and
https://github.com/lsst/ap_verify_hits2015, through the Alert
Generation science payload.
This dataset shall be made available in a standard LSST data repository,
accessible via the “Data Butler”.
It is not required that all combinations of visit and CCD have been
processed successfully: a number of failures are expected. However,
documentation to describe processing failures should be provided.
- Output specification
None.
Step | Description |
---|---|
1 | Description The DM Stack shall be initialized using the loadLSST script (as described in LVV-T17 - AG-00-00). |
2 | Description sqlite3 or Python’s sqlalchemy module will be used to access the Level 1 database. |
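A minimal sketch of step 2, assuming the Level 1 database is an SQLite file; the file name and table name are illustrative:
```bash
# Sketch only: file and table names are assumptions.
sqlite3 /path/to/output/repo/association.db <<'EOF'
.tables
PRAGMA table_info(DiaObject);     -- check the attribute columns
SELECT COUNT(*) FROM DiaObject;   -- expect > 0 associated objects
SELECT * FROM DiaObject LIMIT 5;
EOF
```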
LVV-T216 - Installation of the Alert Distribution payloads.
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Draft | Normal | Test | False | Eric Bellm |
- LVV-139 - DMS-REQ-0308-V-01: Software Architecture to Enable Community Re-Use
Step | Description |
---|---|
1 | Description Download Kafka Docker image from https://github.com/lsst-dm/alert_stream. Expected Result Runs without error. |
2 | Description Change to the alert_stream directory and build the docker image. Expected Result Runs without error. |
3 | Description Register it with Kubernetes:
```bash
docker push lsst-kub001:5000/alert_stream
```
Expected Result Runs without error. |
4 | Description From the alert_stream/kubernetes directory, start Kafka and Zookeeper:
(use kubectl get pods/services between each command to check status; wait until each is "Running" before starting the next command; a sketch of the individual commands follows this table) Expected Result Runs without error. |
5 | Description Confirm Kafka and Zookeeper are listed when running kubectl get pods and kubectl get services. Expected Result Output should be similar to:
```
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
kafka-768ddf5564-xwgvh      1/1     Running   0          31s
zookeeper-f798cc548-mgkpn   1/1     Running   0          1m

kubectl get services
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
kafka       ClusterIP   10.105.19.124   <none>        9092/TCP    6s
zookeeper   ClusterIP   10.97.110.124   <none>        32181/TCP   2m
``` |
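Step 4 refers to a sequence of kubectl commands; a minimal sketch, assuming the alert_stream/kubernetes directory provides Deployment and Service manifests for Zookeeper and Kafka (the manifest file names here are hypothetical):
```bash
# Sketch only: manifest file names are hypothetical; check alert_stream/kubernetes.
cd alert_stream/kubernetes
kubectl create -f zookeeper-deployment.yaml
kubectl create -f zookeeper-service.yaml
kubectl get pods          # wait until zookeeper reports Running
kubectl create -f kafka-deployment.yaml
kubectl create -f kafka-service.yaml
kubectl get services      # kafka and zookeeper should both be listed
```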
LVV-T217 - Full Stream Alert Distribution
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Draft | Normal | Test | False | Eric Bellm |
- LVV-3 - DMS-REQ-0002-V-01: Transient Alert Distribution
The Kafka cluster and Zookeeper shall be instantiated according to the procedure described in LVV-T216.
Input data: A sample of Avro-formatted alert packets.
Multiple Kafka consumers will run and write log files to disk.
The logs will include a deserialized copy of every Nth alert as well
as a summary of the queue offsets.
Step | Description |
---|---|
1 | Description Download Kafka Docker image from https://github.com/lsst-dm/alert_stream. |
2 | Description Change to the alert_stream directory and build the docker image. |
3 | Description Register it with Kubernetes:
```bash
docker push lsst-kub001:5000/alert_stream
``` |
4 | Description From the alert_stream/kubernetes directory, start Kafka and Zookeeper:
(use kubectl get pods/services between each command to check status; wait until each is "Running" before starting the next command) |
5 | Description Confirm Kafka and Zookeeper are listed when running kubectl get pods and kubectl get services |
6 | Description Start a consumer that monitors the full stream and logs a deserialized version of every Nth packet (a sketch of this command follows this table). Expected Result Runs without error. |
7 | Description Start a producer that reads alert packets from disk and loads them into the Kafka queue (a sketch of this command follows this table). Expected Result Runs without error. |
8 | Description Determine the name of the alert sender pod with kubectl get pods, then examine its output log with kubectl logs <pod name>. Verify that alerts are being sent within 40 seconds by subtracting the timing measurements. Expected Result Similar to:
```
kubectl logs sender-7d6f98586f-nhwfj
visit: 1570. time: 1530588618.0313473
visits finished: 1 time: 1530588653.5614944
visit: 1571. time: 1530588657.0087624
visits finished: 2 time: 1530588692.506188
visit: 1572. time: 1530588696.0051727
visits finished: 3 time: 1530588731.5900314
``` |
9 | Description Determine the name of the consumer pod with kubectl get pods, then examine its output log with kubectl logs <pod name>. The packet log should show deserialized alert packets with contents matching the input packets. Expected Result Similar to:
```
{'alertId': 12132024420, 'l1dbId': 71776805594116,
 'diaSource': {'diaSourceId': 73499448928374785, 'ccdVisitId': 2020011570,
  'diaObjectId': 71776805594116, 'ssObjectId': None, 'parentDiaSourceId': None,
  'midPointTai': 59595.37041, 'filterName': 'y', 'ra': 172.24912810036074,
  'decl': -80.64214929176521,
  'ra_decl_Cov': {'raSigma': 0.0003428002819418907,
   'declSigma': 0.00027273103478364646, 'ra_decl_Cov': 0.000628734880592674},
  'x': 2979.08837890625, 'y': 3843.328857421875,
  'x_y_Cov': {'xSigma': 0.6135467886924744, 'ySigma': 0.77132648229599,
   'x_y_Cov': 0.0007463791407644749},
  'apFlux': None, 'apFluxErr': None, 'snr': 0.36651650071144104,
  'psFlux': 7.698232025177276e-07, 'psRa': None, 'psDecl': None,
  'ps_Cov': None, 'psLnL': None, 'psChi2': None, 'psNdata': None,
  'trailFlux': None, 'trailRa': ...
``` |
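A minimal sketch of steps 6-7, assuming consumer and sender Deployment manifests exist in alert_stream/kubernetes (the file names are hypothetical):
```bash
# Sketch only: deployment manifest names are hypothetical.
cd alert_stream/kubernetes
kubectl create -f consumer-deployment.yaml   # step 6: full-stream consumer
kubectl create -f sender-deployment.yaml     # step 7: producer reading alert packets from disk
kubectl get pods                             # note the sender/consumer pod names for steps 8-9
```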
LVV-T218 - Simple Filtering of the LSST Alert Stream
Version | Status | Priority | Verification Type | Critical Event | Owner |
---|---|---|---|---|---|
1 | Draft | Normal | Test | False | Eric Bellm |
- LVV-173 - DMS-REQ-0342-V-01: Alert Filtering Service
- LVV-179 - DMS-REQ-0348-V-01: Pre-defined alert filters
- LVV-174 - DMS-REQ-0343-V-01: Performance Requirements for LSST Alert Filtering Service
The Kafka cluster and Zookeeper shall be instantiated according to the procedure described in LVV-T216.
Input data: A sample of Avro-formatted alert packets derived from LSST simulations corresponding to one night of simulated LSST observing.
Step | Description |
---|---|
1 | Description Download Kafka Docker image from https://github.com/lsst-dm/alert_stream. |
2 | Description Change to the alert_stream directory and build the docker image. |
3 | Description Register it with Kubernetes:
```bash
docker push lsst-kub001:5000/alert_stream
``` |
4 | Description From the alert_stream/kubernetes directory, start Kafka and Zookeeper:
(use kubectl get pods/services between each command to check status; wait until each is "Running" before starting the next command) |
5 | Description Confirm Kafka and Zookeeper are listed when running kubectl get pods and kubectl get services |
6 | Description Start 100 consumers that consume the filtered streams and log a deserialized version of every Nth packet. Expected Result Runs without error. |
7 | Description Start 5 filter groups. Expected Result Runs without error. |
8 | Description Start a producer that reads alert packets from disk and loads them into the Kafka queue. Expected Result Runs without error. |
9 | Description Determine the name of the alert sender pod with kubectl get pods, then examine its output log with kubectl logs <pod name>. Verify that alerts are being sent within 40 seconds by subtracting the timing measurements. Expected Result Similar to:
```
kubectl logs sender-7d6f98586f-nhwfj
visit: 1570. time: 1530588618.0313473
visits finished: 1 time: 1530588653.5614944
visit: 1571. time: 1530588657.0087624
visits finished: 2 time: 1530588692.506188
visit: 1572. time: 1530588696.0051727
visits finished: 3 time: 1530588731.5900314
``` |
10 | Description Determine the names of the consumer pods with kubectl get pods, then examine their output logs with kubectl logs <pod name>. The packet logs should show deserialized alert packets with contents matching the input packets. Expected Result Similar to:
```
{'alertId': 12132024420, 'l1dbId': 71776805594116,
 'diaSource': {'diaSourceId': 73499448928374785, 'ccdVisitId': 2020011570,
  'diaObjectId': 71776805594116, 'ssObjectId': None, 'parentDiaSourceId': None,
  'midPointTai': 59595.37041, 'filterName': 'y', 'ra': 172.24912810036074,
  'decl': -80.64214929176521,
  'ra_decl_Cov': {'raSigma': 0.0003428002819418907,
   'declSigma': 0.00027273103478364646, 'ra_decl_Cov': 0.000628734880592674},
  'x': 2979.08837890625, 'y': 3843.328857421875,
  'x_y_Cov': {'xSigma': 0.6135467886924744, 'ySigma': 0.77132648229599,
   'x_y_Cov': 0.0007463791407644749},
  'apFlux': None, 'apFluxErr': None, 'snr': 0.36651650071144104,
  'psFlux': 7.698232025177276e-07, 'psRa': None, 'psDecl': None,
  'ps_Cov': None, 'psLnL': None, 'psChi2': None, 'psNdata': None,
  'trailFlux': None, 'trailRa': ...
``` |
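A minimal sketch for confirming the scale of steps 6-7, assuming the consumer and filter pods carry "consumer" and "filter" in their names (an assumption about the deployments):
```bash
# Sketch only: pod-name patterns are assumptions about the deployments.
kubectl get pods | grep -c consumer   # expect 100 filtered-stream consumers
kubectl get pods | grep -c filter     # expect 5 filter groups
# Spot-check one consumer's log for deserialized packets:
kubectl logs "$(kubectl get pods -o name | grep consumer | head -n 1)"
```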