Julien
## Rationale
The current CI pipeline has many disadvantages:
1. We have a single Docker image for everything
2. This image is huge due to scientific dependencies
3. integration tests...
**Describe the issue** In addition to the HBase tables that contain information on alerts, we should have tables containing information on objects. The ones I can think of:
- SSO
-...
**Describe the issue** With `pandas==1.1.4` (https://github.com/astrolabsoftware/fink-broker/pull/558) and `pandas==1.1.5` (default), `raw2science` emits the following warning (probably when applying quality cuts):
```
pandas/core/computation/expressions.py:204: UserWarning: evaluating in Python space because the '*' operator is not supported...
```
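For context, this warning comes from pandas falling back from `numexpr` to plain Python evaluation for operators `numexpr` does not support on a given dtype. A minimal, hypothetical sketch (not the fix adopted in fink-broker) is to disable the `numexpr` engine via the real pandas option `compute.use_numexpr`, so the fallback and its warning never occur; the data below is purely illustrative:

```python
import pandas as pd

# Hypothetical workaround: force pandas to always evaluate in Python
# space, so the numexpr fallback (and its UserWarning) never triggers.
pd.set_option("compute.use_numexpr", False)

df = pd.DataFrame({"flux": [1.0, 2.0], "good": [True, False]})

# Multiplying boolean Series is one of the operations numexpr does not
# support, which is the kind of expression that produced the warning.
result = df["good"] * df["good"]
print(result.tolist())
```

The trade-off is that large elementwise expressions lose the `numexpr` speed-up, so silencing the warning with `warnings.filterwarnings` may be preferable if performance matters.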
**Describe the issue** When the alert schema evolves, we should be able to update the HBase table without re-pushing all data. After #541 is done, the...
**Describe the issue** The integration testing deployment of the Rubin alert stream is now available. Information for the connection can be found at https://github.com/lsst-dm/sample_alert_info/blob/main/doc/alert_stream_integration_endpoint.md Concretely, this means for us:
-...
**Describe the issue** At the end of each night, we save the `objectId` of candidates sent to TNS on disk, locally. These ids are then read on the following nights to avoid...
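The bookkeeping described above can be sketched as follows. This is a hypothetical illustration, not the actual fink-broker implementation: the function names, the JSON file format, and the example `objectId` values are all assumptions.

```python
import json
from pathlib import Path


def load_sent_ids(path: Path) -> set:
    """Return the set of objectIds already sent to TNS (empty if no record)."""
    if path.exists():
        return set(json.loads(path.read_text()))
    return set()


def save_sent_ids(ids, path: Path) -> None:
    """Merge newly sent objectIds into the on-disk record."""
    known = load_sent_ids(path)
    path.write_text(json.dumps(sorted(known | set(ids))))


def filter_new(candidates, path: Path) -> list:
    """Keep only candidates whose objectId was never sent to TNS."""
    known = load_sent_ids(path)
    return [c for c in candidates if c["objectId"] not in known]
```

One weakness of a local file, which may be what motivates this issue, is that the record is tied to a single machine: it is lost if the deployment moves, which argues for storing these ids in a shared backend instead.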
**Describe the issue** After the observation night, we store data in HBase. There is the main table, and index tables. While the index tables contain only a subset of the data, the...
**Describe the issue** Take one alert from the parquet DB and compare its `candid` to the same alert on the Science Portal. They are different.
**Describe the issue** For Fink to work, we need to install Java 8, Kafka, and HBase. There are currently scripts to ease their local installation (for tests), but the version numbers...
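One common way to manage this, sketched below under assumptions (the file name, variable names, and version numbers are all illustrative, not the values used by fink-broker), is to keep every tool version in a single file that each installation script sources, so bumping a version is a one-line change:

```shell
#!/usr/bin/env bash
# versions.sh (hypothetical): single source of truth for tool versions.
# Environment variables override the defaults, e.g. for CI experiments.
KAFKA_VERSION="${KAFKA_VERSION:-2.8.1}"
HBASE_VERSION="${HBASE_VERSION:-2.4.9}"
JAVA_VERSION="${JAVA_VERSION:-8}"

echo "java=${JAVA_VERSION} kafka=${KAFKA_VERSION} hbase=${HBASE_VERSION}"
```

An installation script would then start with something like `source versions.sh` and refer only to these variables when downloading archives.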