TxGNN
TxGNN RESTful API - FastAPI with latest CUDA & PyTorch Support and Docker Compose
This PR provides an out-of-the-box implementation of a RESTful API that exposes TxGNN's functionality for zero-shot therapeutic predictions and explanations using geometric deep learning (GNNs). The API is built with FastAPI, supports CUDA for GPU acceleration, and is containerized with Docker. Additionally, the repository includes a Makefile, a docker-compose.yml, and a .env file for convenient management and deployment of the application.
Everything works after upgrading the stack to the latest versions: CUDA 12.4, PyTorch 2.4.0, DGL 2.4.0+cu124.
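The compose file itself is not reproduced in this description; the following is only a sketch of what a GPU-enabled docker-compose.yml for this stack might look like (service name, build context, and GPU reservation syntax are assumptions -- the actual file ships with the repository):

```yaml
# Hypothetical sketch -- see the repository's docker-compose.yml for the real configuration.
services:
  txgnn:
    build: .                      # image built on a CUDA 12.4 / PyTorch 2.4.0 runtime
    ports:
      - "8883:80"                 # host port 8883 -> Uvicorn listening on container port 80
    env_file:
      - .env                      # created from .env.example in the quickstart
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # expose one GPU to the container
              capabilities: [gpu]
```

The 8883-to-80 mapping matches the quickstart (API on host port 8883) and the startup logs below (Uvicorn bound to port 80 inside the container).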
TL;DR - Quickstart
- Clone the repository:

  ```shell
  git clone https://github.com/healthecosystem/TxGNN
  cd TxGNN
  cp .env.example .env
  ```

- Run the application using Make:

  ```shell
  make run
  ```

- Access the API: once the services are up, the API is locally accessible at port 8883.
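Once the services are up, readiness can be verified against the health endpoint. A minimal sketch using only the Python standard library (the base URL assumes the default port 8883 from this setup):

```python
import json
from urllib.request import urlopen

# Host port from the quickstart above; adjust if your .env overrides it.
API_BASE = "http://localhost:8883"

def check_health(base_url: str = API_BASE) -> dict:
    """GET /healthz and return the parsed JSON body."""
    with urlopen(f"{base_url}/healthz") as resp:
        return json.loads(resp.read().decode())
```

From the shell, `curl http://localhost:8883/healthz` achieves the same.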
Features
- CUDA-enabled: GPU-accelerated predictions for faster performance.
- FastAPI: A modern, fast web framework for building RESTful APIs.
- Docker: Containerized deployment for easy setup and scalability.
API Endpoints
Health Check
- Endpoint: `/healthz`
- Method: GET
- Description: Returns the health status of the API.
Predict Drug Replacement
- Endpoint: `/predict`
- Method: GET
- Parameters: `disease` (str)
- Description: Predicts a drug replacement for a given disease.
Explain Drug Replacement
- Endpoint: `/explain`
- Method: GET
- Parameters: `disease` (str), `drug` (str)
- Description: Explains why a drug is recommended as a replacement for the specified disease.
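Both prediction endpoints take their arguments as query parameters. A small sketch of how request URLs for the three endpoints compose (the disease and drug names are placeholder values; the exact identifier format TxGNN expects is not specified here):

```python
from urllib.parse import urlencode

API_BASE = "http://localhost:8883"  # host port from the quickstart

def endpoint_url(path: str, **params: str) -> str:
    """Build a full request URL for a TxGNN API endpoint."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{API_BASE}{path}{query}"

print(endpoint_url("/healthz"))
# http://localhost:8883/healthz
print(endpoint_url("/predict", disease="psoriasis"))
# http://localhost:8883/predict?disease=psoriasis
print(endpoint_url("/explain", disease="psoriasis", drug="methotrexate"))
# http://localhost:8883/explain?disease=psoriasis&drug=methotrexate
```

`urlencode` also takes care of percent-escaping disease names that contain spaces or special characters.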
Example startup logs:

```
txgnn | INFO:     Will watch for changes in these directories: ['/app']
txgnn | WARNING:  "workers" flag is ignored when reloading is enabled.
txgnn | INFO:     Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
txgnn | INFO:     Started reloader process [1] using WatchFiles
txgnn | DGL backend not selected or invalid. Assuming PyTorch for now.
txgnn | Setting the default backend to "pytorch". You can change it in the ~/.dgl/config.json file or export the DGLBACKEND environment variable. Valid options are: pytorch, mxnet, tensorflow (all lowercase)
txgnn | Number of available CUDA devices: 1
txgnn | Device 0: NVIDIA GeForce RTX 3080 Ti
txgnn | Found local copy...
txgnn | Found local copy...
txgnn | Found local copy...
txgnn | Found saved processed KG... Loading...
txgnn | Splits detected... Loading splits....
txgnn | Creating DGL graph....
txgnn | Done!
txgnn | Loading pre-trained GNN model ... <txgnn.TxGNN.TxGNN object at 0x7f576fe05e50>
txgnn | Opening file ...
txgnn | Inittialise model with config ... {'n_hid': 512, 'n_inp': 512, 'n_out': 512, 'proto': True, 'proto_num': 3, 'attention': False, 'sim_measure': 'all_nodes_profile', 'bert_measure': 'disease_name', 'agg_measure': 'rarity', 'num_walks': 200, 'walk_mode': 'bit', 'path_length': 2}
txgnn | Initialising ...
txgnn | Setting G ...
txgnn | initialize_node_embedding ...
txgnn | evaluate_graph_construct valid ...
....
txgnn | Set model ...
txgnn | Set best model ...
txgnn | Loading pre-trained GNN model successfull! <txgnn.TxGNN.TxGNN object at 0x7f576fe05e50>
```