# Demo notebooks and scripts for EPOS AI Platform
This repo contains notebooks and scripts demonstrating how to:

- Prepare data for training a SeisBench model detecting P and S waves (i.e. transform MSEED files into the SeisBench data format); see the notebook and the script
- [to update] Explore available data; see the notebook
- Train various CNN models available in the SeisBench library and compare their performance in detecting P and S waves; see the script
- [to update] Validate model performance; see the notebook
- [to update] Use a model to detect the P phase; see the notebook
## Acknowledgments
This code is based on pick-benchmark, the repository accompanying the paper *Which picker fits my data? A quantitative evaluation of deep learning-based seismic pickers*.
## Installation method 1
Please download and install Mambaforge following the official guide.

After successful installation, and within the Mambaforge environment, clone this repository:

```bash
git clone ssh://git@git.plgrid.pl:7999/eai/platform-demo-scripts.git
```

Then, on Linux or Windows platforms, run:

```bash
cd platform-demo-scripts
mamba env create -f epos-ai-train.yml
```

or, on OSX:

```bash
cd platform-demo-scripts
mamba env create -f epos-ai-train-osx.yml
```

This will create a conda environment named `platform-demo-scripts` with all required packages installed.

To run the notebooks and scripts from this repository, activate the `platform-demo-scripts` environment by running:

```bash
conda activate platform-demo-scripts
```
## Installation method 2
Please install Poetry, a tool for dependency management and packaging in Python. With this method, Poetry alone is used to create the Python environment and install the dependencies.

Install all dependencies with Poetry by running:

```bash
poetry install
```

To run the notebooks and scripts from this repository, activate the Poetry environment by running:

```bash
poetry shell
```
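Alternatively, individual commands can be executed in the environment without activating a shell by prefixing them with `poetry run`, for example:

```bash
poetry run python pipeline.py
```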
## Usage
- Prepare a `.env` file with the following content:

  ```
  WANDB_HOST="https://epos-ai.grid.cyfronet.pl/"
  WANDB_API_KEY="your key"
  WANDB_USER="your user"
  WANDB_PROJECT="training_seisbench_models"
  BENCHMARK_DEFAULT_WORKER=2
  ```
- Transform the data into the SeisBench format.

  To use the functionality of the SeisBench library, the data needs to be transformed into the SeisBench data format. If your data is in the MSEED format, you can use the prepared script `mseeds_to_seisbench.py` to perform the transformation. Please make sure that your data has the same structure as the data used in this project. The script assumes that:

  - the data is stored in the following directory structure:
    `input_path/year/station_network_code/station_code/trace_channel.D`,
    e.g. `input_path/2018/PL/ALBE/EHE.D/`
  - the file names follow the pattern:
    `station_network_code.station_code..trace_channel.D.year.day_of_year`,
    e.g. `PL.ALBE..EHE.D.2018.282`
  - the events catalog is stored in the QuakeML format

  Run the `mseeds_to_seisbench.py` script located in the `utils` directory:

  ```bash
  cd utils
  python mseeds_to_seisbench.py --input_path $input_path --catalog_path $catalog_path --output_path $output_path
  ```
  If you want to run the script on a cluster, you can use the script `convert_data.sh` as a template (adjust the grant name, computing resources, and paths) and send the job to the queue using the `sbatch` command on a login node of e.g. Ares:

  ```bash
  cd utils
  sbatch convert_data.sh
  ```
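  For orientation, a minimal SLURM submission script of this kind might look like the sketch below; the actual `convert_data.sh` in the repository is the authoritative template, and the account, partition, and resource values here are placeholders:

  ```bash
  #!/bin/bash
  #SBATCH --job-name=convert_data
  #SBATCH --account=<your grant name>   # placeholder: your computing grant
  #SBATCH --partition=plgrid            # placeholder: adjust to the target cluster
  #SBATCH --time=12:00:00
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=16G

  # Run the conversion with the same arguments as in the interactive example.
  python mseeds_to_seisbench.py --input_path $input_path --catalog_path $catalog_path --output_path $output_path
  ```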
  If your data has a different structure or format, use the notebooks to gain an understanding of the SeisBench format and what needs to be done to transform your data:

  - the Seisbench example notebook, or
  - the [Transforming mseeds from Bogdanka to Seisbench format](utils/Transforming%20mseeds%20from%20Bogdanka%20to%20Seisbench%20format.ipynb) notebook
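  Once the conversion has finished, you can quickly verify that the resulting dataset loads correctly; a minimal sketch (replace the path with the `output_path` you used):

  ```python
  import seisbench.data as sbd

  # Load the converted dataset from the conversion output directory.
  data = sbd.WaveformDataset("path/to/output_path")

  print(data)                  # basic dataset summary
  print(data.metadata.head())  # metadata of the first few traces
  ```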
- Adjust `config.json` and specify:

  - `dataset_name` - the name of the dataset, which will be used to name the folder with evaluation targets and predictions
  - `data_path` - the path to the data in the SeisBench format
  - `experiment_count` - the number of experiments to run for each model type
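  For example, a minimal `config.json` might look like this (the values are illustrative; keep any other keys already present in the repository's file):

  ```json
  {
    "dataset_name": "bogdanka",
    "data_path": "datasets/bogdanka/seisbench_format",
    "experiment_count": 10
  }
  ```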
- Run the pipeline script:

  ```bash
  python pipeline.py
  ```
  The script performs the following steps:

  - Generates evaluation targets in the `datasets/<dataset_name>/targets` directory.
  - Trains multiple versions of the GPD, PhaseNet and ... models to find the best hyperparameters, i.e. those producing the lowest validation loss.

    This step uses the Weights & Biases platform to perform the hyperparameter search (called sweeping), to track the training process, and to store the results. The results are available at `https://epos-ai.grid.cyfronet.pl/<WANDB_USER>/<WANDB_PROJECT>`. Weights and training logs can be downloaded from the platform. Additionally, the most important data are saved locally in the `weights/<dataset_name>_<model_name>/` directory:

    - the weights of the best checkpoint of each model are saved as `<dataset_name>_<model_name>_sweep=<sweep_id>-run=<run_id>-epoch=<epoch_number>-val_loss=<val_loss>.ckpt`
    - metrics and hyperparameters are saved in the `<run_id>` folders
  - Uses the best performing model of each type to generate predictions. The predictions are saved in the `scripts/pred/<dataset_name>_<model_name>/<run_id>` directory.
  - Evaluates the performance of each model by comparing the predictions with the evaluation targets. The results are saved in the `scripts/pred/results.csv` file.
- The default settings are saved in the `config.json` file. To change the settings, edit `config.json` or pass the new settings as arguments to the script. For example, to change the sweep configuration file for the GPD model, run:

  ```bash
  python pipeline.py --gpd_config <new config file>
  ```

  The new config file should be placed in the `experiments` folder, or in the location specified by the `configs_path` parameter in `config.json`.
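After the pipeline completes, the aggregated results can be inspected with pandas; a minimal sketch (the exact columns depend on the evaluation code):

```python
import pandas as pd

# Load the per-model evaluation results written by the pipeline.
results = pd.read_csv("scripts/pred/results.csv")

# Inspect the first rows and the available columns.
print(results.head())
```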
## Troubleshooting

`wandb: ERROR Run .. errored: OSError(24, 'Too many open files')` -> see https://github.com/wandb/wandb/issues/2825
## Licence

TODO
## Copyright

Copyright © 2023 ACK Cyfronet AGH, Poland.

This work was partially funded by the EPOS Project, funded in the frame of PL-POIR4.2.