# nf-core/drugresponseeval


## Introduction

DrEval is a bioinformatics framework that comprises a PyPI package (drevalpy) and a Nextflow pipeline (this repository). It ensures that drug response evaluations are statistically sound, biologically meaningful, and reproducible. By automating standardized evaluation protocols and preprocessing workflows, DrEval simplifies the implementation of drug response prediction models, allowing researchers to focus on their modeling innovations, and it makes hyperparameter tuning fair and consistent. Thanks to its flexible model interface, DrEval supports any model type, from simple statistical models to complex neural networks. By contributing your model to the DrEval catalog, you can increase your work's exposure, reusability, and transferability.

Pipeline diagram showing the major steps of nf-core/drugresponseeval

  1. The response data is loaded
  2. All models are trained and evaluated in a cross-validation setting
  3. For each CV split, the best hyperparameters are determined using a grid search per model
  4. The model is trained on the full training set (train & validation) with the best hyperparameters to predict the test set
  5. If randomization tests are enabled, the model is trained on the full training set with the best hyperparameters to predict the randomized test set
  6. If robustness tests are enabled, the model is trained N times on the full training set with the best hyperparameters
  7. Plots are created summarizing the results

For baseline models, no randomization or robustness tests are performed.
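Steps 2–4 above can be sketched as a nested loop: an outer cross-validation split, an inner grid search over hyperparameters on a validation split, and a final refit on the combined train + validation data. The sketch below is a toy illustration of that protocol only; the model, hyperparameter grid, and data are hypothetical and not the drevalpy API:

```python
# Illustrative nested cross-validation with grid search (hypothetical,
# not the drevalpy API): for each outer CV split, pick the hyperparameter
# minimizing validation error, then refit on train + validation.
import random

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def fit_predict(train, alpha, test_x):
    # Toy "model": shrink the training-set mean response toward zero by alpha.
    mean = sum(y for _, y in train) / len(train)
    return [alpha * mean for _ in test_x]

random.seed(0)
data = [(x, 2.0 + random.gauss(0, 0.1)) for x in range(30)]  # (feature, response)
grid = [0.5, 0.9, 1.0]  # hyperparameter candidates

n_splits = 3
fold = len(data) // n_splits
results = []
for i in range(n_splits):
    test = data[i * fold:(i + 1) * fold]
    rest = data[:i * fold] + data[(i + 1) * fold:]
    val, train = rest[:fold], rest[fold:]
    # Grid search: evaluate each candidate on the validation split
    best_alpha = min(grid, key=lambda a: mse([y for _, y in val],
                                             fit_predict(train, a, val)))
    # Refit on train + validation with the best hyperparameters, predict test
    preds = fit_predict(train + val, best_alpha, test)
    results.append(mse([y for _, y in test], preds))

print(best_alpha, sum(results) / len(results))
```

The randomization and robustness tests (steps 5–6) reuse the final refit step, but with a randomized test set or with N repeated trainings, respectively.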

## Usage

> [!NOTE]
> If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with `-profile test` before running the workflow on actual data.

Now, you can run the pipeline using:

```bash
nextflow run nf-core/drugresponseeval \
   -profile <docker/singularity/.../institute> \
   --models <model1,model2,...> \
   --baselines <baseline1,baseline2,...> \
   --dataset_name <dataset_name> \
   --path_data <path_data>
```

> [!WARNING]
> Please provide pipeline parameters via the CLI or the Nextflow `-params-file` option. Custom config files, including those provided by the `-c` Nextflow option, can be used to provide any configuration except for parameters; see docs.
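A `-params-file` might look like this (a minimal sketch; the placeholder values mirror the command-line parameters above and are not a vetted configuration):

```yaml
# params.yaml -- placeholder values, adjust to your setup
models: "<model1>,<model2>"
baselines: "<baseline1>"
dataset_name: "<dataset_name>"
path_data: "<path_data>"
```

You would then pass it with `nextflow run nf-core/drugresponseeval -profile docker -params-file params.yaml`.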

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

## Pipeline output

To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

## Credits

nf-core/drugresponseeval was originally written by Judith Bernett (TUM) and Pascal Iversen (FU Berlin).

We thank the following people for their extensive assistance in the development of this pipeline:

## Contributions and Support

Contributors to nf-core/drugresponseeval and the drevalpy PyPI package:

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #drugresponseeval channel (you can join with this invite).

## Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

> The nf-core framework for community-curated bioinformatics pipelines.
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> _Nat Biotechnol._ 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.