Massive conda update #1663
Conversation
All the non-autogenerated files look good to me!
Good for me then
@@ -64,12 +64,12 @@ process {
    // BCFTOOLS ANNOTATE
    if (params.tools && params.tools.split(',').contains('bcfann')) {
```
// ALL ANNOTATION TOOLS
if (params.tools && (params.tools.split(',').contains('snpeff') || params.tools.split(',').contains('vep') || params.tools.split(',').contains('merge') || params.tools.split(',').contains('bcfann'))) {
    withName: 'NFCORE_SAREK:SAREK:VCF_ANNOTATE_ALL:.*:(TABIX_BGZIPTABIX|TABIX_TABIX)' {
        ext.prefix = { input.name - '.vcf' }
        publishDir = [
            mode: params.publish_dir_mode,
            path: { "${params.outdir}/annotation/${meta.variantcaller}/${meta.id}/" },
            pattern: "*{gz.tbi}"
        ]
    }
}

if (params.tools && (params.tools.split(',').contains('snpeff') || params.tools.split(',').contains('merge'))) {
    withName: 'NFCORE_SAREK:SAREK:VCF_ANNOTATE_ALL:VCF_ANNOTATE_SNPEFF:TABIX_BGZIPTABIX' {
        publishDir = [
            mode: params.publish_dir_mode,
            path: { "${params.outdir}/annotation/${meta.variantcaller}/${meta.id}/" },
            pattern: "*{gz,gz.tbi}",
            saveAs: { params.tools.split(',').contains('snpeff') ? it : null }
        ]
    }
}
```
This could be cleaned up, I think?
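One way it could look, as a sketch only (assuming the classic Groovy config parser, which allows a local variable inside the process block; `toolList` is a name I made up, not something in sarek):

```
// Sketch only: factor out the repeated params.tools.split(',') calls.
// 'toolList' is a hypothetical local name, not part of this PR.
def toolList = params.tools ? params.tools.split(',').collect { it.trim() } : []

// ALL ANNOTATION TOOLS
if (['snpeff', 'vep', 'merge', 'bcfann'].any { toolList.contains(it) }) {
    withName: 'NFCORE_SAREK:SAREK:VCF_ANNOTATE_ALL:.*:(TABIX_BGZIPTABIX|TABIX_TABIX)' {
        ext.prefix = { input.name - '.vcf' }
        publishDir = [
            mode: params.publish_dir_mode,
            path: { "${params.outdir}/annotation/${meta.variantcaller}/${meta.id}/" },
            pattern: "*{gz.tbi}"
        ]
    }
}

if (toolList.contains('snpeff') || toolList.contains('merge')) {
    withName: 'NFCORE_SAREK:SAREK:VCF_ANNOTATE_ALL:VCF_ANNOTATE_SNPEFF:TABIX_BGZIPTABIX' {
        publishDir = [
            mode: params.publish_dir_mode,
            path: { "${params.outdir}/annotation/${meta.variantcaller}/${meta.id}/" },
            pattern: "*{gz,gz.tbi}",
            saveAs: { params.tools.split(',').contains('snpeff') ? it : null }
        ]
    }
}
```

That keeps the selectors identical while removing the repeated `params.tools.split(',')` calls.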
@@ -50,7 +50,10 @@ workflow FASTQ_CREATE_UMI_CONSENSUS_FGBIO {
    // Using newly created groups
    // To call a consensus across reads in the same group
    // And emit a consensus BAM file
    CALLUMICONSENSUS(GROUPREADSBYUMI.out.bam)
    // TODO: add params for call_min_reads and call_min_baseq
This should be tracked as an issue rather than a TODO comment.
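For whenever that issue gets picked up, a rough sketch of how the params could be wired through `ext.args`; the param names and defaults below are hypothetical, and the fgbio flags (`--min-reads`, `--min-input-base-quality`) should be double-checked against the CallMolecularConsensusReads docs:

```
// Hypothetical sketch only: param names and defaults are not part of this PR.
params {
    fgbio_call_min_reads = 1
    fgbio_call_min_baseq = 10
}

process {
    withName: 'CALLUMICONSENSUS' {
        // Pass the values through to fgbio CallMolecularConsensusReads
        ext.args = { "--min-reads ${params.fgbio_call_min_reads} --min-input-base-quality ${params.fgbio_call_min_baseq}" }
    }
}
```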
@@ -38,12 +45,13 @@ process DEEPVARIANT {
        --output_gvcf=${prefix}.g.vcf.gz \\
        ${args} \\
        ${regions} \\
        --intermediate_results_dir=. \\
        ${par_regions} \\
        --intermediate_results_dir=tmp \\
Are we sure about this one? If Nextflow starts writing to /tmp here again, it will break a lot of clusters. The `.` should ensure it writes to the current scratch directory.
I'll update the module upstream
It should not write to /tmp; it should write to a directory named `tmp` inside the work/scratch directory.
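To illustrate the point (a standalone sketch, not sarek code): every command in a Nextflow task runs with its working directory set to the task's work/scratch dir, so a relative `tmp` ends up under that directory, never under `/tmp`:

```
// Sketch only, not part of sarek: shows that a relative 'tmp' path
// is created inside the task work directory.
process SHOW_RELATIVE_TMP {
    output:
    stdout

    script:
    """
    mkdir -p tmp
    echo "task work dir:    \$PWD"
    echo "intermediate dir: \$PWD/tmp"
    """
}

workflow {
    SHOW_RELATIVE_TMP()
    SHOW_RELATIVE_TMP.out.view()
}
```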
    tuple val(meta), path(input), path(input_index), path(intervals), path(recal_table)
    tuple val(meta1), path(fasta)
    tuple val(meta2), path(fai)
    tuple val(meta3), path(dbsnp)
    tuple val(meta4), path(dbsnp_tbi)
Did you test this? I am surprised this didn't require changes to the module call in bam_variant_calling_germline
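For context, this is the kind of call-site shape the question is about; a sketch only, with the module and channel names as placeholders rather than actual sarek code:

```
// Illustrative only: a module whose references arrive as separate (meta, file)
// tuples has to be called with matching arguments. All names are hypothetical.
SENTIEON_HAPLOTYPER(
    cram_intervals_recal,                         // tuple: meta, input, index, intervals, recal_table
    fasta.map     { fa  -> [ [ id: 'fasta' ],     fa  ] },
    fasta_fai.map { fai -> [ [ id: 'fai' ],       fai ] },
    dbsnp.map     { db  -> [ [ id: 'dbsnp' ],     db  ] },
    dbsnp_tbi.map { tbi -> [ [ id: 'dbsnp_tbi' ], tbi ] }
)
```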
NOT AT ALL, I thought we had fixed the Sentieon server so we could run CI
🤦 Anyhow, I can see that all Sentieon modules are using `oras`, so we still need to update those one more time.
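The directives in question look roughly like this; the image URIs and tags below are placeholders, not the real Sentieon ones:

```
// Placeholder URIs/tags only: shows the oras:// Singularity image vs Docker
// registry split that the Sentieon module container directives use.
container "${ workflow.containerEngine == 'singularity' ?
    'oras://community.wave.seqera.io/library/sentieon:<tag>' :
    'nf-core/sentieon:<tag>' }"
```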
echo "Decoded and exported Sentieon test-license system environment variables" | ||
fi | ||
sentieon driver $args -t $task.cpus $input_list -r ${fasta} --algo LocusCollector $args2 --fun score_info ${prefix_basename}.score | ||
sentieon driver $args3 -t $task.cpus $input_list -r ${fasta} --algo Dedup $args4 --score_info ${prefix_basename}.score --metrics ${metrics} ${prefix} |
We are not using the suffix anymore. I think the config here needs updating so that the prefix ends in `.cram`:
sarek/conf/modules/sentieon_dedup.config (line 19 in 5cc3049):
ext.prefix = { "${meta.id}.dedup" }
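A sketch of the suggested change (the `withName` selector is shortened here, and `.cram` is the ending proposed in this comment, not something already merged):

```
// conf/modules/sentieon_dedup.config — sketch of the suggested fix only
withName: 'SENTIEON_DEDUP' {
    // The module writes to ${prefix} verbatim, so the extension has to be part of it
    ext.prefix = { "${meta.id}.dedup.cram" }
}
```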
@@ -64,7 +39,12 @@ process SENTIEON_BWAMEM {
        -t $task.cpus \\
        \$INDEX \\
        $reads \\
        | sentieon util sort -r $fasta -t $task.cpus -o ${prefix}.bam --sam2bam -
        | sentieon util sort -r $fasta -t $task.cpus -o ${prefix} --sam2bam -
Similar here. The prefix should already contain the final file ending. We could consider using CRAM here already; I assume people would always run Sentieon for preprocessing if they have it.
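As a sketch, the config-side prefix would then carry the ending itself (`.bam` shown; whether CRAM can be written directly at this step would need checking):

```
// Sketch only: with 'sentieon util sort -o ${prefix}' no longer appending '.bam',
// the configured prefix has to include the file ending.
withName: 'SENTIEON_BWAMEM' {
    ext.prefix = { "${meta.id}.sorted.bam" }   // or a .cram ending if CRAM output is adopted
}
```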
Replaces #1662
PR checklist

- Make sure your code lints (nf-core lint).
- Ensure the test suite passes (nextflow run . -profile test,docker --outdir <OUTDIR>).
- Check for unexpected warnings in debug mode (nextflow run . -profile debug,test,docker --outdir <OUTDIR>).
- docs/usage.md is updated.
- docs/output.md is updated.
- CHANGELOG.md is updated.
- README.md is updated (including new tool citations and authors/contributors).