How to speed up task execution using snakemake

Question:

Machine: 48 cores, 96 threads, RAM 256GB
System: Ubuntu 20.04
Python: 3.9

I have a python script for some data processing and analysis and an input dataset containing around 40,000 files for the job.

The script can be run with: python a.py -i sample_list.txt -o /path/to/outdir.

The sample_list.txt contains the file prefixes of all 40,000 files in the dataset; each prefix represents a sample ID. In python, it is imported as a list. /path/to/outdir defines the output directory. The software first creates a new folder in the output directory for each prefix, then writes the generated data into these folders sample by sample.
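
For illustration, here is a rough sketch of how the software appears to behave (the function names are hypothetical, not the software's actual code):

import os

def load_samples(sample_list_path):
    # read one sample prefix per line
    with open(sample_list_path) as f:
        return f.read().splitlines()

def run_all(sample_list_path, outdir):
    # each sample gets its own folder under outdir, processed one after another
    for sample in load_samples(sample_list_path):
        sample_dir = os.path.join(outdir, sample)
        os.makedirs(sample_dir, exist_ok=True)
        # ... per-sample processing writes its results into sample_dir ...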

I found that this script analyzes the data one sample at a time. I estimated the runtime: it would need nearly 240 days to finish all the jobs for this dataset, which is unacceptable. I think parallelizing the job submission could speed things up, and that is how snakemake came to my attention.

I did some reading on snakemake. In my case, I can provide three things in the Snakefile:

input: "sample_list.txt"
output: "/path/to/outdir"
shell: "python a.py -i {input} -o {output}"

But I have a question:

If I provide sample_list.txt as the input, the script reads the file prefixes from it directly, instead of matching an input file pattern such as {input}_1.txt. Is it possible to parallelize the jobs based on sample_list.txt, or must I define an input file pattern for snakemake's input?

Thanks

Additional:

Here is an example.

The filename looks like: sample1_1.fq, sample1_2.fq, sample2_1.fq, sample2_2.fq, etc.

The software requires a list: name = ['sample1','sample2','sample3','sample4']

To get all the sample names (file prefixes), I extracted them from the filenames and stored them in sample_list.txt (one way to do this is sketched after the list below):

sample1
sample2
sample3
sample4
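
For reference, the prefixes can be collected with something like the following snippet (the fastq directory is a placeholder path):

import glob
import os

# collect unique prefixes from files named like sample1_1.fq / sample1_2.fq
prefixes = sorted({
    os.path.basename(p).rsplit("_", 1)[0]
    for p in glob.glob("/path/to/fastq/*.fq")
})

with open("sample_list.txt", "w") as f:
    f.write("\n".join(prefixes) + "\n")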

How can I parallelize the jobs?

Asked By: tomasz


Answers:

You don’t necessarily need snakemake to achieve parallelization.
If you wish to use snakemake, you need to define a rule that processes a single sample. The rule you describe treats all samples at once, and would therefore run only once, without any parallelization.

The snakemake way:

with open("sample_list.txt", 'r') as f:
    names = f.read().splitlines()

rule all:
    input:  expand("/path/to/outdir/{sample}/{sample}_processedFile.txt",sample=names)

rule process:
    input:  fastq1 = "/path/to/fastq/{sample}_1.fq",
            fastq2 = "/path/to/fastq/{sample}_2.fq"
    output: "/path/to/outdir/{sample}/{sample}_processedFile.txt"
    params: outdir = "/path/to/outdir/{sample}"
    shell:  "python a.py -i {input.fastq1} {input.fastq2} -o {params.outdir}"

This assumes:

  • your script a.py takes a pair of fastq files as arguments (or a sample name, if the script knows how to locate the fastq files from it).
  • your script a.py processes only one sample at a time.
  • all fastq files are located in the same folder.

And you would run snakemake this way:

snakemake -j X

where X is the number of parallel jobs (do not exceed the number of cores on the machine)

Remember that all python code outside of rules is executed before building the DAG. However, I’m not completely sure snakemake will be able to build its DAG for 40k files…
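
If DAG construction for 40k samples turns out to be slow, one way to check first (just a suggestion, with the Snakefile above otherwise unchanged) is to slice the sample list and do a dry run with snakemake -n:

# temporary change at the top of the Snakefile: keep only the first 100 samples
with open("sample_list.txt", 'r') as f:
    names = f.read().splitlines()[:100]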

The python way:

You can use the python module multiprocessing:

import multiprocessing

def processSample(sample):
    # process your sample
    pass

# read the sample names
with open("sample_list.txt", 'r') as f:
    names = f.read().splitlines()

# get number of maximum cores on the machine (-1 or more not to overload)
maxCores = multiprocessing.cpu_count() - 1
# build a pool of workers
pool = multiprocessing.Pool(processes=maxCores)

# add jobs to the pool (note the trailing comma: args must be a tuple)
for sample in names:
    pool.apply_async(processSample, args=(sample,))

pool.close()
pool.join()
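
If a.py cannot easily be imported as a module, one possible way to fill in processSample (assuming a.py accepts a list file containing a single prefix, which may not be the case) is to write a one-sample list to a temporary file and call the existing script on it:

import os
import subprocess
import tempfile

def processSample(sample):
    # write a temporary list containing just this one sample
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write(sample + "\n")
        tmp_path = tmp.name
    try:
        # call the existing script on the one-sample list
        subprocess.run(
            ["python", "a.py", "-i", tmp_path, "-o", "/path/to/outdir"],
            check=True,
        )
    finally:
        os.remove(tmp_path)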
Answered By: Eric C.