OSError: [Errno 122] Disk quota exceeded
Hi,
I am receiving the following error message. I have 5 TB of space in my tempdir. Can anyone help me figure out what the issue is? Thanks. The same command works when I do not pass --broad.
$ macs2 --version
macs2 2.2.7.1
INFO @ Tue, 01 Dec 2020 23:42:29: Command line: callpeak --broad --tempdir /scratch/xxx/ -t /project/xxx_71/xxx/xx1/A1.clean_sorted.bam -c /project/xxx_71/xxx/xx1/F1.clean_sorted.bam -f BAMPE -g hs --outdir /project/xxx_71/xxx/xx1/broad -n A1.F1 -B -q 0.01
ARGUMENTS LIST:
 name = A1.F1
 format = BAMPE
 ChIP-seq file = ['/project/xxx_71/xxx/xx1/A1.clean_sorted.bam']
 control file = ['/project/xxx_71/xxx/xx1/F1.clean_sorted.bam']
 effective genome size = 2.70e+09
 band width = 300
 model fold = [5, 50]
 qvalue cutoff for narrow/strong regions = 1.00e-02
 qvalue cutoff for broad/weak regions = 1.00e-01
 The maximum gap between significant sites is assigned as the read length/tag size.
 The minimum length of peaks is assigned as the predicted fragment length "d".
 Larger dataset will be scaled towards smaller dataset.
 Range for calculating regional lambda is: 1000 bps and 10000 bps
 Broad region calling is on
 Paired-End mode is on
INFO @ Tue, 01 Dec 2020 23:42:29: #1 read fragment files...
INFO @ Tue, 01 Dec 2020 23:42:29: #1 read treatment fragments...
INFO @ Tue, 01 Dec 2020 23:42:34: 1000000
INFO @ Tue, 01 Dec 2020 23:42:39: 2000000
....
INFO @ Tue, 01 Dec 2020 23:49:26: 93000000
INFO @ Tue, 01 Dec 2020 23:49:31: 94000000
INFO @ Tue, 01 Dec 2020 23:49:36: 95000000
INFO @ Tue, 01 Dec 2020 23:49:36: 95032218 fragments have been read.
INFO @ Tue, 01 Dec 2020 23:50:56: #1.2 read input fragments...
INFO @ Tue, 01 Dec 2020 23:51:01: 1000000
INFO @ Tue, 01 Dec 2020 23:51:06: 2000000
....
INFO @ Wed, 02 Dec 2020 00:00:40: 124000000
INFO @ Wed, 02 Dec 2020 00:00:44: 125000000
INFO @ Wed, 02 Dec 2020 00:00:49: 126000000
INFO @ Wed, 02 Dec 2020 00:00:54: 127000000
INFO @ Wed, 02 Dec 2020 00:00:54: 127040449 fragments have been read.
INFO @ Wed, 02 Dec 2020 00:02:31: #1 mean fragment size is determined as 285.8 bp from treatment
INFO @ Wed, 02 Dec 2020 00:02:31: #1 note: mean fragment size in control is 247.8 bp -- value ignored
INFO @ Wed, 02 Dec 2020 00:02:31: #1 fragment size = 285.8
INFO @ Wed, 02 Dec 2020 00:02:31: #1 total fragments in treatment: 95032218
INFO @ Wed, 02 Dec 2020 00:02:31: #1 user defined the maximum fragments...
INFO @ Wed, 02 Dec 2020 00:02:31: #1 filter out redundant fragments by allowing at most 1 identical fragment(s)
INFO @ Wed, 02 Dec 2020 00:05:58: #1 fragments after filtering in treatment: 67559775
INFO @ Wed, 02 Dec 2020 00:05:58: #1 Redundant rate of treatment: 0.29
INFO @ Wed, 02 Dec 2020 00:05:58: #1 total fragments in control: 127040449
INFO @ Wed, 02 Dec 2020 00:05:58: #1 user defined the maximum fragments...
INFO @ Wed, 02 Dec 2020 00:05:58: #1 filter out redundant fragments by allowing at most 1 identical fragment(s)
INFO @ Wed, 02 Dec 2020 00:10:46: #1 fragments after filtering in control: 115046583
INFO @ Wed, 02 Dec 2020 00:10:46: #1 Redundant rate of control: 0.09
INFO @ Wed, 02 Dec 2020 00:10:46: #1 finished!
INFO @ Wed, 02 Dec 2020 00:10:46: #2 Build Peak Model...
INFO @ Wed, 02 Dec 2020 00:10:46: #2 Skipped...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Call peaks...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Call broad peaks with given level1 -log10qvalue cutoff and level2: 2.000000, 1.000000...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Pre-compute pvalue-qvalue table...
INFO @ Wed, 02 Dec 2020 00:26:01: #3 In the peak calling step, the following will be performed simultaneously:
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Write bedGraph files for treatment pileup (after scaling if necessary)... A1.F1_treat_pileup.bdg
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Write bedGraph files for control lambda (after scaling if necessary)... A1.F1_control_lambda.bdg
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Call peaks for each chromosome...
INFO @ Wed, 02 Dec 2020 00:42:19: #4 Write output xls file... /project/xxx_71/xxx/xx1/broad/A1.F1_peaks.xls
Traceback (most recent call last):
File "/project/xxx_71/xxx/xx1/software/MACS-master/bin/macs2", line 4, in
It seems that you have a 'quota' on your account. Although the hard disk still has free space, the system admin may have set a limit on how much disk space a single user can use. If this is a Linux machine, use the command quota to check.
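In case it helps others debugging the same error: a quick way to read both free space and free inodes for a given path is os.statvfs. This is only a filesystem-level sketch, though; it cannot see per-user quotas, so a user can still hit EDQUOT (errno 122) while these numbers look healthy, and quota or a site tool like myquota is still needed for the per-user limits.

```python
import os

def fs_headroom(path):
    """Free space (GiB) and free inodes on the filesystem holding `path`.

    Caveat: statvfs reports filesystem-level totals, not per-user
    quotas -- hitting "[Errno 122] Disk quota exceeded" is possible
    even when these numbers look fine.
    """
    st = os.statvfs(path)
    free_gib = st.f_bavail * st.f_frsize / 2**30  # blocks -> GiB
    return free_gib, st.f_favail                  # f_favail = free inodes

gib, inodes = fs_headroom(".")
print(f"free space: {gib:.1f} GiB, free inodes: {inodes}")
```

Running it against each directory the job writes to (tempdir, outdir, and home) narrows down which filesystem is actually rejecting the writes.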
You mean something like this? Typing quota does not give any output. I am using scratch2 and a project directory, both of which have plenty of space.
$ quota
$ myquota
/home1/xxx
      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||   used  |   hard
--------------|------||------------|------------||---------|---------
           xxx|316413||  729.95 MiB|  100.00 GiB||     5593|  2000000

/scratch/xxx
      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||   used  |   hard
--------------|------||------------|------------||---------|---------
           xxx|316413||      0 Byte|   10.00 TiB||        0|unlimited

/scratch2/xxx
      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||   used  |   hard
--------------|------||------------|------------||---------|---------
           xxx|316413||      0 Byte|   30.00 TiB||        0|unlimited

/project/xxx_71
      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||   used  |   hard
--------------|------||------------|------------||---------|---------
        xxx_71| 32561||    3.25 TiB|    5.00 TiB||   311373| 30000000

/project/xx_56
      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||   used  |   hard
--------------|------||------------|------------||---------|---------
         xx_56| 32575||    2.23 TiB|   10.00 TiB||     9417| 60000000
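One thing worth verifying (this is an assumption, not something the logs above confirm) is where temporary files actually end up: Python resolves its temp directory from $TMPDIR (then TEMP, TMP) and falls back to /tmp, so if any step of a pipeline does not honour the --tempdir flag, writes can still land on a small, quota-limited filesystem such as /tmp or home. A minimal sketch of checking and redirecting the resolved temp directory (the mytmp path is just an illustration):

```python
import os
import tempfile

# Where does Python think temporary files should go right now?
print("default temp dir:", tempfile.gettempdir())

# Point temporary files at a roomier location (hypothetical path):
big_tmp = os.path.join(os.getcwd(), "mytmp")
os.makedirs(big_tmp, exist_ok=True)
os.environ["TMPDIR"] = big_tmp
tempfile.tempdir = None  # drop the cached value so TMPDIR is re-read

# New temp files now land under big_tmp:
with tempfile.NamedTemporaryFile() as fh:
    print("temp files now land in:", os.path.dirname(fh.name))
```

Exporting TMPDIR in the job script before launching the pipeline achieves the same thing from the shell; MACS2's own --tempdir option is the documented way to redirect its sorting scratch space.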