Stdout and Stderr empty
Trying to print stdout and stderr to files so that I can see what my command is doing. I broke it down to the simple example shown in the docs.
command <<<
    echo "hello world"
    echo "another world"
    >&2 echo "hello world"
>>>
output {
    File message = stdout()
    File message2 = stderr()
}
}
Both stdout and stderr return empty. My goal is to capture all stdout and stderr from whatever is in the command section.
On real workflows, I also get a lot of noise in the stdout/stderr logs, and if a run fails I want to know what caused it.
I'm using Cromwell version 72. Any help?
I can't tell from this fragment what the problem you're seeing is. This workflow worked as expected for me; I tried running it with both miniwdl run and java -jar cromwell.jar run.
version 1.0

task T {
    command {
        echo hello world
        >&2 echo another world
    }
    output {
        File out = stdout()
        File err = stderr()
    }
}

workflow W {
    call T
    output {
        File out = T.out
        File err = T.err
    }
}
Another common form is String s = read_string(stdout()), which puts the command block's stdout into a String result. Sometimes this is easier to use than opening a file.
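For concreteness, here is a minimal sketch of that pattern (the task and output names are just illustrative, not from this thread):

version 1.0

task greet {
    command {
        echo "hello world"
    }
    output {
        # read_string() returns the file contents as a String,
        # with the trailing newline stripped.
        String message = read_string(stdout())
    }
}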
Trying the example above worked, but it seems that Cromwell fails to capture stdout and stderr from jobs run in Docker? Here is a more detailed example:
version development

workflow wf {
    input {
        File left_ = "/Users/leo/dev/tools/trinity/2.13.2/tests/data/reads.left.fa"
        File right_ = "/Users/leo/dev/tools/trinity/2.13.2/tests/data/reads.right.fa"
        String seqType = "fa"
    }
    call trinity {
        input:
            left_ = left_,
            right_ = right_,
            seqType_ = seqType,
    }
    output {
        File output_fasta_ = trinity.output_fasta_
        File out = trinity.out
        File err = trinity.err
    }
}
task trinity {
    input {
        File? left_
        File? right_
        File? sample_file_
        String seqType_
        String? memory_ = "1"
        Int? cpus_
        String output_dir = 'trinity_out'
    }
    command <<<
        set -e -o pipefail
        Trinity \
            ~{if defined(left_) then '--left ${left_}' else ''} \
            ~{if defined(right_) then '--right ${right_}' else ''} \
            ~{if defined(seqType_) then '--seqType ${seqType_}' else ''} \
            ~{if defined(memory_) then '--max_memory ${memory_}G' else ''} \
            ~{if defined(output_dir) then '--output ${output_dir}' else ''}
    >>>
    runtime {
        docker: 'trinity@sha256:e6d449f0838b91beaa17c15cf4d391a79ff6069badf98e92b686062624946630'
        docker_user: 'root'
        memory: if defined(memory_) then "${memory_}" else ""
        cpu: if defined(cpus_) then "${cpus_}" else ""
    }
    output {
        File output_fasta_ = "trinity_out.Trinity.fasta"
        File out = stdout()
        File err = stderr()
    }
}
When I go look at stdout and stderr, they are both empty; nothing from the command was captured. Not sure what is going on here.
The only thing I can think of is that Cromwell cannot capture stdout and stderr from jobs run in Docker. Another idea is that the tool itself does not write to stdout or stderr, but I confirmed locally that it does.
Docker image: docker pull pegi3s/trinity
I think maybe your tool outputs lines to the console, but they aren't actually going to the stdout and stderr streams.
Yes, I have tried Paul's example and it works just fine. Is there any way to redirect tool outputs? I have tried adding 2>&1 to the end of the command with no luck.
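One option, if the container is writing to the console, is to redirect the tool's streams to explicit log files inside the command block and expose those as File outputs, instead of relying on stdout()/stderr(). A minimal sketch, reusing the Trinity image from the example above (the task name and the trinity.log file name are only illustrative):

version development

task trinity_logged {
    input {
        File left_
        File right_
        String seqType_
    }
    command <<<
        set -e -o pipefail
        # Send stdout to trinity.log; '2>&1' (note the order: it must come
        # after the stdout redirection) then sends stderr to the same file.
        Trinity \
            --left ~{left_} \
            --right ~{right_} \
            --seqType ~{seqType_} \
            --max_memory 1G \
            --output trinity_out \
            > trinity.log 2>&1
    >>>
    runtime {
        docker: 'trinity@sha256:e6d449f0838b91beaa17c15cf4d391a79ff6069badf98e92b686062624946630'
    }
    output {
        File log = "trinity.log"
    }
}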
I think I found the issue. In my script.submit I am getting:
mkfifo: cannot create fifo '/cromwell-executions/wf/87c7dfae-7ad1-4435-960b-a2ecff96c90d/call-trinity/tmp.bb40533c/out.1': Operation not permitted
mkfifo: cannot create fifo '/cromwell-executions/wf/87c7dfae-7ad1-4435-960b-a2ecff96c90d/call-trinity/tmp.bb40533c/err.1': Operation not permitted
/cromwell-executions/wf/87c7dfae-7ad1-4435-960b-a2ecff96c90d/call-trinity/execution/script: line 18: /cromwell-executions/wf/87c7dfae-7ad1-4435-960b-a2ecff96c90d/call-trinity/tmp.bb40533c/err.1: No such file or directory
I am running the Docker container as root and by default have given it write permissions. This was done on a Mac OS system.
I tried this workflow again on a Linux-based system and stdout and stderr work right out of the box. This is very interesting, and there should be a workaround for Mac OS if possible. @aednichols @pshapiro4broad can this get done?
If /cromwell-executions/ is referring to the root of your Mac system, I would not expect that to work due to a Mac feature known as System Integrity Protection. You can test this in isolation by issuing sudo mkdir /test, which returns mkdir: /test: Read-only file system for me (Mac OS 12.2.1).
I do not recommend using an escalation to root to work around, well, pretty much anything.
If you are a Mac user and are also experiencing this issue, you have to compile Cromwell from source and change the file backend/src/main/scala/cromwell/backend/standard/StandardAsyncExecutionActor.scala as follows:
- |mkfifo "$$$out" "$$$err"
- |trap 'rm "$$$out" "$$$err"' EXIT
+ |touch "$$$out" "$$$err"
|touch $stdoutRedirection $stderrRedirection
- |tee $stdoutRedirection < "$$$out" &
- |tee $stderrRedirection < "$$$err" >&2 &
This will allow you to bypass System Integrity Protection and produce stdout and stderr logs when running locally.
Thanks for the follow-up; interesting to learn you found a workaround.
I'd be curious to see whether there is a simpler workaround involving a change to the directory you run Cromwell from.
When I run with local Docker, Cromwell puts cromwell-executions at the same path as the executable, i.e. if I run Cromwell from /Users/anichols/Projects/cromwell I get files at paths like /Users/anichols/Projects/cromwell/cromwell-executions/three_step/ce6a6385-a8d6-4532-9aa4-d2eacdd89f5b/call-cgrep/execution/rc