cp fails in very deep directories
I saw commit https://github.com/uutils/coreutils/pull/2624/commits/ef9c5d4fcf13fdc0c7bc8f3a8d14f84c3986ee51 and figured I should test this situation:
$ name="0123456789ABCDEF"
$ name="${name}${name}${name}${name}"
$ name="${name}${name}${name}${name}"
$ name="${name:0:255}"
$ mkdir deep && cd deep && for i in {1..17}; do mkdir $name && cd $name; done
$ touch foo
$ cp foo bar
$ RUST_BACKTRACE=1 ~/code/uutils/coreutils/target/debug/uutils cp foo baz
cp: error: File name too long (os error 36)
thread 'main' panicked at 'explicit panic', src/cp/cp.rs:124
stack backtrace:
1: 0x558a939fbcff - std::sys::backtrace::tracing::imp::write::h6f1d53a70916b90d
2: 0x558a93a049dd - std::panicking::default_hook::{{closure}}::h137e876f7d3b5850
3: 0x558a93a034d0 - std::panicking::default_hook::h0ac3811ec7cee78c
4: 0x558a93a03ad8 - std::panicking::rust_panic_with_hook::hc303199e04562edf
5: 0x558a92fdb343 - std::panicking::begin_panic::h341b039f84d0b176
at /build/rust/src/rustc-1.13.0/src/libstd/panicking.rs:413
6: 0x558a92ff4131 - uu_cp::copy::{{closure}}::h057cd0f83ebce91f
at /home/tavianator/code/uutils/src/cp/cp.rs:124
7: 0x558a92fdd5ad - <core::result::Result<T, E>>::unwrap_or_else::h06a0c9c0105eda3c
at /build/rust/src/rustc-1.13.0/src/libcore/result.rs:706
8: 0x558a92fef63a - uu_cp::copy::h4a5db8e7a229cb6a
at /home/tavianator/code/uutils/src/cp/cp.rs:119
9: 0x558a92fecc1f - uu_cp::uumain::h570fe0b91cd6265b
at /home/tavianator/code/uutils/src/cp/cp.rs:59
10: 0x558a92e6c3e6 - uutils::main::hdaaa0dae8bde1ca5
at /home/tavianator/code/uutils/src/uutils/uutils.rs:74
11: 0x558a93a0c336 - __rust_maybe_catch_panic
12: 0x558a93a02d41 - std::rt::lang_start::h538f8960e7644c80
13: 0x558a92e6e223 - main
14: 0x7f83b4321b24 - __libc_start_main
15: 0x558a92e58169 - _start
16: 0x0 - <unknown>
What is the expected behavior? Many path constraints seem to be imposed by the underlying filesystem rather than fixed system-wide. It also wouldn't surprise me if, on Linux, bash and zsh impose shell-dependent limits of their own.
https://serverfault.com/questions/9546/filename-length-limits-on-linux https://eklitzke.org/path-max-is-tricky
The shell-specific dependencies do bother me: I have seen other errors break TTY settings under bash that run fine under zsh. I would like an AWK-style test suite for this project: https://www.cs.princeton.edu/courses/archive/spring01/cs333/awktest.html
Perhaps a tests/shell folder?
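For reference, the limits in question can be queried per filesystem with getconf; a quick sketch (the exact values depend on the kernel and filesystem, though POSIX guarantees minimums of 14 for NAME_MAX and 256 for PATH_MAX where they are defined):

```shell
# Query the per-filesystem limits for the root filesystem.
getconf NAME_MAX /    # longest single path component, commonly 255
getconf PATH_MAX /    # longest path accepted as one syscall argument, commonly 4096
```

Note that neither value says anything about how deep a directory tree may be, only how long a single component or a single path argument may be.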
I would expect it to just copy the file. There's no need to compute the full absolute path.
PATH_MAX limits the length of a single path argument passed to a syscall, but there's no need for it to limit how deep the current working directory can be.
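To illustrate that point, here is a sketch (assuming GNU cp or any cp that operates on the paths exactly as given): the working directory is built deeper than a typical PATH_MAX of 4096, yet every individual syscall argument stays short, so a relative-path copy has no reason to fail.

```shell
set -e
cd "$(mktemp -d)"
name=$(printf 'd%.0s' $(seq 1 200))   # one 200-character component, under NAME_MAX
for i in $(seq 1 25); do              # 25 levels * 201 chars ~ 5000, past a typical PATH_MAX
  mkdir "$name" && cd "$name"
done
echo hello > foo
cp foo bar                            # relative paths only; no absolute path is ever built
cmp foo bar && echo "copy ok"
```

The failure in the backtrace above suggests uutils cp was resolving an absolute path it never needed.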
Is this error already documented?
---- test_cp::test_closes_file_descriptors stdout ----
current_directory_resolved:
run: /path/coreutils/target/debug/coreutils cp -r --reflink=auto dir_with_10_files/ dir_with_10_files_new/
thread 'test_cp::test_closes_file_descriptors' panicked at 'Command was expected to succeed.
stdout =
stderr = cp: '/tmp/.tmpVx5ebK/dir_with_10_files/7' -> 'dir_with_10_files_new/7': Too many open files (os error 24)
', tests/common/util.rs:166:9
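That test appears to run cp under a lowered file-descriptor limit to catch descriptor leaks; a rough shell equivalent of the idea (the limit of 30 and the directory names are arbitrary choices for illustration, not taken from the test itself):

```shell
(
  ulimit -n 30                                  # lower the soft fd limit in this subshell only
  rm -rf fd_src fd_dst
  mkdir fd_src
  for i in $(seq 1 10); do echo x > "fd_src/$i"; done
  # A cp that closes each descriptor promptly succeeds even with ~30 fds available;
  # a cp that leaks one fd per file would hit EMFILE ("Too many open files").
  cp -r fd_src fd_dst && echo "no fd leak observed"
)
```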
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hasn't been fixed yet
https://github.com/uutils/coreutils/issues/2688#issuecomment-955769210
the test_cp::test_closes_file_descriptors test now passes (... ok),
and the initial report
$ name="0123456789ABCDEF"
$ name="${name}${name}${name}${name}"
$ name="${name}${name}${name}${name}"
$ name="${name:0:255}"
$ mkdir deep && cd deep && for i in {1..17}; do mkdir $name && cd $name; done
$ touch foo
$ cp foo bar
also works
# pwd | wc -m
4375
# RUST_BACKTRACE=1 /mnt/p2/coreutils/target/release/coreutils cp foo baz2
Might this be considered for closing, then?