goofys
Issue with auto-mounting on boot
I'm using goofys to mount (on-demand) S3 buckets on Ubuntu 16.04 images (on AWS). I'm seeing the following errors at boot, and the filesystem isn't mounted:
Jul 24 18:31:47 ip-10-0-21-129 systemd[1]: Started LXD - container startup/shutdown.
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1337]: main.ERROR Unable to access 'lb.test': %!v(PANIC=runtime error: invalid memory address or nil pointer dereference)
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: s3.ERROR code=incorrect region, the bucket is not in 'us-east-1' region msg=301 request=
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: s3.ERROR code=incorrect region, the bucket is not in 'us-east-1' region msg=301 request=
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: main.ERROR Unable to access 'lb.test': BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region#012#011status code: 301, request id: , host id:
Jul 24 18:31:48 ip-10-0-21-129 mount[1282]: 2017/07/24 18:31:48.089395 main.FATAL Unable to mount file system, see syslog for details
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: main.FATAL Mounting file system: Mount: initialization failed
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: mnt-s3-lb.test.mount: Mount process exited, code=exited status=1
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: Failed to mount /mnt/s3/lb.test.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: Dependency failed for Remote File Systems.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: remote-fs.target: Job remote-fs.target/start failed with result 'dependency'.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: mnt-s3-lb.test.mount: Unit entered failed state.
I have the right IAM permissions and have the following /etc/fstab entry:
goofys#lb.test /mnt/s3/lb.test fuse ro,_netdev,allow_other,--file-mode=0666 0 0
I can confirm goofys works if I try to manually mount after boot with
sudo mount /mnt/s3/lb.test
Is there a way to robustly get goofys to mount files on startup?
this is strange, can you include the output of --debug_s3?
Is there a way to pass debug_s3 as a flag via fstab? I won't be able to get you the debug log otherwise, because this happens during boot.
yup just do --debug_s3 like you would with --file-mode.
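For example (the bucket name and mount point here are placeholders; adapt the rest of the option list from your own entry), the flag goes into the fstab option list alongside the other goofys flags:

```
goofys#my-bucket /mnt/s3/my-bucket fuse ro,_netdev,allow_other,--file-mode=0666,--debug_s3 0 0
```

On the next boot, the debug output should then land in syslog along with the other goofys messages.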
I am running into the same issue when using Chef. I can manually run mount /data with no issue.
goofys#mybucket /data fuse _netdev,allow_other,--dir-mode=0777,--file-mode=0666,--debug_s3 0 2
https://gist.github.com/chasebolt/a28bac3785d2df8d1685d60cf8f19421
Using the /root/.aws/credentials file works fine; something about using IAM roles is failing. I temporarily gave the IAM role full access and it still failed.
I haven't had the chance to spin up a cluster to try and repro this issue. I'll try and get to it next week.
seems like retrieving the IAM role is erroring:
Jul 28 19:50:13 i-0eef1f0ad642878c5 /usr/bin/goofys[12999]: s3.DEBUG DEBUG: Validate Response ec2metadata/GetMetadata failed, not retrying, error EC2MetadataError: failed to make EC2Metadata request
caused by: <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
@chasebolt are you sure your IAM is setup correctly? What if you write a wrapper script for goofys and sleep a bit first?
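A minimal sketch of such a wrapper, assuming the real binary lives at /usr/local/bin/goofys (the path, retry counts, and wrapper name are assumptions, not anything goofys ships). Since IAM role credentials come from the EC2 instance metadata service, the idea is to wait until that endpoint responds before handing off:

```shell
#!/bin/sh
# Hypothetical wrapper (e.g. /usr/local/sbin/goofys-wrapper): wait for
# the EC2 metadata service before starting goofys, since IAM role
# credentials come from it and the network may not be up yet at boot.

# retry a command up to $1 times, sleeping 1s between attempts
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

main() {
    # wait up to 30s for the instance metadata endpoint to answer
    wait_for 30 curl -sf http://169.254.169.254/latest/meta-data/ || exit 1
    # then hand off to the real goofys binary with the original arguments
    exec /usr/local/bin/goofys "$@"
}

# only run when invoked with mount arguments
if [ "$#" -gt 0 ]; then
    main "$@"
fi
```

You would then point the fstab entry at the wrapper instead of goofys itself (e.g. `goofys-wrapper#mybucket /data fuse ...`), so mount invokes it with the same arguments.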
I am having the same issue on an AWS Ubuntu 14.04 server. I have been running goofys since 0.0.9 and it has been great. I recently upgraded to 0.0.17-3d40e98 and decided to finally add an entry to my fstab file. When I reboot, my bucket is not mounted. However, I can successfully mount the bucket from the command line. Here is my fstab entry:
goofys#my-staging /mnt/staging fuse --uid=106,--gid=111,_netdev,allow_other,--file-mode=0644,--debug_s3 0 0
I have this bucket set up for use with vsftpd, and the uid/gid correspond to the ftp user. This is the successful command line I am using:
goofys --uid 106 --gid 111 -o allow_other my-staging /mnt/staging
I added the --debug_s3 option to my last reboot attempt, but there is no output in any of the system logs that I can find (grep goofys *.log and manually looking through them...). When I successfully mount using the command line, I get the following in /var/log/syslog:
Sep 10 04:38:47 ip-xxx-xxx-xxx-xxx /usr/local/bin/goofys[1450]: main.INFO File system has been successfully mounted.
After upgrading to 0.0.17, I also noticed a zombie process which I had never seen before on this server. Here is the output of ps and pstree immediately after rebooting:
# ps aux | grep Z
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 361 0.0 0.0 0 0 ? Zs 04:35 0:00 [goofys] <defunct>
# pstree -p -s 361
init(1)───mountall(250)───mount(322)───sh(324)───goofys(325)───goofys(361)
# pstree
init─┬─acpid
├─atd
├─cron
├─dbus-daemon
├─dhclient
├─7*[getty]
├─master─┬─pickup
│ └─qmgr
├─mountall───mount───sh───goofys─┬─goofys
│ └─4*[{goofys}]
├─rsyslogd───3*[{rsyslogd}]
├─sshd───sshd───sshd───bash───sudo───su───bash───pstree
├─systemd-logind
├─systemd-udevd
├─upstart-file-br
├─upstart-socket-
├─upstart-udev-br
└─vsftpd
Here is the output of pstree after successfully mounting from the command line:
# pstree -p -s 361
init(1)───mount(322)───sh(324)───goofys(325)───goofys(361)
# pstree
init─┬─acpid
├─atd
├─cron
├─dbus-daemon
├─dhclient
├─7*[getty]
├─goofys───6*[{goofys}]
├─master─┬─pickup
│ └─qmgr
├─mount───sh───goofys─┬─goofys
│ └─4*[{goofys}]
├─rsyslogd───3*[{rsyslogd}]
├─sshd───sshd───sshd───bash───sudo───su───bash───pstree
├─systemd-logind
├─systemd-udevd
├─upstart-file-br
├─upstart-socket-
├─upstart-udev-br
└─vsftpd
The output of ps aux is the same. It definitely looks like it's hanging on reboot when called by mount, for whatever reason. Not sure what else I can provide, since there is no debug output in syslog. I've copied the output of the entire last reboot, if you'd like to see it. Otherwise, if there is anything else I can provide, let me know. Besides not mounting on boot, it works perfectly, and I've been very happy with the performance uploading files through vsftpd!
I have the same issue. The version is v0.0.9 on CentOS 6.9 x64, with golang 1.7.6 and fuse 2.8.3. My fstab entry is below.
/usr/local/bin/goofys#hoge-contents /mnt/s3src fuse _netdev,allow_other,--uid=501,--gid=501 0 0
After rebooting the OS, the fstab entry does not work and the S3 bucket is not mounted. I see the following errors in the system logs:
Sep13 11:17:13 user /usr/local/bin/goofys[1087]: main.ERROR Unable to access 'hoge-contents': BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region
Sep 13 11:17:13 user /usr/local/bin/goofys[1087]: main.FATAL Mounting file system: Mount: initialization failed
But if I manually run mount -a, it succeeds with the following logs:
Sep 13 11:19:29 user /usr/local/bin/goofys[1499]: s3.INFO Switching from region 'us-east-1' to 'ap-northeast-1'
Sep 13 11:19:29 user kernel: fuse init (API version 7.14)
Sep 13 11:19:29 user /usr/local/bin/goofys[1499]: main.INFO File system has been successfully mounted.
Also, my /root/.aws/credentials file contains my key info. My /root/.aws/config file is as below:
[default]
output =
region = ap-northeast-1
So, what is wrong with my settings? If you have a solution, please tell me how to fix it. Thank you.
@haraa You can add the region option to fstab. See #211 for an example.
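For example, something like the following (this assumes the bucket lives in ap-northeast-1, matching the "Switching from region" log above, and that your goofys build accepts the --region flag; check `goofys --help` for your version):

```
/usr/local/bin/goofys#hoge-contents /mnt/s3src fuse _netdev,allow_other,--uid=501,--gid=501,--region=ap-northeast-1 0 0
```

Pinning the region should avoid the initial probe against us-east-1 that was failing at boot.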
@jeff-kilbride Thank you for the reply to my question! I am going to try it.
I have the same issue on Ubuntu 14: mount -a works after boot, but if you reboot the box, the bucket is not mounted.
For the people who have reported this, it's not clear to me that they all have the same problem. So could you please attach your syslog with --debug_s3?
Here's a gist with the full syslog output of my last reboot:
https://gist.github.com/jeff-kilbride/984c72e702988172be24a3d36e4e585a
I couldn't find anything in there related to goofys, even with the --debug_s3 option.
The same is literally true for me. I cannot see anything in the syslog when it does not mount; it just doesn't mount, and nothing gets logged despite having the debug options set.
@cblackuk Do you also have a zombie goofys process after reboot?
@jeff-kilbride I cannot see any zombie processes, no... Also, 1 out of 10 reboots it will actually mount; the other 9 times it will not, and I am making no changes to it whatsoever. It is just magically working or magically not working. But when it is working, it spams syslog with all the debug logs.
@jeff-kilbride Actually... you are correct! When it does not mount I can see:
ps axo stat,ppid,pid,comm | grep -w defunct
Zs 980 1020 goofys
Do all of you use IAM, or credentials from ~/.aws/credentials?
I use ~/.aws/credentials -- as root.
Same
I still have no clue about this. For people who use .aws/credentials, could you try adding --profile default?
I tried adding --profile=default to my fstab entry:
goofys#my-staging /mnt/staging fuse --uid=106,--gid=111,_netdev,allow_other,--file-mode=0644,--profile=default,--debug_s3 0 0
I'm still getting a zombie process when I reboot and my mount point is not there:
$ pstree
init─┬─acpid
├─atd
├─cron
├─dbus-daemon
├─dhclient
├─7*[getty]
├─master─┬─pickup
│ └─qmgr
├─mountall───mount───sh───goofys─┬─goofys
│ └─4*[{goofys}]
├─ondemand───sleep
├─rsyslogd───3*[{rsyslogd}]
├─sshd───sshd───sshd───bash───pstree
├─systemd-logind
├─systemd-udevd
├─upstart-file-br
├─upstart-socket-
├─upstart-udev-br
└─vsftpd
$ top
top - 04:13:13 up 1 min, 1 user, load average: 0.33, 0.14, 0.05
Tasks: 109 total, 1 running, 107 sleeping, 0 stopped, 1 zombie
Hi @kahing,
I've read this thread and others, and have not been able to get fstab to mount my S3 drive on boot either. If I boot my machine and then type (as root):
mount /root/s3
It works fine. My fstab is exactly what's in the README.md:
goofys#bucket /root/s3 fuse _netdev,allow_other,--file-mode=0666 0 0
But nothing shows in /var/log/kern.log. It looks like it doesn't even try. I've added --debug_s3 to my fstab:
goofys#bucket /root/s3 fuse _netdev,allow_other,--file-mode=0666,--debug_s3 0 0
...and rebooted, but still nothing shows. Of course, I do have the /root/.aws/credentials file set up correctly, which is why "mount /root/s3" works. Any breakthrough on this point?
Same for me, fstab and autofs not working. Any solution?
The autofs debug output shows:
mounted indirect on /mnt/goofys with timeout 300, freq 75 seconds
ghosting enabled
attempting to mount entry /mnt/goofys/panthermedia-test
2018/06/07 12:47:03.534282 s3.INFO Switching from region 'us-east-1' to 'eu-west-1' 2018/06/07 12:47:03.572287 main.INFO File system has been successfully mounted.
but ls /mnt/goofys/panthermedia-test/ does not return.
Not working for me either. Ubuntu 12.04, latest goofys.
The fstab line is:
/usr/local/sbin/goofys-latest#<bucket-name> /home/s3user/files fuse _netdev,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022 0 0
The file /root/.aws/credentials has valid creds and I even tried various file and dir permissions on it. I can manually do "mount -a" as root and get my bucket mounted.
But on reboot, it's not working. I see the pending processes in pstree output:
mountall,450 --daemon
-mount,486 -n -t fuse -o _netdev,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022 /usr/local/sbin/goofys-latest#
And the syslog has this:
Aug 17 20:34:18 HOSTNAME /usr/local/sbin/goofys-latest[1974]: s3.ERROR code=NoCredentialProviders msg=no valid providers in chain. Deprecated.#012#011For verbose messaging see aws.Config.CredentialsChainVerboseErrors, err=<nil>#012
If you look into the environment, is $HOME set correctly?
Yes, it's /root when I'm logged in as root. However, I have no idea if it's even set during the boot process.
Just an update...
I recently moved my goofys setup from an Ubuntu 14.04 server to one running Amazon Linux 2. Now, the auto-mount on boot stuff works. So, at least in my experience, it seems to be something weird with Ubuntu flavors.
If you see a stuck process after boot, you can look into /proc/&lt;pid&gt;/environ to see its environment variables.
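Since that file is NUL-separated, a small helper makes it readable. A sketch (the pgrep usage in the comment is just one way of finding the stuck PID; it assumes pgrep is installed):

```shell
#!/bin/sh
# Print the environment of a given PID, one variable per line.
# /proc/<pid>/environ is NUL-separated, so translate NULs to newlines.
print_env() {
    tr '\0' '\n' < "/proc/$1/environ"
}

# e.g. check whether HOME and any AWS variables are set for the stuck
# mount helper:
#   print_env "$(pgrep -o goofys)" | grep -E '^(HOME|AWS_)'
```

If HOME is missing or wrong in the stuck process, that would explain goofys failing to find ~/.aws/credentials at boot while a manual mount as root works.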