Add ephemeral state to mount fs without altering fstab
SUMMARY
Add `ephemeral` as a possible value for the `state` parameter. The `ephemeral` state allows end-users to mount a volume on a given path without altering an fstab file or creating a dummy one.
There have been debates about splitting this module into an `fstab` module and a `mount` module, but nothing has been done in 5 years. This is why I'd like to propose this feature.
Downside: the way the `posix.mount` module handles mount options prevents it from checking exactly whether the given opts perfectly match the mount options of an already mounted volume. To achieve this, the module would have to be aware of every `mount` default option, for all platforms. This is why `state=ephemeral` always returns `changed=yes`.
In other words, a remount will always be triggered if the volume is already mounted, even if the options appear to be the same. Using `state=unmounted` on a volume previously mounted with `ephemeral` behaves correctly.
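For clarity, a minimal sketch of the proposed usage (device, path, and filesystem type are made up for illustration):

```yaml
# Nothing is written to /etc/fstab at any point.
- name: Temporarily mount a volume (proposed ephemeral state)
  ansible.posix.mount:
    path: /mnt/scratch
    src: /dev/sdb1
    fstype: ext4
    state: ephemeral

# Unmounting works as usual, with no fstab cleanup needed.
- name: Unmount the volume
  ansible.posix.mount:
    path: /mnt/scratch
    state: unmounted
```

Per the limitation described above, re-running the first task against an already mounted volume always reports `changed=yes` and triggers a remount.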
ISSUE TYPE
- Feature Pull Request
Related issues:
COMPONENT NAME
mount
ADDITIONAL INFORMATION
Example use case
Sometimes it is handy to be able to temporarily mount a volume. I've seen this in a couple of companies where Ansible is used to generate reports and put them on network shares. However, some admins don't look into mount options such as `krb5` and `multiuser` for SMB shares. Being forced to use fstab-based mounts leads to clear-text passwords being stored more or less temporarily on the host filesystem, requiring "manual" deletion (with the hassle of using blocks, rescues, always, etc.). This feature responds to this use case by providing a way to mount a volume without having to alter an fstab file.
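To make the use case concrete, here is a hedged sketch of that report workflow (server, share, and file names are hypothetical):

```yaml
# Illustration only: host, share, and paths are made up. The point is that
# no credentials or mount entries ever land in /etc/fstab.
- name: Publish a report on an SMB share without touching fstab
  block:
    - name: Mount the share for the duration of the play
      ansible.posix.mount:
        path: /mnt/reports
        src: //fileserver.example.com/reports
        fstype: cifs
        opts: sec=krb5,multiuser
        state: ephemeral

    - name: Copy the generated report to the share
      ansible.builtin.copy:
        src: /tmp/report.csv
        dest: /mnt/reports/report.csv
        remote_src: true
  always:
    - name: Always unmount, even if a previous task failed
      ansible.posix.mount:
        path: /mnt/reports
        state: unmounted
```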
Description of changes
- Edit DOCUMENTATION section to add the `ephemeral` state
- Edit EXAMPLES section to add an `ephemeral` state example
- Add new function `_set_ephemeral_args`, used instead of `_set_fstab_args` when using the ephemeral state
- Add new function `_is_same_mount_src` to determine whether the volume mounted on the destination path has the same source as the one supplied to the module
- Add new function `_get_mount_info` to avoid redundant code between `get_linux_mounts` and `_is_same_mount_src`
- Modify `get_linux_mounts` to use the new function `_get_mount_info`; original behavior is preserved
- Integrate `ephemeral` parameter handling into the `mounted` handling, and add `if` statements to avoid I/O from/to fstab
- Add `ephemeral` as a possible value for the `state` parameter in `main()`
- Add `required_if` dependencies for the `ephemeral` state
@NeodymiumFerBore Thank you for the pull request. If it is possible, can you create integration tests for the `ephemeral` option?
- https://github.com/ansible-collections/ansible.posix/blob/main/tests/integration/targets/mount/tasks/main.yml
Hello @saito-hideki . Thank you for your interest in my PR. Yes I can write integration tests. I don't have much time right now. I will push the tests into this branch.
@NeodymiumFerBore thanks for this feature / option. Would be cool if you could write those tests so this can be added to Ansible ;-)
Hello. Glad you appreciate this feature, thanks for the feedback :) I had a lot of work and stuff going on lately, and I haven't taken the time to write them. Will try to do it this weekend!
Hello @saito-hideki . I started writing integration tests and ran into some weird behavior. Maybe this is intended, but it is not documented as far as I know.
I will keep writing integration tests while keeping the current behavior. If you have some time, please give me your opinion on the following. We may fix it with another PR.
EDIT: opened issue #322 as it was beyond the scope of this PR. Sorry for the ping
Rebased PR branch on upstream main branch
Tests fail on RHEL 7.9 remote devel and remote 2.12: the assertion `'/tmp/myfs' is mount` fails, though it passes on RHEL 8. Any idea that would save me some time?
Will go into troubleshooting this next week.
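For context, `mount` is one of Ansible's built-in path tests, so the failing check has roughly this shape (a sketch, not the exact task from the branch):

```yaml
# 'is mount' verifies that the path is an active mount point on the target.
- name: Assert that the filesystem is mounted
  ansible.builtin.assert:
    that:
      - "'/tmp/myfs' is mount"
```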
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 1m 11s
- ansible-test-sanity-docker-devel : SUCCESS in 9m 42s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 8m 21s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 10m 11s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 7m 39s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 15m 59s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 7m 57s
- ansible-test-units-posix : SUCCESS in 6m 02s
- ansible-ee-tests-latest : SUCCESS in 8m 06s
- ansible-ee-tests-stable-2.9 : SUCCESS in 12m 22s
- ansible-ee-tests-stable-2.11 : SUCCESS in 9m 42s
- ansible-ee-tests-stable-2.12 : SUCCESS in 9m 31s
- ansible-galaxy-importer : SUCCESS in 4m 47s
- build-ansible-collection : SUCCESS in 2m 58s
Hello @saito-hideki . Should I squash my commits into a single one and force push my branch? Rebase it again on upstream main?
Is there any required action from me before the merge can occur?
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 27s
- ansible-test-sanity-docker-devel : FAILURE in 8m 23s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 8m 18s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 12m 02s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 10m 23s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 8m 47s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 8m 55s
- ansible-test-units-posix : SUCCESS in 6m 04s
- ansible-ee-tests-latest : SUCCESS in 12m 39s (non-voting)
- ansible-ee-tests-stable-2.9 : SUCCESS in 17m 41s
- ansible-ee-tests-stable-2.11 : SUCCESS in 9m 11s
- ansible-ee-tests-stable-2.12 : SUCCESS in 11m 24s (non-voting)
- ansible-galaxy-importer : SUCCESS in 3m 12s
- build-ansible-collection : SUCCESS in 2m 06s
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 26s
- ansible-test-sanity-docker-devel : SUCCESS in 10m 00s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 9m 28s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 10m 44s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 9m 17s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 8m 10s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 9m 18s
- ansible-test-units-posix : SUCCESS in 6m 23s
- ansible-ee-tests-latest : SUCCESS in 12m 50s (non-voting)
- ansible-ee-tests-stable-2.9 : SUCCESS in 11m 59s
- ansible-ee-tests-stable-2.11 : SUCCESS in 13m 40s
- ansible-ee-tests-stable-2.12 : SUCCESS in 11m 21s (non-voting)
- ansible-galaxy-importer : SUCCESS in 5m 03s
- build-ansible-collection : SUCCESS in 3m 32s
Hello @saito-hideki . Is there a reachable NFS target in the test environment? If yes, what's its name?
It would highly simplify integration tests. Using loop devices on *BSD and Solaris is not transparent, and they all have different tools (`lofiadm`, `mdconfig`, `vnconfig`).
Integration tests in my last push work at least on (py2/py3) Solaris 11, NetBSD 8.2/9.2, FreeBSD 13, OpenBSD 6.9, but it's pretty heavy...
Thank you.
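To illustrate why the platform spread is heavy, each OS needs its own attach step before the mount itself can be tested. A FreeBSD-only sketch (image path and filesystem are assumptions):

```yaml
# FreeBSD only: Solaris needs lofiadm and NetBSD/OpenBSD need vnconfig,
# each with different arguments and device naming. Assumes /tmp/myfs.img
# already exists and contains a UFS filesystem.
- name: Attach a file-backed memory disk (FreeBSD)
  ansible.builtin.command: mdconfig -a -t vnode -f /tmp/myfs.img
  register: md_attach
  changed_when: true

- name: Mount the resulting device ephemerally
  ansible.posix.mount:
    path: /tmp/myfs
    src: "/dev/{{ md_attach.stdout }}"
    fstype: ufs
    state: ephemeral
```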
Hi @NeodymiumFerBore
> Integration tests in my last push work at least on (py2/py3) Solaris 11, NetBSD 8.2/9.2, FreeBSD 13, OpenBSD 6.9, but it's pretty heavy... Thank you.
Thank you for your hard work. Your integration tests worked not only on Linux but also on Solaris and *BSD (my test environment is Solaris 11.4, FreeBSD 13, NetBSD 9.2, and OpenBSD 6.9). Also, I would appreciate it if you squash the multiple commits into one :) I have left two trivial change requests and will set the approve flag once they are fixed.
@Akasurde if you have B/W, is it possible to review this PR? It looks pretty reasonable to me.
Hello @saito-hideki . Thank you for your warm feedback! :)) The requested changes have been addressed.
In my last push (squashed commits), the CI failed with the error `ERROR! Error when getting collection version metadata for community.general:1.3.5 from default (https://galaxy.ansible.com/api/) (HTTP Code: 429, Message: Too Many Requests Code: Unknown)`. I don't think the code is responsible for this error.
Is there any way I can trigger the CI again without pushing dummy stuff?
Build failed.
- ansible-changelog-fragment : SUCCESS in 56s
- ansible-test-sanity-docker-devel : SUCCESS in 12m 41s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 11m 51s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 19m 24s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 13m 19s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 11m 59s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 12m 10s
- ansible-test-units-posix : FAILURE in 9m 29s
- ansible-ee-tests-latest : SUCCESS in 14m 53s (non-voting)
- ansible-ee-tests-stable-2.9 : FAILURE in 15m 21s
- ansible-ee-tests-stable-2.11 : SUCCESS in 12m 44s
- ansible-ee-tests-stable-2.12 : SUCCESS in 13m 18s (non-voting)
- ansible-galaxy-importer : SUCCESS in 7m 06s
- build-ansible-collection : SUCCESS in 3m 29s
Closing and reopening for CI trigger
Build failed.
- ansible-changelog-fragment : SUCCESS in 1m 38s
- ansible-test-sanity-docker-devel : SUCCESS in 9m 14s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 9m 27s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 10m 03s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 9m 49s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 12m 38s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 10m 19s
- ansible-test-units-posix : FAILURE in 5m 50s
- ansible-ee-tests-latest : SUCCESS in 11m 39s (non-voting)
- ansible-ee-tests-stable-2.9 : FAILURE in 13m 50s
- ansible-ee-tests-stable-2.11 : SUCCESS in 11m 03s
- ansible-ee-tests-stable-2.12 : SUCCESS in 12m 27s (non-voting)
- ansible-galaxy-importer : SUCCESS in 4m 39s
- build-ansible-collection : SUCCESS in 5m 26s
@NeodymiumFerBore Thank you for reporting the CI issue. The galaxy issue you reported seems fixed (the galaxy server API was possibly unavailable at that time). However, there seems to be another problem on the CI side that is not directly related to this PR. We will look into the CI testing and let you know if we find any clue.
Closing and reopening for CI trigger
Build failed.
- ansible-changelog-fragment : SUCCESS in 51s
- ansible-test-sanity-docker-devel : SUCCESS in 8m 50s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 10m 01s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 12m 54s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 8m 36s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 9m 01s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 11m 14s
- ansible-test-units-posix : FAILURE in 6m 11s
- ansible-ee-tests-latest : SUCCESS in 12m 54s (non-voting)
- ansible-ee-tests-stable-2.9 : SUCCESS in 13m 33s (non-voting)
- ansible-ee-tests-stable-2.11 : SUCCESS in 9m 57s
- ansible-ee-tests-stable-2.12 : SUCCESS in 12m 44s
- ansible-galaxy-importer : SUCCESS in 4m 45s
- build-ansible-collection : SUCCESS in 4m 15s
Hi @NeodymiumFerBore, I have fixed one of the unit test issues, and the test team seems to have fixed several issues regarding the execution environment tests. Therefore, if it is possible, can you rebase this PR onto the latest main branch? Once you have rebased it, I think this PR will pass the CI tests.
Rebased PR branch onto upstream main at dc4da60affbe29e7706851ec73dab09bd11bdb44.
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 41s
- ansible-test-sanity-docker-devel : SUCCESS in 10m 20s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 10m 39s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 10m 24s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 9m 56s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 9m 48s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 10m 47s
- ansible-test-units-posix : SUCCESS in 6m 06s
- ansible-ee-tests-latest : SUCCESS in 11m 35s (non-voting)
- ansible-ee-tests-stable-2.9 : SUCCESS in 12m 12s (non-voting)
- ansible-ee-tests-stable-2.11 : SUCCESS in 13m 58s
- ansible-ee-tests-stable-2.12 : SUCCESS in 12m 06s
- ansible-galaxy-importer : SUCCESS in 5m 46s
- build-ansible-collection : SUCCESS in 4m 23s
Hello @saito-hideki , thank you for your quick fix on the CI, build succeeded 🥳 Let me know if there is anything I should do for the merge to occur!
@Akasurde I think this PR is reasonable. I would appreciate it if you could review it :) Thanks!
Hi @Akasurde . Thank you for your review, I applied your suggestions :) The ansible-galaxy API seems to have some "Too many requests" issues, which fail the CI.
I will close/re-open the PR to trigger the CI again in a couple of hours.
Also, let me know if I should squash this last commit again :) Thank you!
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 20s
- ansible-test-sanity-docker-devel : SUCCESS in 8m 04s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 9m 14s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 11m 10s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 8m 22s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 8m 25s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 8m 46s
- ansible-test-units-posix : SUCCESS in 6m 41s
- ansible-galaxy-importer : SUCCESS in 4m 36s
- build-ansible-collection : SUCCESS in 3m 06s
Closing and reopening for CI trigger
Build succeeded.
- ansible-changelog-fragment : SUCCESS in 15s
- ansible-test-sanity-docker-devel : SUCCESS in 8m 08s (non-voting)
- ansible-test-sanity-docker-milestone : SUCCESS in 8m 56s
- ansible-test-sanity-docker-stable-2.9 : SUCCESS in 9m 20s
- ansible-test-sanity-docker-stable-2.10 : SUCCESS in 8m 37s
- ansible-test-sanity-docker-stable-2.11 : SUCCESS in 7m 58s
- ansible-test-sanity-docker-stable-2.12 : SUCCESS in 8m 59s
- ansible-test-units-posix : SUCCESS in 5m 12s
- ansible-galaxy-importer : SUCCESS in 3m 55s
- build-ansible-collection : SUCCESS in 2m 59s
Hello @saito-hideki @Akasurde . I have the following error in the CI, on all Remote 2.9 targets: `ERROR! couldn't resolve module/action 'community.general.filesize'. This often indicates a misspelling, missing collection, or incorrect module path.` I see the same error on upstream CI, commit 2d3f55c.
Should I not worry about this? Do you have an idea why this module is not resolved on an older version of Ansible?
Also, I have some `ERROR: Timeout waiting for freebsd/12.0 instance` errors for all FreeBSD tests. Again, the same problem appears on upstream CI; I cannot do much about this, I guess.
Please let me know if I have to change something in my PR. As of now, I'm not sure what to do :(
Thank you!
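One possible workaround, purely an assumption on my side rather than anything settled in this PR, would be to create the backing file with plain `dd` so the unresolved `community.general.filesize` module is never needed:

```yaml
# Workaround sketch: Ansible 2.9 fails to resolve community.general.filesize,
# so create the backing file with dd instead. Path and size are made up.
- name: Create a 32 MiB backing file without community.general.filesize
  ansible.builtin.command: dd if=/dev/zero of=/tmp/myfs.img bs=1M count=32
  args:
    creates: /tmp/myfs.img
```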
Hi @NeodymiumFerBore, thank you for the heads-up. I'll check whether there is a problem on the CI process side.