Adrian
@dang @ffilz Thanks for your replies, I understand now.
@ffilz @dang I have run into a new point of confusion: I don't understand how the memory allocated in the two functions nfs4_add_clid_entry and nfs4_add_rfh_entry is released. The memory allocated in nfs4_add_clid_entry...
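For context, here is a minimal, generic sketch of the pattern the question is about: an "add" helper allocates an entry and hands ownership to a list, and a matching cleanup routine frees every entry when the list is torn down. The names (clid_entry, add_clid_entry, free_clid_list) are hypothetical illustrations only, not Ganesha's actual recovery-backend code.

```c
/* Hypothetical illustration of allocate-on-add / free-on-teardown.
 * Not Ganesha source; names are invented for the example. */
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

struct clid_entry {
	LIST_ENTRY(clid_entry) link;
	char clid[256];
};

LIST_HEAD(clid_list, clid_entry);

/* Allocate an entry and hand ownership to the list. */
static int add_clid_entry(struct clid_list *list, const char *clid)
{
	struct clid_entry *e = calloc(1, sizeof(*e));

	if (e == NULL)
		return -1;
	strncpy(e->clid, clid, sizeof(e->clid) - 1);
	LIST_INSERT_HEAD(list, e, link);
	return 0;
}

/* The matching release point: walk the list and free every entry. */
static void free_clid_list(struct clid_list *list)
{
	struct clid_entry *e;

	while ((e = LIST_FIRST(list)) != NULL) {
		LIST_REMOVE(e, link);
		free(e);
	}
}
```

In this pattern the answer to "where is the memory freed?" is the list-teardown routine, which is why the release point is not visible inside the add functions themselves.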
> @Haroldll Yeah, we have faced this... Please cherry pick this patch... https://gerrithub.io/c/ffilz/nfs-ganesha/+/1174224 Thank you. It looks great. I'll try to cherry-pick this patch and test again.
> @Haroldll Yeah, we have faced this... Please cherry pick this patch... https://gerrithub.io/c/ffilz/nfs-ganesha/+/1174224 After running with this patch merged for a period of time, the problem still occurred, although less frequently. I think there are other causes. See: Thread 82 (Thread 0x7f153b5ff700...
I saw the patch in the Ceph community, so I'm closing this.
I've encountered a strange problem. My Ganesha instance runs in a container that uses a bridged network, and the NFS client is reading and writing data. When the primary...
I tried version 5.7 and still have this problem. In addition, the ceph_alloc_state/ceph_free_state functions seem to be called only by NFSv4; the problem does not occur when using NFSv3...
I tried the new patch; the bug seems to be fixed, and Ganesha's memory usage is stable.
> No, the proper solution would be to use CLOCK_MONOTONIC for cases that cannot go backwards. However, the _real_ proper solution is to run NTP on all servers, and to...
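To illustrate the quoted suggestion, here is a minimal sketch of timing an interval with CLOCK_MONOTONIC, which only moves forward even if the wall clock is stepped backwards (for example by NTP). This is a generic POSIX example, not code taken from Ganesha itself, and the sleep() call is just a stand-in for the work being timed.

```c
/* Generic illustration of CLOCK_MONOTONIC interval measurement. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct timespec start, end;
	double elapsed;

	clock_gettime(CLOCK_MONOTONIC, &start);
	sleep(1);			/* stand-in for the work being timed */
	clock_gettime(CLOCK_MONOTONIC, &end);

	elapsed = (end.tv_sec - start.tv_sec)
		+ (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("elapsed: %.3f s\n", elapsed);
	return 0;
}
```

Because the monotonic clock is unaffected by wall-clock adjustments, such an interval can never come out negative, which is the property the quoted comment relies on.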