
Slow disk write speed for shared folders mounted via vmhgfs-fuse

smattheis opened this issue 7 years ago · 50 comments (status: Open)

I'm using VMware Fusion 10.1.1 on Mac OS X 10.12.6 with Ubuntu 16.04 (Linux ubuntu 4.4.0-121-generic) and observe that writing to shared folders mounted with vmhgfs-fuse is dramatically slower than writing to a folder on the guest OS's own disk.

I installed open-vm-tools-desktop with the following versions:

$ vmhgfs-fuse -V
vmhgfs-fuse: version 1.6.6.0

FUSE library version: 2.9.4
fusermount version: 2.9.4
using FUSE kernel interface version 7.19
$ dpkg -s open-vm-tools-desktop 
Package: open-vm-tools-desktop
Status: install ok installed
Priority: extra
Section: admin
Installed-Size: 454
Maintainer: Ubuntu Developers <[email protected]>
Architecture: amd64
Source: open-vm-tools
Version: 2:10.2.0-3ubuntu0.16.04.1~ppa6
Replaces: open-vm-tools (<< 2:10.0.0~)
Depends: init-system-helpers (>= 1.18~), libatkmm-1.6-1v5 (>= 2.24.0), libc6 (>= 2.14), libcairomm-1.0-1v5 (>= 1.12.0), libdrm2 (>= 2.4.3), libgcc1 (>= 1:3.0), libglib2.0-0 (>= 2.16.0), libglibmm-2.4-1v5 (>= 2.46.0), libgtk-3-0 (>= 3.9.10), libgtkmm-3.0-1v5 (>= 3.18.0), libice6 (>= 1:1.0.0), libsigc++-2.0-0v5 (>= 2.6.1), libsm6, libstdc++6 (>= 5.2), libudev1 (>= 183), libx11-6 (>= 2:1.4.99.1), libxext6, libxi6, libxinerama1, libxrandr2 (>= 2:1.2.0), libxtst6, open-vm-tools (= 2:10.2.0-3ubuntu0.16.04.1~ppa6), fuse
Recommends: xauth, xserver-xorg-input-vmmouse, xserver-xorg-video-vmware
Suggests: xdg-utils
Breaks: open-vm-tools (<< 2:10.0.0~)
Conffiles:
 /etc/vmware-tools/xautostart.conf 48addca654cf45120790657090edff00
 /etc/xdg/autostart/vmware-user.desktop a00f6f451c319d17d319763b915415ce
Description: Open VMware Tools for virtual machines hosted on VMware (GUI)
 The Open Virtual Machine Tools (open-vm-tools) project is an open source
 implementation of VMware Tools. It is a suite of virtualization utilities and
 drivers to improve the functionality, user experience and administration of
 VMware virtual machines.
 .
 This package contains the user-space programs and libraries that are essential
 for improved user experience of VMware virtual machines.
Homepage: https://github.com/vmware/open-vm-tools
Original-Maintainer: Bernd Zeimetz <[email protected]>

In a non-shared folder of the guest OS, I get the following speed:

$ dd if=/dev/zero of=/tmp/file bs=8
^C8325393+0 records in
8325392+0 records out
66603136 bytes (67 MB, 64 MiB) copied, 10.736 s, 6.2 MB/s

$ dd if=/dev/zero of=/tmp/file bs=512
^C4095313+0 records in
4095313+0 records out
2096800256 bytes (2.1 GB, 2.0 GiB) copied, 11.7533 s, 178 MB/s

However, in a shared folder, I get the following speed:

$ dd if=/dev/zero of=/mnt/hgfs/shared-folder/temp/file bs=8
^C103488+0 records in
103488+0 records out
827904 bytes (828 kB, 808 KiB) copied, 11.9862 s, 69.1 kB/s

$ dd if=/dev/zero of=/mnt/hgfs/shared-folder/temp/file bs=512
^C106471+0 records in
106471+0 records out
54513152 bytes (55 MB, 52 MiB) copied, 13.3213 s, 4.1 MB/s

That's a huge difference which, unfortunately, breaks some of my use cases for shared folders. Do I have something wrong in my setup?

smattheis avatar Apr 28 '18 11:04 smattheis
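One caveat with the dd figures above: without a sync flag, dd writing to the local disk largely measures the guest's page cache rather than the disk itself. A sketch of a more comparable benchmark (the target path is an assumption; point it at either a local directory or your hgfs mount):

```shell
#!/bin/sh
# Compare buffered vs. durable write speed for one target path.
# TARGET is hypothetical -- pass /tmp/file or a path under /mnt/hgfs.
TARGET="${1:-/tmp/ddtest.bin}"

# Buffered write: on a local disk this mostly measures the page cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 2>&1 | tail -n 1

# Durable write: conv=fdatasync makes dd flush before reporting a speed,
# so local and hgfs numbers are measured on an equal footing.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

rm -f "$TARGET"
```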

I'm seeing similar performance. I'd like to know the same as @smattheis. VMware team: it's been 4 months since this issue was filed; any insights?

pglombardo avatar Aug 17 '18 20:08 pglombardo

Could you please confirm how /tmp is mounted in the guest and on the host in your test? Is /tmp going to disk on both sides, or to memory?

There may not be anything wrong with your setup, because HGFS over FUSE is expected to be slower. However, we will have to look into any possible improvements in our implementation.

Thanks for reporting this issue. I have filed an internal bug to look into any performance improvement possible there.

ravindravmw avatar Aug 18 '18 23:08 ravindravmw

Of course; /tmp is a directory on the virtual disk mounted as / in the guest OS.

smattheis avatar Aug 19 '18 05:08 smattheis

Sorry for not getting to this before now.

I am actively going to look into this and see what is going on here.

steve-goddard-brcm avatar Aug 20 '18 14:08 steve-goddard-brcm

I have seen the same behavior previously with vmware-tools. I just ran smattheis' test and replicated his result (on Ubuntu 16.) Running top shows that vmhgfs-fuse is consuming excessive CPU:

jp@jp-virtual-machine:~$ top

top - 08:51:01 up 37 min,  1 user,  load average: 0.40, 0.12, 0.03
Tasks: 287 total,   2 running, 285 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.8 us,  3.1 sy,  0.0 ni, 88.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  9977528 total,  8503244 free,   864844 used,   609440 buff/cache
KiB Swap: 11532284 total, 11532284 free,        0 used.  8793128 avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                         
  2394 root      20   0  378528    836      8 S  78.7  0.0   0:25.47 vmhgfs-fuse                     
  2487 jp        20   0    7328    700    632 S  14.6  0.0   0:02.18 dd                              
  1071 root      20   0  406568  91504  35552 S   1.0  0.9   0:04.45 Xorg                            
  2071 jp        20   0  663132  37228  28232 S   0.7  0.4   0:01.70 gnome-terminal-                 
  1570 jp        20   0   86344   8688   8076 S   0.3  0.1   0:00.01 window-stack-br                 
  1590 jp        20   0  344972   6604   5384 S   0.3  0.1   0:00.81 ibus-daemon                     

nilnullzip avatar Aug 21 '18 15:08 nilnullzip

Same here indeed:

marco@ubuntu-vmware-bionic:~:✓ $ time sh -c "dd if=/dev/zero of=test.tmp bs=4k count=524288 && sync"
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB, 2,0 GiB) copied, 2,20079 s, 976 MB/s
real    0m3,876s
user    0m0,140s
sys     0m1,992s

marco@ubuntu-vmware-bionic:~:✓ $ time sh -c "dd if=/dev/zero of=/mnt/hgfs/ssd-host/test.tmp bs=4k count=524288 && sync"
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB, 2,0 GiB) copied, 86,801 s, 24,7 MB/s

real    1m27,047s
user    0m0,004s
sys     0m17,424s

marco@ubuntu-vmware-bionic:~:✓ $ time sh -c "dd if=/mnt/hgfs/ssd-host/test.tmp of=/dev/null bs=4k"
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB, 2,0 GiB) copied, 19,4225 s, 111 MB/s

real    0m19,660s
user    0m0,036s
sys     0m3,931s

3v1n0 avatar Sep 04 '18 17:09 3v1n0

I have the same issue with CentOS.

mbm-michal avatar Oct 16 '18 08:10 mbm-michal

Just to clarify and ensure that everyone is aware of what is going on when you run these benchmark tests. It is great to have people run them and check out the performance.

Points to consider:

  1. Shared Folders is a remote file system, not a local one: the disk is on the host, not in the VM, and it is shared with all the other running host applications, of which Workstation is only one.
  2. The Shared Folders client is a FUSE file system, not the kernel-mode file system driver we used to ship, which was difficult to support, required builds at install time, and broke constantly with OS updates.
  3. FUSE file systems are going to be slower due to the extra overhead of a round trip per file IO request from the FUSE kernel file system to the user-mode FUSE library and Shared Folders client.
  4. Because Shared Folders is a remote file system, each file IO request has to be packaged and sent to the server running in the host desktop Workstation application, which performs the file IO against the file on the host. For a write, the steps are:
     a. dd sends the write to the Linux kernel and the FUSE kernel file system.
     b. The FUSE kernel file system sends the write to the user-mode FUSE library and the registered FUSE Shared Folders client.
     c. The Shared Folders client creates the HGFS packet for the file IO and sends it to the host Workstation application. (Each request carries 34 bytes of protocol header.)
     d. The Shared Folders server in the host Workstation application receives the packet, decodes it, and issues the file IO against the correct shared file on the host.
     e. The Shared Folders server receives the host's reply, packages the result details, and sends them back to the Shared Folders client.
     f. The Shared Folders client decodes the reply and passes it back to the FUSE kernel file system.
     g. The FUSE kernel file system returns the write result to the user-mode application, dd.
  5. Each write operation therefore involves a lot of steps when doing remote file IO. The benefit of Shared Folders is that there isn't any network latency from going to a truly remote machine.
  6. The Shared Folders client receives file IO write requests as determined by the application (dd) and the FUSE kernel file system. Selecting bs=x means the Shared Folders client receives data writes of x bytes each.
  7. The Shared Folders client does not second-guess what the application sends it; it assumes the application knows how it wants to send the data. To minimize the number of round trips per write, a user can select a larger block size; about 60k is the maximum transfer size per write request. However, which applications are running on your host alongside Workstation, the state and type of the disk on which the files reside, and system caching and memory usage will all affect, and might inhibit, performance. So the block size you choose is best determined by running tests and seeing what typically works in your setup: small block sizes result in lots of overhead and round trips between the Shared Folders client and server (and, worse, between kernel- and user-mode transitions), while larger buffers might hit caching and memory issues, and system responsiveness might go down during the file operations.

Shared Folders is not perfect, and you are right that there is much more we can do to optimize some performance aspects, especially read/write operations. However, there are some bottlenecks, as shown above, which cannot be avoided without significant trade-offs in design. Having said all of that, I will still investigate what improvements we can make regardless.

Thanks Steve

steve-goddard-brcm avatar Oct 17 '18 11:10 steve-goddard-brcm
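Points 6 and 7 above can be made concrete with a small block-size sweep: per-request overhead dominates at small sizes. A sketch (the target directory is an assumption; substitute your hgfs mount point to test shared folders):

```shell
#!/bin/sh
# Run the same number of write requests at several block sizes and
# report the speed dd measures for each. Larger block sizes mean fewer
# client/server round trips for the same number of requests.
# TARGET_DIR is hypothetical -- pass an hgfs path to test hgfs.
TARGET_DIR="${1:-/tmp}"

for bs in 512 4k 64k; do
    printf '%-5s ' "$bs"
    dd if=/dev/zero of="$TARGET_DIR/bstest.bin" bs="$bs" count=512 \
        conv=fdatasync 2>&1 | tail -n 1
done
rm -f "$TARGET_DIR/bstest.bin"
```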

Thanks @lousybrit for the insights and many thanks for working on the problem. Some comments:

(1) I reported the problem because a use case of mine broke after I switched from VMware Fusion version 8 to version 10: operations on a local Git repository, which I kept in a shared folder between host and guest, timed out and corrupted my repository. I ran some tests and found that the performance characteristics above might be the explanation, especially because working in a non-shared folder within the VM was no problem.

(2) At the moment, however, I haven't encountered the problem for some time, although the performance values are still the same as initially reported. Of course, the tests measure throughput, while latency for write operations may actually be the critical characteristic that broke (or still breaks) my use case. Nevertheless, since throughput decreases significantly for small block sizes, I assume latencies must be high for small block sizes. Do you agree with this conclusion?

(3) Maybe Git writes many small chunks at high frequency (I remember that some rebase operations failed), and the high latencies caused a timeout in Git. So in my opinion, high throughput is not the only requirement for shared folders; low latency matters too. As I said, this worked fine for the long time I was using VMware Fusion 8 and was/is broken with version 10. Do you have an explanation for that?

In conclusion, I think VMWare is a tool mostly for developers and having a local Git repo in a shared folder is something very convenient and maybe a very frequent use case. I hope that this is enough to catch the attention of VMWare to keep working on the issue. From my side, I will do some Git operation tests to maybe have a reproducible case that you could test and work with.

smattheis avatar Oct 17 '18 11:10 smattheis

@lousybrit Thanks so much for working on this!

My use case was running an EDA tool (Quartus) on the Linux guest with the database in a shared folder. This is necessary in my case because some work is performed on Mac and some on Linux. I have no control over the type of I/O Quartus performs, so I can't change the block size. The net effect is that Quartus takes orders of magnitude longer to run, which renders it unusable with a shared folder.

For me the telling behavior here is that the FUSE process consumes essentially 100% of the CPU when running Quartus. The same is the case when running dd with smaller block sizes. So this is not a case of communication latency between guest and host. There is a huge amount of CPU being used on the guest, much more than seems reasonable for packaging up the data to send to the host, even one byte at a time. If I had to guess, this feels like a spin lock or some other type of non-productive loop in FUSE.

nilnullzip avatar Oct 17 '18 12:10 nilnullzip
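The per-process CPU attribution above can also be measured without top by sampling /proc directly. A rough sketch (defaults to the current shell's PID; pointing it at vmhgfs-fuse via pgrep, as in the usage comment, is an assumption about your setup):

```shell
#!/bin/sh
# Sample a process's CPU usage over a 2-second window by reading
# utime+stime (in clock ticks) from /proc/<pid>/stat before and after.
# Usage sketch: ./cpusample.sh "$(pgrep -x vmhgfs-fuse | head -n 1)"
PID="${1:-$$}"

t0=$(awk '{print $14 + $15}' "/proc/$PID/stat")   # fields 14/15: utime, stime
sleep 2
t1=$(awk '{print $14 + $15}' "/proc/$PID/stat")

hz=$(getconf CLK_TCK)                             # clock ticks per second
awk -v d=$((t1 - t0)) -v hz="$hz" \
    'BEGIN { printf "%.1f%% of one core\n", 100 * d / (2 * hz) }'
```

Note the field-number parsing is a simplification: it can misread processes whose command name contains spaces.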

Hi there, Thanks for the context and background to this.

Inline responses below.

Thanks @lousybrit for the insights and many thanks for working on the problem. Some comments:

(1) I reported the problem because for me a use case broke after I switched from VMWare Fusion version 8 to version 10. The use case was that operations on a local Git repository, which I mounted as shared folders between host and guest, timed out and corrupted my repository. I did some tests and found that the mentioned performance characteristics might be the explanation especially because it was no problem to work on non-shared folder within the VM.

This seems like a useful test case for us. I have seen Windows VM users with similar setups using MS Visual Studio projects. We have trouble with those too; Visual Studio does not do sensible file IO when run against remote file systems.

(2) At the moment, however, I haven't encountered the problem for some time although the performance values are still the same as initially reported. Of course, the tests measure the throughput while the latency for write operations may be actually the critical characteristic that broke (or still breaks) my use case. Nevertheless, since throughput decreases significantly for small block sizes I assume latencies must be high for small block sizes. Do you agree on this conclusion?

It really does depend on the file IO profile. Here with dd we can see that a number of write operations of a consistent, small block size adds up to a large number of round trips to the server. That makes a file update take longer, and so increases the overall latency of the update. Whether that is your issue is impossible to say without the Git profile. It would be the same as running Git with your "local" checked-out repository on a real networked server that is remote and slow to respond. I have not looked into that kind of setup with Git.

(3) Maybe Git writes (in some operations, I remember that some rebase operations failed) many small chunks in high frequency where high latencies caused a timeout in Git. Therefore, in my opinion, high throughput is not the only requirement for shared folders performance but also low latencies. As said, that used to work fine for a long time that I was using VMWare Fusion 8 and was/is broken with version 10. Do you have an explanation for that?

I don't think I have enough information yet to fully know why Git broke from Fusion 8 to 10. Was your Linux version (Ubuntu 16.04) the same for both? If not, what was it with Fusion 8? It could mean you migrated from a Shared Folders kernel-mode client to a FUSE client, for example. I have fixed a few issues with the FUSE client over that time period, but nothing I would expect to cause a break, unless it was there but being masked by some errant behavior that was since addressed. I will see if I can try a few scenarios out myself.

In conclusion, I think VMWare is a tool mostly for developers and having a local Git repo in a shared folder is something very convenient and maybe a very frequent use case. I hope that this is enough to catch the attention of VMWare to keep working on the issue. From my side, I will do some Git operation tests to maybe have a reproducible case that you could test and work with.

Agreed, and that would be awesome. Having a reproducible case always makes it easier to address issues, as we can be sure that we are definitely fixing the correct one.

Thanks Steve

steve-goddard-brcm avatar Oct 17 '18 13:10 steve-goddard-brcm

@lousybrit Thanks so much for working on this!

Thanks very much for the background and information. It could well be that what you are seeing is the application's database making lots of small writes. I can see top showing high CPU usage by the vmhgfs-fuse client in this situation too. I will look into how much of an issue this is, because each transition to the host server blocks the CPU it is running on; an unnecessarily large number of writes would cause lots of CPU blockages and hurt the responsiveness of the VM.

Do you have a single CPU VM?

Thanks Steve

My use case was running an EDA tool (Quartus) on the Linux guest with the database in a shared folder. This is necessary in my case because some work is performed on Mac and some on Linux. I have no control over the type of I/O Quartus performs, so I can't change the block size. The net effect is that Quartus takes orders of magnitude longer to run, which renders it unusable with a shared folder.

For me the telling behavior here is that the FUSE process consumes essentially 100% of the CPU when running Quartus. The same is the case when running dd with smaller block sizes. So this is not a case of communication latency between guest and host. There is a huge amount of CPU being used on the guest, much more than seems reasonable for packaging up the data to send to the host, even one byte at a time. If I had to guess, this feels like a spin lock or some other type of non-productive loop in FUSE.

steve-goddard-brcm avatar Oct 17 '18 13:10 steve-goddard-brcm

@lousybrit I just replicated on an 8-core guest to confirm. FUSE is taking 88% on the guest. Not sure if that percentage is of a single core or of all cores combined.

The host is reporting vmware-vmx process taking 136%. My laptop has 6 cores. I'm pretty sure, but not certain, that 1200% is max possible CPU usage.

Thanks! -Juan

top - 06:28:17 up 1 day,  1:54,  1 user,  load average: 0.65, 0.34, 0.24
Tasks: 313 total,   1 running, 292 sleeping,   0 stopped,  20 zombie
%Cpu(s): 10.6 us,  2.6 sy,  0.0 ni, 86.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16414388 total,  9892212 free,  3105928 used,  3416248 buff/cache
KiB Swap: 11532284 total, 11488552 free,    43732 used. 12863672 avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND    
 42938 root      20   0  304660    864     32 S  88.3  0.0   0:25.15 vmhgfs-fu+ 
 42983 jp        20   0    7328    792    724 S  14.3  0.0   0:04.03 dd         
  1108 root      20   0  463860 112928  52268 S   3.0  0.7  68:30.30 Xorg       
  1904 jp        20   0 1536116 119828  67352 S   1.0  0.7  30:44.83 compiz     
  5614 jp        20   0  709836  73364  33096 S   1.0  0.4  24:34.29 gnome-sys+ 
 15410 jp        20   0 24.118g 370460  71432 S   1.0  2.3  20:17.16 atom       
   425 root      20   0  190108   9236   8832 S   0.3  0.1   1:24.84 vmtoolsd   
  1775 jp        20   0  644172  37916  24724 S   0.3  0.2   0:08.19 unity-pan+ 
  2176 jp        20   0  663184  36028  26492 S   0.3  0.2   0:09.27 gnome-ter+ 
  3961 jp        20   0 3739220 1.406g 278832 S   0.3  9.0  11:44.78 quartus    
     1 root      20   0  119996   5460   4020 S   0.0  0.0   0:03.92 systemd    
jp@jp-virtual-machine:/mnt/hgfs/Downloads$ dd bs=8 if=/dev/zero of=./zeros
^C366217+0 records in
366217+0 records out
2929736 bytes (2.9 MB, 2.8 MiB) copied, 52.3898 s, 55.9 kB/s

nilnullzip avatar Oct 17 '18 13:10 nilnullzip

I just replicated on an 8 core guest to confirm. Fuse taking 88% on the guest. Not sure if that percent is of a single core or all cores combined.

%Cpu(s): 10.6 us, 2.6 sy, 0.0 ni, 86.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
42938 root 20 0 304660 864 32 S 88.3 0.0 0:25.15 vmhgfs-fu+

This is just one of the cores. As you can see, total CPU usage is summarized in the %Cpu header line, and that is only 10.6%.

ravindravmw avatar Oct 17 '18 17:10 ravindravmw

10.6 us, 2.6 sy, 0.0 ni, 86.8 id

User space: 10.6% System: 2.6% Idle: 86.8%

However, if your application is waiting for FUSE to complete an IO, it can't do anything with the available/idle CPUs but wait for the IO to finish.

ravindravmw avatar Oct 17 '18 18:10 ravindravmw
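The arithmetic behind that reading, as a quick sketch (the core count and the 88.3% figure are taken from the top listing above):

```shell
# In top, a process's %CPU is relative to ONE core, while the %Cpu(s)
# header line averages over all cores. So vmhgfs-fuse at 88.3% of one
# core on an 8-core guest contributes only about 11% of total capacity,
# consistent with the low utilization shown in the header.
cores=8
proc_pct=88.3
awk -v p="$proc_pct" -v n="$cores" 'BEGIN { printf "%.1f\n", p / n }'
# prints 11.0
```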

Apologies for "bumping" an old thread. Is there an official (or "official-like") status update on this?

Any workarounds apart from duplicating files?

stdedos avatar Apr 08 '19 11:04 stdedos

I'm having the same issue with a CentOS guest on a Windows 10 host. This is really important; I would like to get some feedback.

atalayk avatar May 08 '19 20:05 atalayk

I'm having the same issue with an Ubuntu 18.04 guest on a Windows 10 host. One thing I noticed: enabling -o big_writes and -o direct_io raised performance from 24 MB/s to 35 MB/s. Not huge, but interesting to note.

edude03 avatar Jun 02 '19 17:06 edude03
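For anyone wanting to try the same options persistently, a config sketch of an /etc/fstab entry; the `.host:/` share path and /mnt/hgfs mount point are the conventional vmhgfs-fuse defaults, and combining these particular options is an assumption based on the report above, not a verified recommendation:

```
# /etc/fstab (sketch; adjust paths and options to your setup)
.host:/  /mnt/hgfs  fuse.vmhgfs-fuse  defaults,allow_other,big_writes,direct_io  0  0
```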

There have been multiple answers about this from myself and other VMware colleagues.

Can you give some context about the application in question with regard to performance, and the steps you use to reproduce the issue you are referring to? As each application behaves somewhat differently, it can depend greatly on the context here.

It should also be noted that we set big_writes to true by default, so setting it explicitly on the command line should not have any effect. Behavior may be dictated by what application you are using and any arguments you are passing.

Can you supply more information?

Thanks Steve

steve-goddard-brcm avatar Jun 03 '19 20:06 steve-goddard-brcm

There have been multiple answers about this from myself and other VMware colleagues.

Can you give some context about the application in question with regard to performance, and the steps you use to reproduce the issue you are referring to? As each application behaves somewhat differently, it can depend greatly on the context here.

It should also be noted that we set big_writes to true by default, so setting it explicitly on the command line should not have any effect. Behavior may be dictated by what application you are using and any arguments you are passing.

Can you supply more information?

Thanks Steve

This is a generic response, which doesn't really help us. To answer, in return:

Can you give some context about the application in question with regard to performance, and the steps you use to reproduce the issue you are referring to?

My case is sharing a workspace from my Host (Ubuntu 16.04) home directory, to my Guest (Ubuntu 18.04) home directory. Most notably, git-status can take up to a minute to complete (latest git on ppa:git)

stdedos avatar Jun 03 '19 20:06 stdedos

My case is a CentOS 7 guest running on a Windows 10 host, with a LAMP stack on CentOS. The code is in a VMware shared directory (and it needs to be there). vmhgfs-fuse CPU usage starts out OK, but after 10-15 minutes it stays above 100%.

As Stavros Ntentos noted before, we don't need a generic response, but a real fix/patch.

[screenshot: top output showing vmhgfs-fuse CPU usage]

atalayk avatar Jun 07 '19 00:06 atalayk

Same here, it's not unworkable, but slow enough to be annoying.

Setup: Fedora 30 guest running in Windows 10 host. My development files (PHP) are located on the host, where my IDE is; the Linux guest runs the web server that reads the PHP files through the shared folder.

This setup is much slower than when the files are located directly on the guest (which is not possible here, as the IDE and other tools are on the host, or would at least require a rather complicated sync process).

Also, things such as git checkout or composer install are painfully slow inside the shared folder.

I can definitely understand and accept some overhead due to the translation/encapsulation/transport of filesystem commands on the guest filesystem to commands on the host filesystem. It's just that the overhead is right now big enough to give a hint that some kind of optimization must be possible, or at least mandate some investigation.

BenMorel avatar Jun 28 '19 11:06 BenMorel

I have the same... I purchased VMware Workstation because I thought it would work much better than the free VirtualBox or Hyper-V. I have learned about so many limitations that I'm really annoyed.

vmhgfs is really slow (no faster than vboxsf) and consumes nearly 100% of a core when transferring data. Perhaps I'd do better with SMB (like Hyper-V); I'll try that out.

cljk avatar Aug 13 '19 08:08 cljk
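For the SMB route, a config sketch of mounting a folder shared from the host in the guest via cifs-utils; the host IP, share name, mount point, and credentials file below are all assumptions to adapt:

```
# /etc/fstab (sketch) -- mount a folder shared from the host over SMB
# instead of using hgfs; requires the cifs-utils package in the guest.
//192.168.1.10/projects  /mnt/hostshare  cifs  credentials=/etc/smb-credentials,uid=1000,gid=1000  0  0
```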

Any news on this issue?

Debian 10 guest on a Windows 7 host.

If I use dd to make a write speed test, I get around 950 MB/s directly on the VM disk and only 170 MB/s on the mounted shared drive. It's really slow!

ghost avatar Sep 06 '19 17:09 ghost

@StephaneArguin Consider yourself lucky, as you're probably transferring reasonably big files, hence you still get a decent transfer speed and "only" a ~5x performance penalty.

When working with many small files though, I get ~25x slower speeds in the shared folder, compared to an ext4 folder in the VM (composer install with a few dozen dependencies takes ~5s on ext4, compared to > 2 min in the shared folder).

Plus, I have to note that symlinks are still not properly handled, and that inotify notifications are not forwarded to the guest machine.

That's 3 reasons why VMware shared folders are almost unusable for any serious software development at the moment.

If VMware wants to win the battle against upcoming WSL2, they'd better fix shared folders, and quick. Microsoft is fixing WSL issues at lightning speed right now, and whoever gets this right first will probably win developers' business.

At least, they'll win mine. I'd be ready to put quite some money into a hypervisor that fixed the 3 issues above, right now.

BenMorel avatar Sep 06 '19 17:09 BenMorel

@BenMorel My test was just the creation of one 1G file using 'dd'. But like you, I have the same problem when I'm using Git : it's really slowwww. Almost unusable. And I also have an issue if I try to untar a file with symlinks in it.

I think I'll recommend that my client stop paying for VMware Workstation licences and simply use VirtualBox for developers' machines, because there is no advantage to paying for this. Or better, authorize developers to use Linux machines (99.9% Java development) and save the VMware and Windows licenses. :)

ghost avatar Sep 06 '19 19:09 ghost

@StephaneArguin Consider yourself lucky, as you're probably transferring reasonably big files, hence you still get a decent transfer speed and "only" a ~5x performance penalty.

When working with many small files though, I get ~25x slower speeds in the shared folder, compared to an ext4 folder in the VM (composer install with a few dozen dependencies takes ~5s on ext4, compared to > 2 min in the shared folder).

EXACTLY THAT.

In dev you have several systems with LOTS of small files... and shared folder suck at this

cljk avatar Sep 06 '19 19:09 cljk

Just upgraded to macOS Catalina on my iMac with SSD drive. Using Fusion 11.5.0.

Having the same experience as all describe.

vmhgfs-fuse -V

vmhgfs-fuse: version 1.6.9.0

FUSE library version: 2.9.7 fusermount version: 2.9.7 using FUSE kernel interface version 7.19

princeofnaxos avatar Nov 04 '19 11:11 princeofnaxos

Same here VMWare Fusion Version 11.5.1 (15018442) running on macOS Catalina

Guest system: Ubuntu 16.04.2 LTS $ vmhgfs-fuse -V vmhgfs-fuse: version 1.6.6.0

FUSE library version: 2.9.4 fusermount version: 2.9.4 using FUSE kernel interface version 7.19

nuh-temp avatar Nov 15 '19 03:11 nuh-temp

I would really like to switch from virtualbox to vmware, but this issue is making that impossible. I cannot wait for 10 seconds for every git command to complete. Back to virtualbox I go...

VMWare Fusion Version 11.5.3 (15870345) on macOS Mojave

Guest system Ubuntu 18.04

$ vmhgfs-fuse -V vmhgfs-fuse: version 1.6.9.0

FUSE library version: 2.9.7 fusermount version: 2.9.7 using FUSE kernel interface version 7.19

dinkelk avatar Mar 25 '20 23:03 dinkelk