If it's over SMB/Windows file sharing, then you might be looking at some kind of latency-induced limit. AFAIK SMB doesn't stream uploads; they occur as a sequence of individual write operations, which I'd guess also produce an acknowledgement from the other end. It's possible something like this (say, the client waiting for an ACK before issuing a new pending IO) is responsible.
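A quick back-of-envelope sketch of why serialized, ACK-gated writes cap throughput. The 64 KiB write size and 1 ms round trip below are illustrative assumptions, not measurements of any particular SMB setup:

```python
# If each write must wait for an ACK before the next is issued,
# throughput is bounded by (write size) / (round-trip time).
write_size = 64 * 1024   # bytes per serialized write (assumed)
rtt = 0.001              # seconds of round-trip latency (assumed)

throughput = write_size / rtt  # bytes/second when fully serialized
print(f"{throughput / 1e6:.0f} MB/s")
```

With those numbers you land around 65 MB/s regardless of how fast the link or the disks are, which is why this failure mode is easy to mistake for a disk or network cap.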

What does iperf say about your client/server combination? If it's capping out at the same level, then it's networking; otherwise it's something else in the stack.

I noticed recently that OS X file IO performance is absolute garbage because of all the extra protection functionality they've been piling into newer versions. No idea how any of it works; all I know is that some background process burns CPU just from simple operations like recursively listing directories.


The problem I describe is local (U.2 to U.2 SSD on the same machine, drives that could easily perform at 4 GB/s read/write, even when I pool them in RAID0 arrays that can do 10 GB/s).

Windows has weird copying behaviors. For example, if I pool some SAS or NVMe SSDs in a Storage Spaces parity layout (~RAID5), performance in CrystalDiskMark is abysmal (~250 MB/s), but a Windows copy stays stable at about 1 GB/s over terabytes of data.
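One plausible explanation for that gap is parity write amplification: small random writes on a RAID5-style layout cost a read-modify-write cycle, while large sequential copies can write full stripes. A rough model, with disk count and per-disk speed as made-up illustrative numbers rather than figures from the setup above:

```python
# Why parity layouts can look slow on random benchmarks yet
# sustain fast sequential copies (illustrative assumptions only).
disks = 4
per_disk = 500e6  # bytes/s each drive can stream (assumed)

# Small random write: read old data + read old parity +
# write data + write parity = ~4 I/Os per user write.
random_eff = per_disk / 4

# Full-stripe sequential write: only the parity column is overhead,
# so (disks - 1) columns carry user data in parallel.
seq_eff = per_disk * (disks - 1)

print(f"random: {random_eff/1e6:.0f} MB/s, sequential: {seq_eff/1e6:.0f} MB/s")
```

Under these assumptions the same array does ~125 MB/s on small random writes but ~1500 MB/s on full-stripe sequential writes, which is at least the right shape for the CrystalDiskMark vs. bulk-copy discrepancy.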

So it seems that whatever they do hurts in certain cases and severely limits the upside as well.
