
I have NFS working much better now on RHEL 7.9, over a wired 1 Gbps LAN. The mount shows up as vers=4.1. Initially I was using `sync` and getting a reliable, consistent ~55 MB/sec for a `data_70gb.tar` file using `rsync -P`.
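To see what the client actually negotiated (version, protocol, and whether the mount is effectively `sync` or `async`), the live options can be read from `/proc/mounts`. A quick check, assuming a standard Linux NFS client:

```shell
# List NFS mounts with the options the kernel is actually using
# (vers=, proto=, wsize=, sync/async). Prints a message instead
# when no NFS filesystem is currently mounted.
grep nfs /proc/mounts || echo "no NFS mounts found"
```

`nfsstat -m` gives a similar per-mount summary where the nfs-utils package is installed.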

Since changing `sync` to `async` I get as high as 270 MB/sec for 1-2 minutes, and for smaller multi-gigabyte files that is also the average for the whole copy. But with my 70 GB tar file, which at ~250 MB/sec should take around 4.5 minutes, rsync ultimately reports an average of 160 MB/sec and the copy takes about 7 minutes total. About 5 times during the copy I noticed the speed drop to around 20 MB/sec for not much more than a minute, then ramp back up to around 250 MB/sec.

Why would this happen? I am using `proto=tcp`. The servers have 192 GB RAM and 12 Gbps SAS SSDs, and the tar file was newly created just before running rsync, so I believe Linux disk caching should be in effect and disk I/O should not be a factor (but correct me if I'm wrong here).
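One way to test the caching assumption is to force a write *through* the page cache and compare rates. This is a sketch with placeholder values (`TARGET` and `SIZE_MB` are not from the question): `conv=fsync` makes `dd` call fsync() before exiting, so the rate it reports includes flushing dirty pages all the way to stable storage, instead of stopping at the client's page cache as an `async` mount does.

```shell
#!/bin/sh
# Sketch: measure write throughput including the flush out of the
# page cache. TARGET and SIZE_MB are example values; point TARGET
# at a file on the NFS mount to test the real end-to-end rate.
TARGET=${TARGET:-/tmp/nfs_write_test}
SIZE_MB=${SIZE_MB:-64}

# conv=fsync: dd fsync()s the output file before exiting, so the
# reported rate reflects data actually reaching the server/disk.
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=fsync status=progress
```

If this number is close to the ~88 MB/sec end-to-end figure rather than the 250+ MB/sec bursts, the bursts are page-cache absorption, not link speed.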

pic of ssh window showing rsync: [screenshot of rsync transfer output]

  • "270 MB/sec for 1-2 minutes": That means something fantastic is happening! You're transporting 2.16 Gb/s over a link that only supports 1 Gb/s, physically. So, I call "mistake!" on your measured speed. Something is not right. – Marcus Müller Jan 09 '23 at 18:33
  • I agree with what you are implying, but `rsync -P` does show 270 MB/sec as it's running. I will try to get a screenshot to prove that. But is it something to do with how/where rsync is measuring? Not necessarily packet transfer being finished to the endpoint? I have many more files to copy so I will pay attention to the details. But it did report 160 MB/sec in the end, which I thought was above the 1 Gbps ÷ 8 bits/byte = 125 MB/sec max? – ron Jan 09 '23 at 18:38
  • As a reference point I get a solid 112 MB/sec using `scp`, whether it's a 10 GB or a 500 GB tar file. – ron Jan 09 '23 at 18:39
  • 112 MB/s is nearly exactly the wire speed of 1 Gb/s. rsync must buffer at some point, or are you moving within the same NFS export? – Marcus Müller Jan 09 '23 at 18:41
  • Pic link added above; rsync showed 440 MB/sec for a 3.7 GB rhel6.iso copy, but over 42 seconds that comes out to ~88 MB/sec? Still slower than `scp` at 112 MB/sec? Why? – ron Jan 09 '23 at 19:01
  • Does [this topic](https://unix.stackexchange.com/questions/146620/difference-between-sync-and-async-mount-options) help answer your question? This looks to be expected behavior for `async`, which lets writes be "completed" before the page buffer is written to disk. The write isn't actually completed though; it is just queued into the page buffer so the kernel scheduler can handle flushing it in the most efficient way. – Thegs Jan 09 '23 at 19:07
  • A `date; cp -r ...; date` shows 35 seconds elapsed time: 3700 MB / 35 s ≈ 105 MB/sec. – ron Jan 09 '23 at 19:08
  • **note:** the above pic is of a system I have easier access to screenshot and is a cat6 wired cable directly between two servers, no network switch involved, at 1gbps. – ron Jan 09 '23 at 19:13
  • I believe it to be some sort of buffering happening with rsync, and when it gets to 100% it can still be a while before the command actually completes. Same with using `pv` with `cp`, which shows 1.24 GiB/sec. A `dd status=progress` doing the same simple copy shows 257 MB/sec during the copy but ends up reporting 84.1 MB/sec. – ron Jan 09 '23 at 19:31
  • There seems to be no good way to see copy speed other than `date; cp from to; date` and calculating it yourself? – ron Jan 09 '23 at 19:34
  • I am not happy with NFS as async at 88 MB/sec when `scp` achieves 112 MB/sec – ron Jan 09 '23 at 19:38
  • Presumably this is a followup to [why is NFS copy speed half that of SSH scp](https://unix.stackexchange.com/q/730554/100397)? Good to hear that `async` helped – roaima Jan 09 '23 at 20:20
  • ^^ Yes, async sort of helped in that rsync no longer reports 55 MB/sec for the entire copy, but it seems the real speed with async is only 88 MB/sec, and I'm still disappointed since I can get 112 MB/sec with `scp`; the numbers reported by `rsync -P` **are** misleading. – ron Jan 09 '23 at 20:41
  • You're missing the `--times` (`-t`) or `--archive` (`-a`) flag for `rsync`. Without this it'll never consider the destination to be the same as the source – roaima Jan 10 '23 at 08:38
  • You're also using `rsync` from local to local, which isn't its best game – roaima Jan 10 '23 at 08:39
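Given the async page-cache behavior discussed in the comments, one way to get a trustworthy number is to time the copy *including* a final `sync`, so the clock stops only after the dirty pages have actually been flushed. A minimal sketch; `SRC` and `DST` are placeholder paths, not from the question:

```shell
#!/bin/sh
# Time a copy end-to-end, including the flush out of the page cache.
# SRC and DST are placeholder paths; point DST at the NFS mount.
SRC=${SRC:-/tmp/copy_src}
DST=${DST:-/tmp/copy_dst}

start=$(date +%s)
cp "$SRC" "$DST"
sync                                # block until dirty pages are written out
end=$(date +%s)

size_mb=$(( $(stat -c %s "$SRC") / 1048576 ))
elapsed=$(( end - start ))
[ "$elapsed" -gt 0 ] || elapsed=1   # avoid divide-by-zero on tiny copies
echo "copied ${size_mb} MB in ${elapsed}s (~$(( size_mb / elapsed )) MB/s)"
```

This is essentially the `date; cp; date` approach from the comments, with `sync` added so a copy that "finishes" into the page cache is not counted as done.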

0 Answers