
Consider a data transfer piped over the network, on legacy systems using remote shell (RSH), like:

rsh host -l user tar -cf - /home/dir \| compress | uncompress | tar -xvf -

and this one in 'modern' systems:

ssh user@host tar -cf - /home/dir \| gzip | gunzip | tar -xvf -
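Note that the escaped pipe (`\|`) is parsed by the remote shell, so compression runs on the remote host, while decompression and extraction run locally. The stage layout can be checked with a purely local round trip (a sketch using throwaway temp directories, no network or remote host involved):

```shell
# Same stages as the pipeline above, run locally: tar stream -> gzip
# -> gunzip -> tar extract, with temp dirs standing in for /home/dir.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
echo data > "$tmp/src/file.txt"
( cd "$tmp" && tar -cf - src | gzip | gunzip | ( cd dst && tar -xf - ) )
cat "$tmp/dst/src/file.txt"
```

If the extracted file matches the original, the stages are wired correctly; on the real transfer, the quoting decides which side of the ssh connection each stage runs on.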

PERFORMANCE ISSUES:

I experienced very poor performance running a transfer between AIX 4.3 and AIX 5.3 using RSH. Even with idle (10/100) cards connected through an idle switch, I got about 350 Kbps transferring 5.4 GB.

When running this transfer between AIX 5.3 and Linux, now using SSH and gzip, the performance is much better, but it never reaches the network capacity (on a 1 Gigabit LAN I got an average of about 400 Mbps).

Are there ways to optimize network pipeline performance, perhaps by tuning pipe buffers or network block/buffer sizes?

Luciano
  • If doing it in a secure network, do it over netcat instead of ssh – Rui F Ribeiro Feb 17 '16 at 14:42
  • netcat isn't an AIX command (at least until AIX 5.3). – Luciano Feb 17 '16 at 14:45
  • Is it possible that the gzip/gunzip & compress/uncompress processes are slowing you down? Have you compared timings/speeds without them? – Jeff Schaller Feb 17 '16 at 14:49
  • nc, netcat, install/compile it, it is very useful. – Rui F Ribeiro Feb 17 '16 at 14:49
  • @JeffSchaller compressing/uncompressing do slow things down, as I often transfer things via network, and the difference is quite noticeable – Rui F Ribeiro Feb 17 '16 at 14:50
  • @JeffSchaller: Yes. On AIX 4.3 it was impossible to transfer without compression. On AIX 5.3, on some machines we needed to remove compression to speed things up, but on others keeping compression sped up the transfer. So although both run AIX 5.3, the hardware isn't the same. – Luciano Feb 17 '16 at 14:55
  • @RuiFRibeiro, on some machines we can install new software, on others we can't; these servers are being replaced and they are very critical (and legacy), so we cannot intervene on all of them (e.g. software installation), and some don't have a C compiler. This is hell. – Luciano Feb 17 '16 at 15:00
  • I understand that; I've had to deal with legacy systems, and have more migrations under the belt than I would like. – Rui F Ribeiro Feb 17 '16 at 15:19
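Following up on the netcat suggestion in the comments above: where `nc` is available and the LAN is trusted, a plaintext transfer skips the ssh encryption cost entirely. A sketch only, with hypothetical hostname and port, since flag spelling differs between netcat variants:

```shell
# On the receiving host (listen on an arbitrary port, here 9000):
#   nc -l 9000 | gunzip | tar -xvf -
# On the sending host:
#   tar -cf - /home/dir | gzip | nc desthost 9000
# Traditional netcat spells the listener "nc -l -p 9000" instead.
```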

2 Answers


I think you are getting CPU-limited rather than bandwidth-limited, at least on the ssh part.

I get about 45-50 MB/s with scp (copy over ssh) between two otherwise idle servers, as the encryption/decryption on the servers is the limiting factor. Add gzip/gunzip and you might drop further, depending on the number of cores available.

Uncompressed and unencrypted transfers have better numbers. You could try without the compression commands and see how that goes.
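One way to quantify the compression stage locally, before touching the network, is to run the tar stream with and without gzip and compare byte counts (and, prefixed with time(1), durations). A rough sketch on generated throwaway data, not the real /home/dir:

```shell
# Build ~8 MB of test data, then measure the stream size with and
# without the compression stage. Prefix each pipeline with time(1)
# to compare speed instead of size.
set -e
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/blob" bs=1M count=8 2>/dev/null
raw=$(tar -cf - -C "$tmp" blob | wc -c | tr -d ' ')
packed=$(tar -cf - -C "$tmp" blob | gzip | wc -c | tr -d ' ')
echo "raw: $raw bytes, gzipped: $packed bytes"
```

If the gzip stage barely shrinks your real data but costs significant CPU time, dropping it is a clear win.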

JRiis

Did you do a basic speed test yet to rule out compression and disk I/O?

That is pretty easy to do: connect from one of the affected hosts to the other via ftp and run
put "|dd if=/dev/zero bs=1M count=1000" /dev/null
which reads 1 GB from /dev/zero on one side and writes it to /dev/null on the other, testing pure network bandwidth. This is described in more detail e.g. at
AIXChange blog: Two Ways to Measure Network Performance
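The same idea works over ssh if ftp is not available; the receiving side simply discards the stream (user@host is hypothetical here). Running dd alone locally shows the throughput ceiling the sending side imposes before any network is involved:

```shell
# Over the network (sketch only, not run here):
#   dd if=/dev/zero bs=1M count=1000 | ssh user@host 'cat > /dev/null'
# Purely local baseline; GNU dd prints the transfer rate on completion:
dd if=/dev/zero of=/dev/null bs=1M count=100
```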

For the transfer via scp, you could try to reduce the encryption overhead by using a less demanding stream cipher (RC4) via -o Cipher=arcfour ... as an option to scp.

See also a related question suggesting the use of -C to enable compression, although from what I understand, in your case CPU performance is the limiting factor rather than network bandwidth, so compression will probably make things worse.

doktor5000