I have encountered this with rsync in the past as well. The fix for me was running it from within a screen session, which keeps the connection to the remote server alive even if your local shell disconnects.
screen -LS rsync
[execute your rsync command]
Ctrl-A then D to detach from the session
You can check the status by running screen -x rsync (or whatever you name the session -- naming it isn't required). This re-attaches your current shell to that session. Just remember to detach again after you have checked the status so that it keeps running in the background.
You can also launch the command under screen in the background in one fell swoop with [someone please correct me if I'm wrong] screen -dm 'command'. You may want to man screen before trying that last one.
EDIT:
I am editing my answer because you have confirmed that screen provides no assistance in this scenario. However, in response to my comment suggesting you try scp and see what kind of results you get, you replied that, oddly enough, it worked just fine.
So my new answer is this: use scp -- or ssh (with tar) -- instead of rsync
Granted, scp doesn't support rsync's vast feature set, but you'd actually be surprised to discover just how many of its features are nearly identical to rsync's.
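For instance, scp covers several of rsync's most commonly used switches. A quick sketch (the host and paths below are placeholders, not from my actual setup):

```shell
# Recursive copy (-r) that preserves modification times and modes (-p),
# with compression on the wire (-C) -- roughly the equivalent of
# rsync -rtz for a one-shot pull:
scp -rpC user@remotehost:/path/to/remote/logs /var/www/remotelogs
```

What it can't give you is rsync's delta transfer -- scp always re-sends whole files.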
Real world scenarios for scp and other alternatives to rsync:
A while back, I was tasked with creating a shell script that would pull logs from our production servers and store them locally on a web server so that developers could access them for troubleshooting purposes. After trying unsuccessfully to get the Unix team to install rsync on our servers, I came up with a workaround using scp that worked just as well.
That being said, I recently modified the script so that all it uses is ssh and tar -- GNU tar/gtar, to be exact. GNU tar supports many of the options that you will actually find in rsync, such as --include, --exclude, permission/attribute preservation, compression, etc.
The way I now accomplish this is by ssh-ing to the remote server (via pubkey auth) and using gtar -czf - [other options such as --include='*.log' and --exclude='*core*', etc.] -- this writes all of the info to stdout, which is then piped [locally] to tar -xzf, so that no changes are made on the remote production server and all of the files are pulled as-is to the local server. It's a great alternative to rsync in this case. The only important things neither tar nor scp support are incremental backups and the block-level error checking that rsync features.
The full command I am referring to when using ssh and tar would be something like this (remote is Solaris 10; local is Debian, for what it's worth):
cd /var/www/remotelogs
ssh -C user@remotehost "cd /path/to/remote/app.directories; gtar -czf - --include='*.log' --exclude='*.pid' --exclude='*core*' *" | tar -xz
In your scenario it would be the opposite: run tar -cf - locally and pipe it to the remote server via ssh user@remotehost "tar -xf -". There is another answer that references this type of behavior but doesn't go into as much detail.
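A minimal sketch of that reversed (push) direction. The paths are placeholders, and I've used sh -c to stand in for the remote shell so the pipeline can be tried locally; for a real transfer, replace the sh -c stage with ssh user@remotehost:

```shell
# Set up a throwaway source and destination tree for the demo:
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo "sample" > /tmp/tar-demo/src/file.log

# The pattern: tar writes an archive to stdout (-f -), which is piped
# into a second tar that extracts it on the other side. Swap `sh -c`
# for `ssh user@remotehost` to push across the network instead:
tar -cf - -C /tmp/tar-demo/src . | sh -c "tar -xf - -C /tmp/tar-demo/dst"
```

Nothing ever lands on disk as an archive; the stream goes straight from one tar to the other.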
There are a few other options that I have included to speed things up. I timed everything relentlessly to get the execution time as low as possible. You would think that using compression with tar would be pointless, but it actually speeds things up a bit, as does using the -C flag with ssh to enable ssh compression as well. I may update this post at a later date to include the exact command that I use (which is very similar to what I posted), but I don't feel like getting on VPN at the moment since I'm on vacation this week.
On Solaris 10, I also use -c blowfish, because it is the quickest cipher to authenticate with and also helps speed things up a tad, but our Solaris 11 hosts either don't support it or have this cipher suite disabled.
Additionally, if you choose to go with the ssh/tar option, it would be a good idea to implement my original suggestion of using screen if you are doing a backup that will take a while. If not, make sure the keepalive/timeout settings in your ssh_config are tuned just right, or this method will also be very likely to cause a broken pipe.
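For reference, the keepalive settings I mean look something like this in ~/.ssh/config (the host name and values here are illustrative -- tune the interval and count to your connection):

```
Host remotehost
    # Send an application-level keepalive probe every 30 seconds...
    ServerAliveInterval 30
    # ...and only give up after 6 unanswered probes (~3 minutes):
    ServerAliveCountMax 6
```

With those in place, a quiet long-running transfer is far less likely to be killed by an idle timeout somewhere along the path.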
Even if you go with scp, I always find it a best practice to use screen or tmux when doing an operation of this sort, just in case. Many times I don't follow my own advice and fail to do this, but it is indeed a good habit to use one of these tools to ensure that the remote job doesn't get screwed up by your active shell session somehow disconnecting.
I know you want to figure out the root cause of your rsync issue. However, if this is really important, these are two great workarounds that you can experiment with in the meantime.