
First, let me explain the scenario.

We have two servers, both running Ubuntu 14.04 LTS, and each has a 70 TB drive mounted at /storage/. It contains many files of around 30 GB each, plus other files as well. Both servers are remote, and I want to move all of this data to the /storage/ drive on the other server.

Is there any way to do this quickly and reliably, so that there is no data loss?

I once tried to move a single file from one server to the other, which worked fine using this link. Any help would be appreciated.

Junaid Farooq
  • then try that again, it will work fine. – Ipor Sircer Nov 30 '16 at 12:12
  • Thanks for the -1, but I am talking about 1: moving, NOT copying. 2: That was a small file of a few KB; now I am talking about 70 TB. – Junaid Farooq Nov 30 '16 at 12:13
  • 1. What's the difference? 2. What's the difference? – Ipor Sircer Nov 30 '16 at 12:15
  • @ijunaidfarooq Moving in this respect is a two-step procedure: 1) copying, 2) removing the original data. Use `scp` or `rsync` - double-checking the md5 checksum is not a necessity ([see here](http://unix.stackexchange.com/questions/253233/do-i-need-to-check-for-file-corruption-once-scp-is-done)). Just make sure `scp` finished correctly and then you can go on deleting the original files. – FelixJN Nov 30 '16 at 12:19
  • Don't you know what the difference is between copying and moving? Don't you know what is meant by 34 bytes versus 70 TB? – Junaid Farooq Nov 30 '16 at 12:20
  • Are the two servers in the same physical location? If not, seriously consider carrying the data on a tape (or several) rather than migrating it over a network. You really need to do the maths to see how long it will take to copy that much data by the available network bandwidth before you bother to take that route. Please can you update your question to give some physical details (same datacentre? available bandwidth? single 70TB file or big filesystem? available downtime on source to ensure an accurate copy?) I move tens of TB fairly regularly so I don't think I'm way out of your league. – roaima Nov 30 '16 at 13:20
  • What is the link speed between the two servers –  Nov 30 '16 at 22:43
  • https://what-if.xkcd.com/31/ and... perhaps you should consider a really wide bandwidth with a very slow connection speed. Then just delete it off the old location. – Jeutnarg Apr 28 '17 at 16:06

3 Answers


Is there any way to do it quickly

It depends on the network connection speed between the source and destination servers.

70 TB is a lot of data. It might be worthwhile to physically disconnect the drive from the source server and remount it on the destination server.

and reliably, so that there will be no data loss?

If you copy the files via scp, their integrity is ensured by the cryptographic protocols the program uses. So as long as scp finishes with a zero exit status, you know that everything went well.
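A minimal sketch of that copy-then-delete pattern. Here a local directory stands in for the remote side so the script is self-contained; with real servers, the `cp` line would be an `scp` to a destination such as `user@server2:/storage/` (a placeholder, not a host from the question):

```shell
#!/bin/sh
# Sketch: "move" = copy, check the exit status, then delete the original.
# A local directory stands in for the remote server; in practice the copy
# would be e.g.:  scp /storage/file.bin user@server2:/storage/
set -e
mkdir -p /tmp/src_demo /tmp/dest_demo
echo "payload" > /tmp/src_demo/file.bin

if cp /tmp/src_demo/file.bin /tmp/dest_demo/; then
    # the copy exited with status 0, so it is safe to remove the source
    rm /tmp/src_demo/file.bin
fi
```

The point is simply that the delete only runs when the copy reports success, which is the "moving = copying + removing" procedure from the comments.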

dr_

I'm not sure which of the solutions from the question you linked you used, but I would strongly suggest using rsync together with screen, as this operation will take a long time.

With this amount of data something might go wrong (for example, the connection might drop), and you would have a hard time finding the point from which to restart the procedure. rsync handles this for you.

As for speed - that depends on the network connection, and yes, moving the physical drive might also be a solution. Take a look at rsync's compression switch; it can help if your files compress easily. If they don't, going without compression might be the better idea (no unnecessary CPU usage).
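A sketch of how those pieces fit together. The host `user@server2` and the exact flags are illustrative assumptions, and the script only echoes the commands rather than executing them against a real server:

```shell
#!/bin/sh
# Print-only sketch: commands are echoed, not executed, so the
# hostname user@server2 can stay a placeholder.

# Start a detachable session so a dropped ssh connection does not
# kill the transfer (reattach later with: screen -r transfer).
echo "screen -S transfer"

# -a archive mode (permissions, times, symlinks), -v verbose,
# -P keep partial files and show progress (lets rsync resume),
# -z compress -- worthwhile only if the files compress easily.
cmd="rsync -avPz /storage/ user@server2:/storage/"
echo "$cmd"
```

If the data is already compressed (media, archives), dropping `-z` avoids burning CPU for no bandwidth gain.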

Kalavan

Simply

  1. Make a backup.
  2. Configure the new server: enable ssh access as root (temporary, and with public key only), install rsync, and create the directory /storage.
  3. Transfer using rsync (on Slackware ssh is enabled by default; if it is not, use rsync -e to specify the remote shell).
  4. After the transfer, check with df and du that everything is OK, then disable ssh access as root (keep ssh access only as a regular user).
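The steps above can be sketched as follows. `newserver` is a placeholder hostname, and `run()` only prints each command (remove the `echo` to execute them for real):

```shell
#!/bin/sh
# Print-only sketch of steps 2-4; run() echoes each command instead of
# executing it, so nothing touches a real server.
run() { echo "+ $*"; }

# Step 2: on the new server -- install rsync and create the target dir
run apt-get install -y rsync
run mkdir -p /storage

# Step 3: from the old server -- transfer over ssh as root
run rsync -aH -e ssh /storage/ root@newserver:/storage/

# Step 4: compare sizes on both sides, then disable root ssh access
run du -sh /storage
run df -h /storage
```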
roaima
elbarna
  • rsync is fine for smaller amounts of data but will take forever on 70 TB of data. – fpmurphy Nov 30 '16 at 17:02
  • @fpmurphy1 it depends on so many factors that the question asker hasn't provided. I moved several TB of data from New Zealand to France earlier this year. It took a month (so would have been faster with a disk posted through the mail) but time wasn't important. – roaima Nov 30 '16 at 17:11