
While trying to make an automatic OS installer (via live CD) I stumbled over a phenomenon that I'm not sure how to correct.

I mount a network share containing an ISO file at /mnt/source. Then I run the command:

gunzip -c /mnt/source/myRemoteFolder/myImage.iso | ntfsclone -r -O /dev/sda1 -

Now the problem is that when, for example, the file or the folder does not exist, the whole command hangs indefinitely.

This means that if the network connection drops (for whatever reason), I have a problem on my hands, as the process just hangs.

Now my question is: what can I do to avoid the hang? The problem cases, as I see them, are:

  • The mount fails; this I can check beforehand (so no problem here)
  • The network connection is lost just before gunzip is run (hang)
  • The network connection is lost DURING the gunzip run (hang)

As a note: the reason the command is structured this way is that I don't want to copy the file locally, since I'm running from a live CD and want the image written to the only hard drive available. As additional info, I'm using Puppy Linux as the distribution, but I didn't tag it because I don't think it is a likely cause of this behaviour of those two programs.

Thomas
  • If I get it right, you observe that the NFS mount point causes the command to run indefinitely (e.g. because it doesn't seem to time out) whenever the NFS share becomes unavailable, which is what you'd like to avoid, correct? –  Dec 23 '15 at 11:05
  • Yes. I first observed the hang when I accidentally put myImage.iso in the wrong folder on the external server, so the file myImage.iso didn't exist in myRemoteFolder. That is how I found out that when there is a problem with the file or mount, the command hangs. Since I'm using a network connection, it would be very problematic if even small hiccups made the script run indefinitely (i.e. hang without telling the user that a problem is occurring). – Thomas Dec 23 '15 at 11:47
  • I have also experienced those kinds of lockups from time to time and wondered if they could be avoided, but I'm afraid that's an inherent drawback of NFS. However, this is to be confirmed. I'll check the NFS documentation in case modern versions have addressed that problem. Are you bound to a specific protocol version? –  Dec 23 '15 at 16:39
  • @nasha Not that I'm aware of. I'm using a standard Puppy to start with, but I guess I can upgrade things to use a specific NFS version if necessary. – Thomas Dec 23 '15 at 17:37
  • I believe [this section](http://web.mit.edu/rhel-doc/5/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-options.html) of RedHat documentation might be of some interest to you. –  Dec 24 '15 at 20:52
  • I think `timeout` from coreutils can be the workaround you may consider. – pawel7318 Dec 25 '15 at 16:24
  • @pawel7318 Interesting idea, although wouldn't the timeout then mean that if the connection is too slow and the command takes long to finish, it is stopped because of the timeout? (It doesn't differentiate between normal runtime and a hang.) – Thomas Dec 25 '15 at 20:12
  • Yes, it'll work exactly that way, which is why I wrote you 'may consider' :) Try to find something to download the file first, something that can notice it is stuck and time out then (for example `wget --read-timeout` can do that). – pawel7318 Dec 25 '15 at 20:27
  • This might help: http://unix.stackexchange.com/questions/29196/automount-nfs-autofs-timeout-settings-for-unreliable-servers-how-to-avoid-han – Keyshov Borate Jan 01 '16 at 13:35
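To make the mount-option suggestions from the comments concrete: a "soft" NFS mount makes stalled RPCs fail with an I/O error instead of retrying forever, which is usually acceptable for a read-only mount like this one. A sketch (server name, export path, and timings are placeholders, not from the question):

```shell
# Sketch, not verified on Puppy: soft-mount the share read-only so a
# dead server eventually produces an I/O error rather than a hang.
# timeo is in tenths of a second; retrans is the retry count before
# the error is reported to the caller.
mount -t nfs -o ro,soft,timeo=100,retrans=3 server:/export /mnt/source
```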
