If you ran the following, what would happen?
# Do not run.
# cat /dev/random > ~/randomFile
Would it be written until the drive runs out of space, or would the system see a problem with this and stop it (like with an infinite symlink loop)?
It writes until the disk is full (usually some space remains reserved for the root user). But since the pool of random data is limited, this could take a while.
If you need a certain amount of random data, use dd.
For 1MB:
dd if=/dev/random iflag=fullblock of=$HOME/randomFile bs=1M count=1
Other possibilities are mentioned in answers to a related question.
However, in almost all cases it is better to use /dev/urandom instead.
It does not block when the kernel thinks it has run out of entropy.
For better understanding, you can also read myths about /dev/urandom.
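For example, the same 1MB file as above, but from the non-blocking device (the path `$HOME/randomFile` is just the example path from the question; `iflag=fullblock` is not strictly needed for /dev/urandom, but it does no harm):

```shell
# 1MB of random data; /dev/urandom never blocks, so this finishes immediately
dd if=/dev/urandom iflag=fullblock of="$HOME/randomFile" bs=1M count=1
```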
Installing haveged speeds up /dev/random and also provides more entropy to /dev/urandom.
EDIT: dd needs the fullblock option because /dev/random (unlike /dev/urandom) can return incomplete blocks when the entropy pool is empty.
If your dd does not support units, write them out:
dd if=/dev/random iflag=fullblock of=$HOME/randomFile bs=1048576 count=1
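To confirm you got exactly the amount you asked for, count the bytes with `wc -c` (this sketch reads from /dev/urandom so it runs without blocking; the size check works the same for either device):

```shell
# generate 1MB (1048576 bytes) and verify the byte count
dd if=/dev/urandom iflag=fullblock of="$HOME/randomFile" bs=1048576 count=1
wc -c < "$HOME/randomFile"   # prints 1048576
```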