I have about 350 GB that I wanted to copy from a server to a new 1 TB external SSD I bought for the task. I used rsync, but the 1 TB disk ran out of space during the copy, which was odd. So I reformatted (exFAT, since I want access on both Mac and Linux) and tried again, and noticed that the disk space used (du) was a lot more than the files themselves warranted (ls). Checking Stack Exchange, it seemed that 'sparse files' or thin provisioning might explain it, but no: sparse files use less disk space (as seen by du) than the files need (as seen by ls).

Finally thinking to check the du of individual files, it became apparent that even the smallest file was taking up 128K. This is apparently due to the default cluster size when formatting as exFAT, and since I've got a few million small files in the archive I'm transferring, I can't afford that waste.

So on the Mac I tried setting a 1K cluster size:
diskutil info
diskutil unmountDisk disk4
newfs_exfat -R -v JR_SSD_1Tb -b 1024 /dev/disk4
which seemed OK (according to the diskutil report), but the Linux machine didn't automount the SSD, and a manual mount ran into an error. Thinking the Mac CLI utility might not be entirely compatible, I tried formatting on Linux instead, but that doesn't seem to do the job either: when I create a new test file of a few bytes, it's got a 512K minimum size on disk.
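For scale, here's why the cluster overhead alone roughly accounts for the failed copy (3,000,000 is just a stand-in for my "few million" small files):

```shell
# Worst case, every small file occupies at least one full cluster.
files=3000000                 # hypothetical count; the real archive is "a few million"
cluster=$((128 * 1024))       # 128 KiB default cluster size on the first format
echo "$((files * cluster / 1024 / 1024 / 1024)) GiB"   # prints "366 GiB"
```

So with the 128K default, the slack space alone exceeds the 350 GB payload, which would explain the disk filling up.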
sudo mkfs.exfat -s 1024 -n JR_SSD /dev/sda
mkexfatfs 1.3.0
Creating... done.
Flushing... done.
File system created successfully.
cat > /media/jeremy/JR_SSD/test.txt
ls -l /media/jeremy/JR_SSD/test.txt
-rwxrwxrwx 1 jeremy jeremy 4 Aug 25 20:14 /media/jeremy/JR_SSD/test.txt
du -h /media/jeremy/JR_SSD/test.txt
512K /media/jeremy/JR_SSD/test.txt
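I suspect the -s flag is the culprit: if I'm reading the exfat-utils man page right, -s takes sectors per cluster, not bytes, so on a disk with 512-byte sectors, -s 1024 asks for exactly the 512 KiB clusters that du is reporting:

```shell
# Assuming -s means sectors per cluster (per the exfat-utils man page)
# and a 512-byte logical sector size:
sector=512
echo $((1024 * sector))   # what "-s 1024" gave: 524288 bytes = 512 KiB
echo $((2 * sector))      # "-s 2" would presumably give 1024 bytes = 1 KiB
```

If that's right, something like `sudo mkfs.exfat -s 2 -n JR_SSD /dev/sda` should give 1 KiB clusters, but I'd like confirmation, and it still wouldn't explain why the Mac-formatted disk failed to mount on Linux.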
So, how do I format this drive as exFAT with a small (say 1K) cluster size that both the Mac and the Linux box will accept?