Copy hundreds of thousands of files from Mac to NAS

by OgreSwamp   Last Updated October 14, 2018 19:01

I'm trying to transfer my Aperture library to my NAS. It contains a few hundred thousand files (quite small ones).

After OS X finishes copying the big files, the estimated time of the copy operation goes from 2 hours to 4–5 days (and growing).

I know that a solution could be to tar them and copy the tarball, but the problem is that the library I want to copy is about 80GB and I have only 10GB left on my Mac.

Any ideas on how to copy them under these conditions?



4 Answers


You can use the rsync command line tool to do a one-way synchronization of your files. The way rsync works makes it easy to interrupt the process if you don't have the patience for it—you can easily resume it later on. Only the files that are still to be transferred will be copied.

You need to open Terminal.app, and then call rsync like so:

rsync -avh --progress ~/Pictures/"Aperture Library" "/Volumes/NAS/"

Here, the first path points to your library, which by default lives in Pictures. Note that the ~ has to stay outside the quotes so the shell can expand it to your home folder. If you're unsure, you can drag and drop the library onto the Terminal window and it will fill in the path for you. The same goes for the path to your NAS.

The -a option enables the archive mode, which sets some defaults, including recursive copying. -v will make the command more verbose. -h turns on human-readable file sizes.

Rsync will show you a progress meter. If you want to cancel the process, press Ctrl+C. You can run the command again later and rsync will continue where it left off.
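
If you'd like to preview what would be transferred before committing to the full copy, you can add rsync's --dry-run (-n) flag; the paths below are the same assumed defaults as above:

rsync -avhn --progress ~/Pictures/"Aperture Library" "/Volumes/NAS/"

Nothing is copied in this mode; rsync just lists the files it would transfer.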

slhck
August 22, 2013 21:20

The problem probably is that you are working with many small files, and most copying tools have an overhead per file. It's possible to tar the files up, pipe the data over to your destination, and unpack the archive on the other end, without ever actually creating a tarball anywhere on the system.

tar cf - -C ~/Pictures "Aperture Library" | (cd "/Volumes/NAS/" && tar xf -)

The downside of this is that you won't have any kind of progress meter, but it should be significantly faster than a more conventional copy.
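
If you do want a rough progress indicator, one option (my own addition, not part of the original answer) is to pipe the stream through pv, which you would have to install first, e.g. via Homebrew:

tar cf - -C ~/Pictures "Aperture Library" | pv | (cd "/Volumes/NAS/" && tar xf -)

pv prints the throughput and the amount of data that has passed through the pipe; it won't know the total size unless you tell it with -s.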

evilsoup
August 22, 2013 21:56

If the external device supports a full SSH shell, you can do the following (replace user@nas with your NAS's SSH login):

tar cf - /your/dir | ssh -C user@nas 'tar xf - -C /directory/where/you/unpack'

It is not recommended to use the -v option for tar here: with this many files, printing every file name noticeably slows things down. The -C flag for ssh enables on-the-fly compression over the network; less data on the wire also means less data to encrypt and decrypt, which increases speed. If it is still too slow, you can change the encryption cipher from 3des to twofish or, if possible, to single DES; the last is not secure, but it is the fastest.
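
As a sketch of how to pick a cipher (the cipher name and host below are just examples; run ssh -Q cipher to see which ciphers your OpenSSH build actually offers):

tar cf - /your/dir | ssh -C -c aes128-ctr user@nas 'tar xf - -C /directory/where/you/unpack'

The -c option tells ssh which cipher to use for the connection; AES-based ciphers are much faster than 3des on modern hardware.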

Otherwise you can use rsync, but rsync also has to be installed on the external device (see the man page). Rsync can transfer data either through an rsync daemon on the server or over SSH.
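
For example, an rsync-over-SSH transfer could look like this (user@nas and the destination path are placeholders for your own NAS login and target directory):

rsync -avh --progress ~/Pictures/"Aperture Library" user@nas:/volume1/photos/

This requires the rsync binary to be present on the NAS as well, as noted above.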

Znik
September 05, 2013 14:25

Try the open-source MHISoft fastcopy tool. It copies files and directories recursively. https://github.com/mhisoft/fastcopy/releases

Tony
October 14, 2018 18:37
