gosync
2 GB file upload fails.
Currently, gosync fails if the file being uploaded is larger than 2 GB.
I think the cause is the call to ioutil.ReadFile at sync.go line 173.
Reference URL: https://code.google.com/p/go/issues/detail?id=2743
Here are the results of a run:
$ gosync sync isos s3://s3sync-test/test1
Syncing isos with s3://s3sync-test/test1
panic: read isos/2Gdd.img: invalid argument
goroutine 1 [running]:
github.com/brettweavnet/gosync/gosync.func·001(0xc2000b2750, 0xd, 0xc2000af4b0, 0xc2000af5f0, 0x0, ...)
/Users/ryuta/.go/src/github.com/brettweavnet/gosync/gosync/sync.go:175 +0x123
path/filepath.walk(0xc2000b2750, 0xd, 0xc2000af4b0, 0xc2000af5f0, 0x7607f8, ...)
/usr/local/go/src/pkg/path/filepath/path.go:341 +0x70
path/filepath.walk(0x7fff5fbff8ba, 0x4, 0xc2000af4b0, 0xc2000af500, 0x7607f8, ...)
/usr/local/go/src/pkg/path/filepath/path.go:359 +0x32a
path/filepath.Walk(0x7fff5fbff8ba, 0x4, 0x7607f8, 0x300000050, 0x1ffbe, ...)
/usr/local/go/src/pkg/path/filepath/path.go:380 +0xb5
github.com/brettweavnet/gosync/gosync.loadLocalFiles(0x7fff5fbff8ba, 0x4, 0x720040)
/Users/ryuta/.go/src/github.com/brettweavnet/gosync/gosync/sync.go:184 +0x8e
github.com/brettweavnet/gosync/gosync.(*SyncPair).syncDirToS3(0xc2000af410, 0x4, 0xc2000af400)
/Users/ryuta/.go/src/github.com/brettweavnet/gosync/gosync/sync.go:59 +0x3d
github.com/brettweavnet/gosync/gosync.(*SyncPair).Sync(0xc2000af410, 0xc2000af410, 0x760ac8)
/Users/ryuta/.go/src/github.com/brettweavnet/gosync/gosync/sync.go:30 +0x119
main.func·001(0xc2000c5520)
/Users/ryuta/Repos/gosync/gosync.go:39 +0x38f
github.com/codegangsta/cli.Command.Run(0x29cf80, 0x4, 0x0, 0x0, 0x2d8390, ...)
/Users/ryuta/.go/src/github.com/codegangsta/cli/command.go:25 +0x2a5
github.com/codegangsta/cli.(*App).Run(0xc2000c7150, 0xc200090000, 0x4, 0x4)
/Users/ryuta/.go/src/github.com/codegangsta/cli/app.go:57 +0x5f7
main.main()
/Users/ryuta/Repos/gosync/gosync.go:49 +0x15f
goroutine 2 [syscall]:
$
Regards
Looks like ReadFile needs to be switched to multiple reads.
That makes more sense - ReadFile is trying to create a 15 GB slice, and can't (at least not if your machine is 32-bit).
The solution is to call Read() manually multiple times and process the file piece by piece. (Note that if you run ReadFile while the file is still being downloaded, it will /not/ wait for the whole file to finish downloading. Ordinary files can't be used as FIFOs.)
Perhaps the size limitations on slices should be documented in the spec? (Or better, removed? They do feel somewhat artificial.)