I've looked at tools like the following, but I came across a much easier solution: rclone.
https://gist.github.com/rgregg/37ba8929768a62131e85
https://github.com/fkalis/bash-onedrive-upload
Read on and learn how to send big files to your OneDrive account from start to finish.
Install rclone on your Linux box
Rclone is a Go program and comes as a single binary file. Simply run the following command to install rclone on Linux/macOS/BSD systems:
curl https://rclone.org/install.sh | sudo bash
This did it for me. If you need further assistance visit https://rclone.org/install/.
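A quick sanity check to confirm the install worked (the exact version printed will vary):

```shell
# Verify rclone is on the PATH and report its version.
rclone version
```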
Configure rclone on a remote / headless machine
Since I use a VPS to host my websites, it has no web browser, so I needed to configure rclone on a remote / headless machine. Follow the tutorial at:
https://rclone.org/remote_setup/
In my case, I used a MacBook as my main desktop machine which has an Internet connected web browser. This tutorial references the following command:
rclone authorize "amazon cloud drive"
when you are supposed to use the following command instead (since you are using Microsoft OneDrive):
rclone authorize "onedrive"
Also, read https://research.reading.ac.uk/act/knowledgebase/rclone-sync/ for additional information which will help you perform this step successfully. Just remember to answer N to the following prompt:
* Say N if you are working on a remote or headless machine
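Once the remote is configured, a quick way to check it works is to list the top-level folders. (I named my remote "remote" to match the commands below; yours may differ.)

```shell
# List top-level folders on the configured OneDrive remote.
# "remote" is the name you chose during `rclone config`.
rclone lsd remote:
```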
Try sending your big file with rclone
Whew! Installing rclone and configuring it on a headless VPS is already a not-so-easy task in itself. Now you are ready to send your file to OneDrive with rclone. The command looks like this:
rclone copy --progress websites-backup-2022-12-25.tgz remote:websites-backup-folder
where websites-backup-2022-12-25.tgz is the big file you want to send and websites-backup-folder is the destination OneDrive folder.
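To confirm the upload landed, you can list the destination folder; rclone prints each file's size in bytes:

```shell
# List files and their sizes in the destination OneDrive folder.
rclone ls remote:websites-backup-folder
```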
In my case, my tgz file is about 4 GB and it had trouble being sent to my OneDrive account. If you experience similar problems, you'll need to split it up into smaller files. Read on.
Alternatively, split it up into several smaller files
You can use the Linux command "split" to split the big compressed file into smaller ones. Here's what the command looks like:
# 1G max per piece; you get files suffixed -aa, -ab, etc.
split --bytes=1G websites-backup-2022-12-25.tgz websites-backup-2022-12-25-pieces-
After running the command you'll get the following files of 1G max size:
websites-backup-2022-12-25-pieces-aa
websites-backup-2022-12-25-pieces-ab
websites-backup-2022-12-25-pieces-ac
...
In my case rclone handled the 1G pieces fine. Now you can send them over to OneDrive with this command:
rclone copy --include 'websites-backup-2022-12-25-pieces-*' . remote:websites-backup-folder --progress
This command means rclone will send all files matching websites-backup-2022-12-25-pieces-* in the current working directory over to your remote OneDrive folder called websites-backup-folder.
If this works, congratulations! But chances are you'll run into out-of-memory errors like this:
2019-12-13 18:22:11 NOTICE: Time may be set wrong - time from "xxx-my.sharepoint.com" is -7m59.156559834s different from this computer
Transferred: 70M / 2.869 GBytes, 2%, 7.475 MBytes/s, ETA 6m23s
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 3, 0%
Elapsed time: 9.3s
Transferring:
* websites-backup-2022-12-25-pieces-ab: 2% /1G, 3.333M/s, 4m58s
* websites-backup-2022-12-25-pieces-ac: 2% /1G, 3.338M/s, 4m57s
* websites-backup-2022-12-25-pieces-ad: 1% /889.962M, 1.107M/s, 13m15sfatal error: runtime: out of memory
runtime stack:
runtime.throw(0x168c51d, 0x16)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/panic.go:774 +0x72
runtime.sysMap(0xc00c000000, 0x4000000, 0x2410438)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x23f76a0, 0x8ac000, 0xc000073e00, 0x4372a7)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:701 +0x1cd
runtime.(*mheap).grow(0x23f76a0, 0x456, 0xffffffff)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1255 +0xa3
runtime.(*mheap).allocSpanLocked(0x23f76a0, 0x456, 0x2410448, 0xc000073f40)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1170 +0x266
runtime.(*mheap).alloc_m(0x23f76a0, 0x456, 0x101, 0x0)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1022 +0xc2
runtime.(*mheap).alloc.func1()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1093 +0x4c
runtime.(*mheap).alloc(0x23f76a0, 0x456, 0xc000000101, 0xc0002f2300)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1092 +0x8a
runtime.largeAlloc(0x8ac000, 0xc000420100, 0xc0001fa788)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:1138 +0x97
runtime.mallocgc.func1()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:1033 +0x46
runtime.systemstack(0x0)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/asm_amd64.s:370 +0x66
runtime.mstart()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/proc.go:1146
.................
This is due to your VPS not having enough memory to send multiple files in parallel. What now? Read on:
Ask rclone to send files sequentially
Add this to the rclone command to send files sequentially / synchronously:
--transfers=1 (where 1 is the number of files copied concurrently)
But I personally haven't tried it.
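Putting it together, the copy command from earlier with the flag added would look like this (again, untested on my end):

```shell
# Copy the pieces one at a time instead of in parallel,
# which should keep rclone's memory usage low on a small VPS.
rclone copy --progress --transfers=1 \
  --include 'websites-backup-2022-12-25-pieces-*' . remote:websites-backup-folder
```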
Send files sequentially inside the shell script
My way is to use a Bash shell script that uploads one file at a time, like this:
for i in websites-backup-2022-12-25-pieces-*; do
    rclone copy --progress "$i" remote:websites-backup-folder
done
This did it for me.
I hope you enjoyed this tutorial on sending big files to a Microsoft OneDrive folder from a remote machine such as a VPS running Linux.
By the way, you can use the Linux "cat" command to combine the small pieces back into the original tgz file; just make sure to do it in order: aa, ab, ac, etc.
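Reassembly is just a cat over the glob: the shell expands it in lexicographic order (aa, ab, ac, ...), which is exactly the order split created the pieces in. Here's a quick round-trip demo with a throwaway file (demo.tgz is a stand-in name, not your real backup):

```shell
# Split a test file into 1 MB pieces, rejoin them with cat,
# and verify the result is byte-identical to the original.
head -c 3000000 /dev/urandom > demo.tgz       # stand-in for the real archive
split --bytes=1M demo.tgz demo-pieces-        # creates demo-pieces-aa, -ab, -ac
cat demo-pieces-* > demo-rejoined.tgz         # glob expands in aa, ab, ... order
cmp demo.tgz demo-rejoined.tgz && echo OK     # prints OK if identical
```

For the real backup the pattern is the same: cat websites-backup-2022-12-25-pieces-* > websites-backup-2022-12-25.tgz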
Questions?