
Use Aria2+AriaNG Docker to Download and Rclone to Mount Google Drive and Sync


Last updated on May 8, 2020

This post records the process I set up to auto-mount Google Drive and sync aria2's downloaded files to Google Drive.

Run Aria2+AriaNG Docker

Regarding docker commands and usage, please see this post: https://blog.51sec.org/2020/04/docker-usage.html
docker run -d -i --restart=always --name ariang -p 8000:80  -p 6800:6800 -v ~/data/:/data -v /home/gdrive/:/gdrive wahyd4/aria2-ariang
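
As an optional sanity check (the server IP below is a placeholder for your own), confirm the container is up and the ports are published, then open the AriaNG web UI:

docker ps --filter "name=ariang"
# AriaNG web UI: http://<your-server-ip>:8000   aria2 RPC: port 6800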


Enter the docker container to make configuration changes for the aria2 service.

docker exec -it ariang /bin/bash

Inside the docker container, create a shell script, rcloneupload.sh, with the following code:

bash-4.3# vi rcloneupload.sh

#!/bin/bash

GID="$1";
FileNum="$2";
File="$3";
MinSize="5"  # Minimum upload size in KB (default 5 KB)
MaxSize="157286400"  # Maximum file size in KB (the value here is ~150 GB)
RemoteDIR="/gdrive/";  # Local directory where rclone mounts the drive; keep the trailing /
LocalDIR="/data/";  # Aria2 download directory; keep the trailing /

if [[ -z $(echo "$FileNum" |grep -o '[0-9]*' |head -n1) ]]; then FileNum='0'; fi
if [[ "$FileNum" -le '0' ]]; then exit 0; fi
if [[ "$#" != '3' ]]; then exit 0; fi

# LoadFile: strip the download directory prefix from the finished file's path, walk up to the
# top-level file or folder, check its size against MinSize/MaxSize, then move it onto the rclone mount.
function LoadFile(){
  IFS_BAK=$IFS
  IFS=$'\n'
  if [[ ! -d "$LocalDIR" ]]; then return; fi
  if [[ -e "$File" ]]; then
    FileLoad="${File/#$LocalDIR}"
    while true
      do
        if [[ "$FileLoad" == '/' ]]; then return; fi
        echo "$FileLoad" |grep -q '/';
        if [[ "$?" == "0" ]]; then
          FileLoad=$(dirname "$FileLoad");
        else
          break;
        fi;
      done;
    if [[ "$FileLoad" == "$LocalDIR" ]]; then return; fi
    EXEC="$(command -v mv)"
    if [[ -z "$EXEC" ]]; then return; fi
    Option=" -f";
    cd "$LocalDIR";
    if [[ -e "$FileLoad" ]]; then
      ItemSize=$(du -s "$FileLoad" |cut -f1 |grep -o '[0-9]*' |head -n1)
      if [[ -z "$ItemSize" ]]; then return; fi
      if [[ "$ItemSize" -le "$MinSize" ]]; then
        echo -ne "\033[33m$FileLoad \033[0mtoo small, skipping.\n";
        return;
      fi
      if [[ "$ItemSize" -ge "$MaxSize" ]]; then
        echo -ne "\033[33m$FileLoad \033[0mtoo large, skipping.\n";
        return;
      fi
      eval "${EXEC}${Option}" \'"${FileLoad}"\' "${RemoteDIR}";
    fi
  fi
  IFS=$IFS_BAK
}
LoadFile;

Make the file executable: chmod +x rcloneupload.sh
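
Aria2 calls the on-download-complete hook with three arguments: the download GID, the number of files, and the file path. If you want to test the script by hand before wiring it up (the GID and file name below are just placeholders), you can invoke it the same way aria2 would:

bash-4.3# ./rcloneupload.sh 2089b05ecca3d829 1 "/data/example-download.mkv"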

Edit the Aria2 configuration file and add one line at the end: on-download-complete=/root/rcloneupload.sh (the value after the equals sign is the path to the script). Finally, restart Aria2 for the change to take effect.

bash-4.3# cd /root/conf/
bash-4.3# ls
aria2.conf      aria2.session      aria2c.sh      key
bash-4.3# vi aria2.conf
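
If you prefer not to edit the file in vi, the same line can be appended from the shell (assuming the config path /root/conf/aria2.conf shown above):

bash-4.3# echo "on-download-complete=/root/rcloneupload.sh" >> /root/conf/aria2.conf
bash-4.3# tail -n 1 /root/conf/aria2.conf
on-download-complete=/root/rcloneupload.sh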

Exit from the docker container back to the host server, then restart the ariang container:

docker restart ariang

Install Rclone

First, install the EPEL repository:

  1. yum -y install epel-release

Install some required components:

  1. yum -y install wget unzip screen fuse fuse-devel

Install rclone

  1. [root@centos7-test1 data]# curl https://rclone.org/install.sh | sudo bash
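
To confirm the install succeeded, check the version (the exact version printed will depend on the release you downloaded):

  1. rclone version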

Configure rclone:

  1. rclone config

The first step is to choose n for a new remote, then pick a name, which is google-drive in my case.

  1. No remotes found – make a new one
  2. n) New remote
  3. s) Set configuration password
  4. q) Quit config
  5. n/s/q> n
  6. name> google-drive

Choose 13 (Google Drive) for the storage type.

  1. Type of storage to configure.
  2. Enter a string value. Press Enter for the default (“”).
  3. Choose a number from below, or type in your own value
  4. 1 / 1Fichier \ “fichier” 2 / Alias for an existing remote \ “alias” 3 / Amazon Drive \ “amazon cloud drive” 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ “s3” 5 / Backblaze B2 \ “b2” 6 / Box \ “box” 7 / Cache a remote \ “cache” 8 / Citrix Sharefile \ “sharefile” 9 / Dropbox \ “dropbox” 10 / Encrypt/Decrypt a remote \ “crypt” 11 / FTP Connection \ “ftp” 12 / Google Cloud Storage (this is not Google Drive) \ “google cloud storage” 13 / Google Drive \ “drive” 14 / Google Photos \ “google photos”
  5. Storage> 13

Just press Enter for the Google Application Client ID and client_secret.

  1. Google Application Client Id
  2. Leave blank normally.
  3. Enter a string value. Press Enter for the default (“”).
  4. client_id>
  5. Google Application Client Secret
  6. Leave blank normally.
  7. Enter a string value. Press Enter for the default (“”).
  8. client_secret>

Choose 1 for full access to your drive.

  1. Scope that rclone should use when requesting access from drive.
  2. Enter a string value. Press Enter for the default (“”).
  3. Choose a number from below, or type in your own value
  4. 1 / Full access all files, excluding Application Data Folder.
  5. \ “drive”
  6. 2 / Readonly access to file metadata and file contents.
  7. \ “drive.readonly”
  8. / Access to files created by rclone only.
  9. 3 | These are visible in the drive website.
  10. | File authorization is revoked when the user deauthorizes the app.
  11. \ “drive.file”
  12. / Allows read and write access to the Application Data folder.
  13. 4 | This is not visible in the drive website.
  14. \ “drive.appfolder”
  15. / Allows readonly access to file metadata but
  16. 5 | does not allow any access to read or download file content.
  17. \ “drive.metadata.readonly”
  18. scope> 1

For the root folder ID and the service account JSON file path, press Enter to use the default values.

  1. ID of the root folder
  2. Leave blank normally.
  3. Fill in to access “Computers” folders. (see docs).
  4. Enter a string value. Press Enter for the default (“”).
  5. root_folder_id>
  6. Service Account Credentials JSON file path
  7. Leave blank normally.
  8. Needed only if you want use SA instead of interactive login.
  9. Enter a string value. Press Enter for the default (“”).
  10. service_account_file>

Answer n to skip the advanced config.

  1. Edit advanced config? (y/n)
  2. y) Yes
  3. n) No
  4. y/n> n

Since we are working on a remote (headless) machine, answer n for auto config.

  1. Remote config
  2. Use auto config?
  3. * Say Y if not sure
  4. * Say N if you are working on a remote or headless machine
  5. y) Yes
  6. n) No
  7. y/n> n

Copy the link into a local browser, log in to your Google account, and authorize rclone to get the verification code:

  1. If your browser doesn‘t open automatically go to the following link: https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=20226815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=55663e7e07382e3ddb9025c86de4f
  2. Log in and authorize rclone for access
  3. Enter verification code> 4/UQGiRz375eb-OixO5EUtZMxBhJwAQ4zOyvA1wtJWK2Ocmzh3zNYE

Answer n for team drive.

  1. Configure this as a team drive?
  2. y) Yes
  3. n) No
  4. y/n> n

Answer y to confirm your config.

  1. ——————–
  2. [google-drive]
  3. type = drive
  4. scope = drive
  5. token = {“access_token”:“ya29.GlsQByNiBURlXoPpe-bDpa2kF99Jo4rrmjicBXdWIT6loPUhS7SJ9XWUIk2LP4vO231nra_zpUwHn6no0Y_LBbXYFZvyVf0gRthepF2VuPFdhBFEKY7XYJaelt”,“token_type”:“Bearer”,“refresh_token”:“1/ry1JGhRiqqE6-PqRN-S2icZ_Oz9uOTXfSNxWA85zUnjU5gEm-6TejL6o-hjyuY”,“expiry”:“2019-05-21T04:36:23.300542043-04:00”}
  6. ——————–
  7. y) Yes this is OK
  8. e) Edit this remote
  9. d) Delete this remote
  10. y/e/d> y

Answer q to exit.

  1. Current remotes:
  2.  
  3. Name Type
  4. ==== ====
  5. google-drive drive
  6.  
  7. e) Edit existing remote
  8. n) New remote
  9. d) Delete remote
  10. r) Rename remote
  11. c) Copy remote
  12. s) Set configuration password
  13. q) Quit config
  14. e/n/d/r/c/s/q> q

That completes the basic rclone configuration.
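
Before mounting, a quick way to confirm the remote works is to list it; google-drive is the remote name chosen above, and an empty drive simply returns nothing:

  1. rclone listremotes
  2. rclone lsd google-drive: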

 

Now we need to mount Google Drive on the VPS so that it auto-mounts even after a reboot.

Create a new folder at /home/gdrive:

  1. mkdir -p /home/gdrive

Mount the drive:

  1. rclone mount google-drive: /home/gdrive --allow-other --allow-non-empty --vfs-cache-mode writes

google-drive is the Rclone configuration name.

You can also specify a subfolder on the drive:

  1. rclone mount google-drive:backup /home/gdrive --allow-other --allow-non-empty --vfs-cache-mode writes

In google-drive:backup, google-drive is the Rclone configuration name and backup is the directory name inside the drive.
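
Note that rclone mount runs in the foreground and blocks the SSH session. Until the systemd service described later is in place, one convenient option (a sketch, using the screen package installed earlier) is to run the mount inside screen and detach:

  1. screen -S rclone-mount
  2. rclone mount google-drive: /home/gdrive --allow-other --allow-non-empty --vfs-cache-mode writes
  3. # detach with Ctrl+a then d; reattach later with: screen -r rclone-mount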

To unmount Google Drive, the easiest way in an SSH session is to press Ctrl+C on the running mount; otherwise, use fusermount:

  1. fusermount -qzu /home/gdrive

 

You can also list the drive contents directly with rclone ls:

[root@centos7-test1 data]# rclone ls google-drive:/

    33196 3916278.html

  1036266 69bbca83ly1gdr8plweo5g209e09yx6b.gif

       42 test.test

 

The mount usually takes a couple of seconds. You can open a second SSH session to check:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        462M     0  462M   0% /dev
tmpfs           494M     0  494M   0% /dev/shm
tmpfs           494M   14M  481M   3% /run
tmpfs           494M     0  494M   0% /sys/fs/cgroup
/dev/sda3        39G  3.2G   35G   9% /
/dev/sda1       512M   12M  501M   3% /boot/efi
tmpfs            99M     0   99M   0% /run/user/1000
google-drive:    15G  1.2G   14G   8% /home/gdrive

To unmount, simply press “CTRL+c” to stop the mount.

To make rclone mount Google Drive automatically even after the VPS reboots, create /usr/lib/systemd/system/rclone.service with the following content:

  1. [Unit]
  2. Description=rclone
  3.  
  4. [Service]
  5. User=root
  6. ExecStart=/usr/bin/rclone mount google-drive: /home/gdrive --allow-other --allow-non-empty --vfs-cache-mode writes
  7. Restart=on-abort
  8.  
  9. [Install]
  10. WantedBy=multi-user.target

Then start and enable the service:

systemctl start rclone
systemctl enable rclone
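
To verify the service is running and the drive is mounted (output sizes will differ on your system):

systemctl status rclone
df -h /home/gdrive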

Upload files to Google Drive

Why not point the download directory straight at the mounted directory? Doing so leads to all kinds of odd errors: files that cannot be written or read, incomplete files saved to Google Drive, and so on. Instead, use the sync command. The following syncs the local directory /home/backup to the backup directory on the drive:

  1. rclone sync /home/backup google-drive:backup

Conversely, swap the two arguments to sync the drive's backup directory down to the VPS directory /home/backup:

  1. rclone sync google-drive:backup /home/backup
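
Because rclone sync makes the destination match the source (deleting files at the destination that are not in the source), it can be worth previewing the changes with the --dry-run flag first, for example:

  1. rclone sync --dry-run google-drive:backup /home/backup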

Adding the --ignore-existing flag skips files that are already backed up on the drive, which effectively gives you an incremental backup:

  1. rclone copy --ignore-existing /home/backup google-drive:backup

If you have two remotes configured, you can sync the backup directory of the remote named gdrive2 to the backup directory of the remote named gdrive, or vice versa:

  1. rclone sync gdrive2:backup gdrive:backup
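
If you also want a scheduled upload independent of the aria2 hook, a cron entry is one option. This is just a sketch: it assumes the google-drive remote configured above and the host-side download directory /root/data (the ~/data mapped into the container earlier; adjust both to your setup):

  1. crontab -e
  2. # run every day at 03:00 and copy new downloads up to the drive's backup folder
  3. 0 3 * * * /usr/bin/rclone copy --ignore-existing /root/data google-drive:backup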

Command outputs

  1. [[email protected] data]# curl https://rclone.org/install.sh | sudo bash % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4437 100 4437 0 0 5792 0 –:–:– –:–:– –:–:– 5800 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 15 100 15 0 0 19 0 –:–:– –:–:– –:–:– 19 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 11.3M 100 11.3M 0 0 4636k 0 0:00:02 0:00:02 –:–:– 4635k Archive: rclone-current-linux-amd64.zip creating: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/ inflating: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/rclone.1 [text] inflating: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/README.txt [text] inflating: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/README.html [text] extracting: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/git-log.txt [empty] inflating: tmp_unzip_dir_for_rclone/rclone-v1.51.0-linux-amd64/rclone [binary] Purging old database entries in /usr/share/man… mandb: warning: /usr/share/man/man8/fsck.fat.8.manpage-fix.gz: ignoring bogus filename Processing manual pages under /usr/share/man… Updating index cache for path `/usr/share/man/man1′. Wait…mandb: warning: /usr/share/man/man1/containerd-config.1: whatis parse for containerd-config(1) failed mandb: warning: /usr/share/man/man1/containerd.1: whatis parse for containerd(1) failed mandb: warning: /usr/share/man/man1/ctr.1: whatis parse for ctr(1) failed Updating index cache for path `/usr/share/man/man4′. Wait…mandb: can’t open /usr/share/man/man/man4/crontabs.4: No such file or directory mandb: warning: /usr/share/man/man4/run-parts.4.gz: bad symlink or ROFF `.so’ request Updating index cache for path `/usr/share/man/man5′. Wait…mandb: warning: /usr/share/man/man5/containerd-config.toml.5: whatis parse for containerd-config.toml(5) failed Updating index cache for path `/usr/share/man/man8′. Wait…mandb: warning: /usr/share/man/man8/fsck.fat.8.manpage-fix.gz: ignoring bogus filename done. Checking for stray cats under /usr/share/man… Checking for stray cats under /var/cache/man… Purging old database entries in /usr/share/man/hu… Processing manual pages under /usr/share/man/hu… Purging old database entries in /usr/share/man/fr… Processing manual pages under /usr/share/man/fr… Updating index cache for path `/usr/share/man/fr/man8′. Wait…done. Checking for stray cats under /usr/share/man/fr… Checking for stray cats under /var/cache/man/fr… Purging old database entries in /usr/share/man/ja… Processing manual pages under /usr/share/man/ja… Updating index cache for path `/usr/share/man/ja/man1′. Wait…mandb: warning: /usr/share/man/ja/man1/evim.1.gz: whatis parse for evim(1) failed mandb: warning: /usr/share/man/ja/man1/vim.1.gz: whatis parse for ex(1) failed mandb: warning: /usr/share/man/ja/man1/vim.1.gz: whatis parse for rview(1) failed mandb: warning: /usr/share/man/ja/man1/vim.1.gz: whatis parse for rvim(1) failed mandb: warning: /usr/share/man/ja/man1/vim.1.gz: whatis parse for view(1) failed mandb: warning: /usr/share/man/ja/man1/vim.1.gz: whatis parse for vim(1) failed mandb: warning: /usr/share/man/ja/man1/vimdiff.1.gz: whatis parse for vimdiff(1) failed mandb: warning: /usr/share/man/ja/man1/vimtutor.1.gz: whatis parse for vimtutor(1) failed mandb: warning: /usr/share/man/ja/man1/xxd.1.gz: whatis parse for xxd(1) failed done. 
Checking for stray cats under /usr/share/man/ja… Checking for stray cats under /var/cache/man/ja… Purging old database entries in /usr/share/man/ko… Processing manual pages under /usr/share/man/ko… Updating index cache for path `/usr/share/man/ko/man8′. Wait…done. Checking for stray cats under /usr/share/man/ko… Checking for stray cats under /var/cache/man/ko… Purging old database entries in /usr/share/man/pl… Processing manual pages under /usr/share/man/pl… Updating index cache for path `/usr/share/man/pl/man8′. Wait…done. Checking for stray cats under /usr/share/man/pl… Checking for stray cats under /var/cache/man/pl… Purging old database entries in /usr/share/man/ru… Processing manual pages under /usr/share/man/ru… Updating index cache for path `/usr/share/man/ru/man1′. Wait…done. Checking for stray cats under /usr/share/man/ru… Checking for stray cats under /var/cache/man/ru… Purging old database entries in /usr/share/man/sk… Processing manual pages under /usr/share/man/sk… Updating index cache for path `/usr/share/man/sk/man8′. Wait…done. Checking for stray cats under /usr/share/man/sk… Checking for stray cats under /var/cache/man/sk… Purging old database entries in /usr/share/man/cs… Processing manual pages under /usr/share/man/cs… Updating index cache for path `/usr/share/man/cs/man7′. Wait…done. Checking for stray cats under /usr/share/man/cs… Checking for stray cats under /var/cache/man/cs… Purging old database entries in /usr/share/man/da… Processing manual pages under /usr/share/man/da… Purging old database entries in /usr/share/man/de… Processing manual pages under /usr/share/man/de… Purging old database entries in /usr/share/man/id… Processing manual pages under /usr/share/man/id… Purging old database entries in /usr/share/man/it… Processing manual pages under /usr/share/man/it… Purging old database entries in /usr/share/man/pt_BR… Processing manual pages under /usr/share/man/pt_BR… Purging old database entries in /usr/share/man/sv… Processing manual pages under /usr/share/man/sv… Purging old database entries in /usr/share/man/tr… Processing manual pages under /usr/share/man/tr… Purging old database entries in /usr/share/man/zh_CN… Processing manual pages under /usr/share/man/zh_CN… Purging old database entries in /usr/share/man/zh_TW… Processing manual pages under /usr/share/man/zh_TW… Purging old database entries in /usr/share/man/pt… Processing manual pages under /usr/share/man/pt… Purging old database entries in /usr/share/man/ca… Processing manual pages under /usr/share/man/ca… Updating index cache for path `/usr/share/man/ca/man8′. Wait…done. Checking for stray cats under /usr/share/man/ca… Checking for stray cats under /var/cache/man/ca… Purging old database entries in /usr/share/man/uk… Processing manual pages under /usr/share/man/uk… Updating index cache for path `/usr/share/man/uk/man8′. Wait…done. Checking for stray cats under /usr/share/man/uk… Checking for stray cats under /var/cache/man/uk… Purging old database entries in /usr/share/man/es… Processing manual pages under /usr/share/man/es… Purging old database entries in /usr/share/man/nl… Processing manual pages under /usr/share/man/nl… Purging old database entries in /usr/share/man/overrides… Processing manual pages under /usr/share/man/overrides… Updating index cache for path `/usr/share/man/overrides/man8′. Wait…done. 
Checking for stray cats under /usr/share/man/overrides… Checking for stray cats under /var/cache/man/overrides… Purging old database entries in /usr/share/man/en… Processing manual pages under /usr/share/man/en… Purging old database entries in /usr/local/share/man… Processing manual pages under /usr/local/share/man… Updating index cache for path `/usr/local/share/man/man1′. Wait…done. Checking for stray cats under /usr/local/share/man… Checking for stray cats under /var/cache/man/local… 29 man subdirectories contained newer manual pages. 2648 manual pages were added. 0 stray cats were added. 1 old database entry was purged. rclone v1.51.0 has successfully installed. Now run “rclone config” for setup. Check https://rclone.org/docs/ for more details. [[email protected] data]# ls 3916278.html 69bbca83ly1gdr8plweo5g209e09yx6b.gif test.test [[email protected] data]# rclone config 2020/05/05 22:02:56 NOTICE: Config file “/root/.config/rclone/rclone.conf” not found – using defaults No remotes found – make a new one n) New remote s) Set configuration password q) Quit config n/s/q> n name> google-drive Type of storage to configure. Enter a string value. Press Enter for the default (“”). Choose a number from below, or type in your own value 1 / 1Fichier \ “fichier” 2 / Alias for an existing remote \ “alias” 3 / Amazon Drive \ “amazon cloud drive” 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ “s3” 5 / Backblaze B2 \ “b2” 6 / Box \ “box” 7 / Cache a remote \ “cache” 8 / Citrix Sharefile \ “sharefile” 9 / Dropbox \ “dropbox” 10 / Encrypt/Decrypt a remote \ “crypt” 11 / FTP Connection \ “ftp” 12 / Google Cloud Storage (this is not Google Drive) \ “google cloud storage” 13 / Google Drive \ “drive” 14 / Google Photos \ “google photos” 15 / Hubic \ “hubic” 16 / In memory object storage system. \ “memory” 17 / JottaCloud \ “jottacloud” 18 / Koofr \ “koofr” 19 / Local Disk \ “local” 20 / Mail.ru Cloud \ “mailru” 21 / Mega \ “mega” 22 / Microsoft Azure Blob Storage \ “azureblob” 23 / Microsoft OneDrive \ “onedrive” 24 / OpenDrive \ “opendrive” 25 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ “swift” 26 / Pcloud \ “pcloud” 27 / Put.io \ “putio” 28 / QingCloud Object Storage \ “qingstor” 29 / SSH/SFTP Connection \ “sftp” 30 / Sugarsync \ “sugarsync” 31 / Transparently chunk/split large files \ “chunker” 32 / Union merges the contents of several remotes \ “union” 33 / Webdav \ “webdav” 34 / Yandex Disk \ “yandex” 35 / http Connection \ “http” 36 / premiumize.me \ “premiumizeme” Storage> 13 ** See help for drive backend at: https://rclone.org/drive/ ** Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance. Enter a string value. Press Enter for the default (“”). client_id> Cyberark1 Google Application Client Secret Setting your own is recommended. Enter a string value. Press Enter for the default (“”). client_secret> Cyberark1 Scope that rclone should use when requesting access from drive. Enter a string value. Press Enter for the default (“”). Choose a number from below, or type in your own value 1 / Full access all files, excluding Application Data Folder. \ “drive” 2 / Read-only access to file metadata and file contents. \ “drive.readonly” / Access to files created by rclone only. 3 | These are visible in the drive website. 
| File authorization is revoked when the user deauthorizes the app. \ “drive.file” / Allows read and write access to the Application Data folder. 4 | This is not visible in the drive website. \ “drive.appfolder” / Allows read-only access to file metadata but 5 | does not allow any access to read or download file content. \ “drive.metadata.readonly” scope> 1 ID of the root folder Leave blank normally. Fill in to access “Computers” folders (see docs), or for rclone to use a non root folder as its starting point. Note that if this is blank, the first time rclone runs it will fill it in with the ID of the root folder. Enter a string value. Press Enter for the default (“”). root_folder_id> Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login. Enter a string value. Press Enter for the default (“”). service_account_file> Edit advanced config? (y/n) y) Yes n) No (default) y/n> y Only consider files owned by the authenticated user. Enter a boolean value (true or false). Press Enter for the default (“false”). auth_owner_only> Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use `–drive-use-trash=false` to delete files permanently instead. Enter a boolean value (true or false). Press Enter for the default (“true”). use_trash> Skip google documents in all listings. If given, gdocs practically become invisible to rclone. Enter a boolean value (true or false). Press Enter for the default (“false”). skip_gdocs> Skip MD5 checksum on Google photos and videos only. Use this if you get checksum errors when transferring Google photos or videos. Setting this flag will cause Google photos and videos to return a blank MD5 checksum. Google photos are identifed by being in the “photos” space. Corrupted checksums are caused by Google modifying the image/video but not updating the checksum. Enter a boolean value (true or false). Press Enter for the default (“false”). skip_checksum_gphotos> Only show files that are shared with me. Instructs rclone to operate on your “Shared with me” folder (where Google Drive lets you access the files and folders others have shared with you). This works both with the “list” (lsd, lsl, etc) and the “copy” commands (copy, sync, etc), and with all other commands too. Enter a boolean value (true or false). Press Enter for the default (“false”). shared_with_me> Only show files that are in the trash. This will show trashed files in their original directory structure. Enter a boolean value (true or false). Press Enter for the default (“false”). trashed_only> Comma separated list of preferred formats for downloading Google docs. Enter a string value. Press Enter for the default (“docx,xlsx,pptx,svg”). export_formats> Comma separated list of preferred formats for uploading Google docs. Enter a string value. Press Enter for the default (“”). import_formats> Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. Enter a boolean value (true or false). Press Enter for the default (“false”). allow_import_name_change> Size of listing chunk 100-1000. 0 to disable. Enter a signed integer. Press Enter for the default (“1000”). list_chunk> Impersonate this user when using a service account. Enter a string value. Press Enter for the default (“”). 
impersonate> Use alternate export URLs for google documents export., If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can’t export large documents, whereas these unofficial ones can. See rclone issue [#2243](https://github.com/rclone/rclone/issues/2243) for background, [this google drive issue](https://issuetracker.google.com/issues/36761333) and [this helpful post](https://www.labnol.org/internet/direct-links-for-google-drive/28356/). Enter a boolean value (true or false). Press Enter for the default (“false”). alternate_export> Cutoff for switching to chunked upload Enter a size with suffix k,M,G,T. Press Enter for the default (“8M”). upload_cutoff> Upload chunk size. Must a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer. Reducing this will reduce memory usage but decrease performance. Enter a size with suffix k,M,G,T. Press Enter for the default (“8M”). chunk_size> Set to allow files which return cannotDownloadAbusiveFile to be downloaded. If downloading a file returns the error “This file has been identified as malware or spam and cannot be downloaded” with the error code “cannotDownloadAbusiveFile” then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway. Enter a boolean value (true or false). Press Enter for the default (“false”). acknowledge_abuse> Keep new head revision of each file forever. Enter a boolean value (true or false). Press Enter for the default (“false”). keep_revision_forever> If Object’s are greater, use drive v2 API to download. Enter a size with suffix k,M,G,T. Press Enter for the default (“off”). v2_download_min_size> Minimum time to sleep between API calls. Enter a duration s,m,h,d,w,M,y. Press Enter for the default (“100ms”). pacer_min_sleep> Number of API calls to allow without sleeping. Enter a signed integer. Press Enter for the default (“100”). pacer_burst> Allow server side operations (eg copy) to work across different drive configs. This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn’t enabled by default because it isn’t easy to tell if it will work between any two configurations. Enter a boolean value (true or false). Press Enter for the default (“false”). server_side_across_configs> Disable drive using http2 There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed. See: https://github.com/rclone/rclone/issues/3631 Enter a boolean value (true or false). Press Enter for the default (“true”). disable_http2> Make upload limit errors be fatal At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync. Note that this detection is relying on error message strings which Google don’t document so it may break in the future. See: https://github.com/rclone/rclone/issues/3857 Enter a boolean value (true or false). Press Enter for the default (“false”). stop_on_upload_limit> This sets the encoding for the backend. 
See: the [encoding section in the overview](/overview/#encoding) for more info. Enter a encoder.MultiEncoder value. Press Enter for the default (“InvalidUtf8”). encoding> Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes (default) n) No y/n> y If your browser doesn’t open automatically go to the following link: http://127.0.0.1:53682/auth?state=pVm9an8OFijQ5dseVlnmHA Log in and authorize rclone for access Waiting for code… ^C [[email protected] data]# ls 3916278.html 69bbca83ly1gdr8plweo5g209e09yx6b.gif test.test [[email protected] data]# cd / [[email protected] /]# ls bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var [[email protected] /]# rclone config 2020/05/05 22:11:00 NOTICE: Config file “/root/.config/rclone/rclone.conf” not found – using defaults No remotes found – make a new one n) New remote s) Set configuration password q) Quit config n/s/q> 1 n/s/q> n name> google-drive Type of storage to configure. Enter a string value. Press Enter for the default (“”). Choose a number from below, or type in your own value 1 / 1Fichier \ “fichier” 2 / Alias for an existing remote \ “alias” 3 / Amazon Drive \ “amazon cloud drive” 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ “s3” 5 / Backblaze B2 \ “b2” 6 / Box \ “box” 7 / Cache a remote \ “cache” 8 / Citrix Sharefile \ “sharefile” 9 / Dropbox \ “dropbox” 10 / Encrypt/Decrypt a remote \ “crypt” 11 / FTP Connection \ “ftp” 12 / Google Cloud Storage (this is not Google Drive) \ “google cloud storage” 13 / Google Drive \ “drive” 14 / Google Photos \ “google photos” 15 / Hubic \ “hubic” 16 / In memory object storage system. \ “memory” 17 / JottaCloud \ “jottacloud” 18 / Koofr \ “koofr” 19 / Local Disk \ “local” 20 / Mail.ru Cloud \ “mailru” 21 / Mega \ “mega” 22 / Microsoft Azure Blob Storage \ “azureblob” 23 / Microsoft OneDrive \ “onedrive” 24 / OpenDrive \ “opendrive” 25 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ “swift” 26 / Pcloud \ “pcloud” 27 / Put.io \ “putio” 28 / QingCloud Object Storage \ “qingstor” 29 / SSH/SFTP Connection \ “sftp” 30 / Sugarsync \ “sugarsync” 31 / Transparently chunk/split large files \ “chunker” 32 / Union merges the contents of several remotes \ “union” 33 / Webdav \ “webdav” 34 / Yandex Disk \ “yandex” 35 / http Connection \ “http” 36 / premiumize.me \ “premiumizeme” Storage> 13 ** See help for drive backend at: https://rclone.org/drive/ ** Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance. Enter a string value. Press Enter for the default (“”). client_id> Google Application Client Secret Setting your own is recommended. Enter a string value. Press Enter for the default (“”). client_secret> Scope that rclone should use when requesting access from drive. Enter a string value. Press Enter for the default (“”). Choose a number from below, or type in your own value 1 / Full access all files, excluding Application Data Folder. \ “drive” 2 / Read-only access to file metadata and file contents. \ “drive.readonly” / Access to files created by rclone only. 3 | These are visible in the drive website. | File authorization is revoked when the user deauthorizes the app. \ “drive.file” / Allows read and write access to the Application Data folder. 
4 | This is not visible in the drive website. \ “drive.appfolder” / Allows read-only access to file metadata but 5 | does not allow any access to read or download file content. \ “drive.metadata.readonly” scope> 1 ID of the root folder Leave blank normally. Fill in to access “Computers” folders (see docs), or for rclone to use a non root folder as its starting point. Note that if this is blank, the first time rclone runs it will fill it in with the ID of the root folder. Enter a string value. Press Enter for the default (“”). root_folder_id> Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login. Enter a string value. Press Enter for the default (“”). service_account_file> Edit advanced config? (y/n) y) Yes n) No (default) y/n> n Remote config Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine y) Yes (default) n) No y/n> n Please go to the following link: https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=wxylrgDHf7QnNOy5zetieg Log in and authorize rclone for access Enter verification code> 4/zQEAebNBYt68Rj1k2ckWuffBaL35WqVd-rJhf4AiCSU5MTqnn-O_4ao Configure this as a team drive? y) Yes n) No (default) y/n> n ——————– [google-drive] type = drive scope = drive token = {“access_token”:”ya29.a0Ae4lvC1R7463egrz5gGDGFAwjA2elNfSs0T325Her5U0FBHgw_B2pZSipzP9CXBjidavMMFQNXzxgSTDgAyy6_cOHSm1MzRS5jxBIL3wlFoXzj3eCy72xcAWzJvlchzM95wOxoO6YRzu8j175S1DthpJwr6Zt_tR7Dg”,”token_type”:”Bearer”,”refresh_token”:”1//0fqzXgAuJUWU2CgYIARAAGA8SNwF-L9IrsitYDMEfAeJgOutDZV7OyqaWLjeoaj9mDw-e-4beZlwucBY8Br32tQM5cXFl7BenyLo”,”expiry”:”2020-05-05T23:14:12.286565445Z”} ——————– y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== google-drive drive e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q [[email protected] /]# mkdir -p /home/gdrive [[email protected] /]# rclone mount google-drive: /home/gdrive –allow-other –allow-non-empty –vfs-cache-mode writes 2020/05/05 22:15:48 Fatal error: failed to mount FUSE fs: fusermount: exec: “fusermount”: executable file not found in $PATH [[email protected] /]# yum install fuse Loaded plugins: fastestmirror, langpacks Loading mirror speeds from cached hostfile * base: mirror.netflash.net * epel: iad.mirror.rackspace.com * extras: centos.mirror.rafal.ca * updates: centos.mirror.rafal.ca Resolving Dependencies –> Running transaction check —> Package fuse.x86_64 0:2.9.2-11.el7 will be installed –> Finished Dependency Resolution Dependencies Resolved ================================================================================================================================================================================================================================== Package Arch Version Repository Size ================================================================================================================================================================================================================================== Installing: fuse x86_64 2.9.2-11.el7 base 86 k Transaction Summary 
================================================================================================================================================================================================================================== Install 1 Package Total download size: 86 k Installed size: 218 k Is this ok [y/d/N]: y Downloading packages: fuse-2.9.2-11.el7.x86_64.rpm | 86 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : fuse-2.9.2-11.el7.x86_64 1/1 Verifying : fuse-2.9.2-11.el7.x86_64 1/1 Installed: fuse.x86_64 0:2.9.2-11.el7 Complete! [[email protected] /]# rclone mount google-drive: /home/gdrive –allow-other –allow-non-empty –vfs-cache-mode writes
