
Using Rclone to Sync & Back Up Data to Qiniu Kodo, Tencent Cloud COS, and Google Drive

俊阳IT知识库
2023-07-19
Last updated: 2023-12-07.

Introduction

Rclone is a command-line tool for syncing, uploading, and downloading data between different object-storage services and cloud drives, and it can also mount them as local filesystems. With a bit of setup it enables very practical workflows such as offline downloading and server backups (in my view it is currently the most powerful scripted backup/sync tool).

PS: it can also mount other cloud drives locally, for example Aliyun Drive.

It supports Linux, Windows, macOS, and other systems.

Combined with cron, it can back up data on a schedule to cloud drives, OSS buckets, and similar storage.
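As a sketch of such a scheduled backup: a single crontab entry can run rclone sync every night. The remote name, bucket, paths, and schedule below are placeholder assumptions, not values from this article's setup:

```shell
# Hypothetical crontab entry (edit with: crontab -e).
# Every day at 03:00, sync /data to a remote named "qiniu"
# (configured as shown later in this article), logging to a file.
0 3 * * * /usr/bin/rclone sync /data qiniu:my-backup-bucket/data --log-file /var/log/rclone-backup.log
```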

Rclone installation guide: https://blog.fanjunyang.zone/archives/tools-rclone-install

Configuring Qiniu Kodo

1. Run rclone config, then choose n to create a new remote

root@junyang:~# rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

2. Name the new remote

Enter name for new remote.
name> qiniu

3. Select the S3 storage type (Qiniu Kodo is S3-compatible)

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
   \ (s3)
 6 / Backblaze B2
   \ (b2)
 7 / Better checksums for other remotes
   \ (hasher)
 8 / Box
   \ (box)
 9 / Cache a remote
   \ (cache)
10 / Citrix Sharefile
   \ (sharefile)
11 / Combine several remotes into one
   \ (combine)
12 / Compress a remote
   \ (compress)
......
Storage> 5 

4. Select Qiniu Kodo as the provider

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Amazon Web Services (AWS) S3
   \ (AWS)
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ (Alibaba)
 3 / Ceph Object Storage
   \ (Ceph)
 4 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
 5 / Cloudflare R2 Storage
   \ (Cloudflare)
 6 / Arvan Cloud Object Storage (AOS)
   \ (ArvanCloud)
 7 / DigitalOcean Spaces
   \ (DigitalOcean)
 8 / Dreamhost DreamObjects
   \ (Dreamhost)
 9 / Huawei Object Storage Service
   \ (HuaweiOBS)
10 / IBM COS S3
   \ (IBMCOS)
11 / IDrive e2
   \ (IDrive)
12 / IONOS Cloud
   \ (IONOS)
13 / Seagate Lyve Cloud
   \ (LyveCloud)
14 / Liara Object Storage
   \ (Liara)
15 / Minio Object Storage
   \ (Minio)
16 / Netease Object Storage (NOS)
   \ (Netease)
17 / RackCorp Object Storage
   \ (RackCorp)
18 / Scaleway Object Storage
   \ (Scaleway)
19 / SeaweedFS S3
   \ (SeaweedFS)
20 / StackPath Object Storage
   \ (StackPath)
21 / Storj (S3 Compatible Gateway)
   \ (Storj)
22 / Tencent Cloud Object Storage (COS)
   \ (TencentCOS)
23 / Wasabi Object Storage
   \ (Wasabi)
24 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
25 / Any other S3 compatible provider
   \ (Other)
provider> 24

5. Provide the Qiniu Kodo AK/SK (your access keys)

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1

Qiniu key management page link

Enter the Qiniu Kodo AK (Access Key)

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> **** AK ****

Enter the Qiniu Kodo SK (Secret Key)

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> **** SK ****

6. Select the Qiniu Kodo S3 endpoint (that is, the storage region; pick the one matching the region where you created your bucket. Mine is South China (Guangdong), so I chose 4, 4, 4)

Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / The default endpoint - a good choice if you are unsure.
 1 | East China Region 1.
   | Needs location constraint cn-east-1.
   \ (cn-east-1)
   / East China Region 2.
 2 | Needs location constraint cn-east-2.
   \ (cn-east-2)
   / North China Region 1.
 3 | Needs location constraint cn-north-1.
   \ (cn-north-1)
   / South China Region 1.
 4 | Needs location constraint cn-south-1.
   \ (cn-south-1)
   / North America Region.
 5 | Needs location constraint us-north-1.
   \ (us-north-1)
   / Southeast Asia Region 1.
 6 | Needs location constraint ap-southeast-1.
   \ (ap-southeast-1)
   / Northeast Asia Region 1.
 7 | Needs location constraint ap-northeast-1.
   \ (ap-northeast-1)
region> 4
Option endpoint.
Endpoint for Qiniu Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China Endpoint 1
   \ (s3-cn-east-1.qiniucs.com)
 2 / East China Endpoint 2
   \ (s3-cn-east-2.qiniucs.com)
 3 / North China Endpoint 1
   \ (s3-cn-north-1.qiniucs.com)
 4 / South China Endpoint 1
   \ (s3-cn-south-1.qiniucs.com)
 5 / North America Endpoint 1
   \ (s3-us-north-1.qiniucs.com)
 6 / Southeast Asia Endpoint 1
   \ (s3-ap-southeast-1.qiniucs.com)
 7 / Northeast Asia Endpoint 1
   \ (s3-ap-northeast-1.qiniucs.com)
endpoint> 4
Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China Region 1
   \ (cn-east-1)
 2 / East China Region 2
   \ (cn-east-2)
 3 / North China Region 1
   \ (cn-north-1)
 4 / South China Region 1
   \ (cn-south-1)
 5 / North America Region 1
   \ (us-north-1)
 6 / Southeast Asia Region 1
   \ (ap-southeast-1)
 7 / Northeast Asia Region 1
   \ (ap-northeast-1)
location_constraint> 4

7. Select the ACL and storage class (I chose public-read and standard storage, so 2 and 1)

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl> 2
Option storage_class.
The storage class to use when storing new objects in Qiniu.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Standard storage class
   \ (STANDARD)
 2 / Infrequent access storage mode
   \ (LINE)
 3 / Archive storage mode
   \ (GLACIER)
 4 / Deep archive storage mode
   \ (DEEP_ARCHIVE)
storage_class> 1

8. Skip the advanced configuration

Edit advanced config?
y) Yes
n) No (default)
y/n> n

9. Confirm the configuration

Configuration complete.
Options:
- type: s3
- provider: qiniu
- access_key_id: **** AK ****
- secret_access_key: **** SK ****
- region: cn-south-1
- endpoint: s3-cn-south-1.qiniucs.com
- location_constraint: cn-south-1
- acl: public-read
- storage_class: STANDARD
Keep this "qiniu" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
qiniu                s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Common Rclone commands for managing Qiniu Kodo

Listing

List only object paths and sizes

rclone ls qiniu:bucket-name/directory-path

Also list object modification times

rclone lsl qiniu:bucket-name/directory-path

List directories only

rclone lsd qiniu:bucket-name/directory-path

List directories and files; directories end with /

rclone lsf qiniu:bucket-name/directory-path

List all object information in JSON form

rclone lsjson qiniu:bucket-name/directory-path

List directories and files as a tree

rclone tree qiniu:bucket-name/directory-path

Browse directories and files in an interactive text UI

rclone ncdu qiniu:bucket-name/directory-path

Reading

Read an object's contents from cloud storage

rclone cat qiniu:dest-bucket-name/dest-path

Get a file's download URL from cloud storage

rclone link qiniu:dest-bucket-name/dest-path

Count the objects and total size under a cloud storage directory

rclone size qiniu:dest-bucket-name/dest-directory-path

Uploading

Read data from standard input and upload it to cloud storage

cat local-path | rclone rcat qiniu:dest-bucket-name/dest-path
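A practical variant of the command above streams an archive straight into the bucket without writing a temporary local file. A sketch, where the bucket and object names are placeholders:

```shell
# Stream a gzipped tarball of /etc directly into the bucket (no temp file).
# "my-backup-bucket" and the object name are placeholder assumptions.
tar czf - /etc | rclone rcat qiniu:my-backup-bucket/etc-backup.tar.gz
```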

Syncing

Sync from local to cloud storage

rclone sync local-path qiniu:dest-bucket-name/dest-directory-path

Sync from one cloud storage location to another

rclone sync qiniu:src-bucket-name/src-directory-path qiniu:dest-bucket-name/dest-directory-path

Compare local files with cloud storage

rclone check local-path qiniu:dest-bucket-name/dest-directory-path

Compare one cloud storage location with another

rclone check qiniu:src-bucket-name/src-directory-path qiniu:dest-bucket-name/dest-directory-path

Moving

Move a directory

rclone move qiniu:src-bucket-name/src-directory-path qiniu:dest-bucket-name/dest-directory-path

Move a file

rclone moveto qiniu:src-bucket-name/src-path qiniu:dest-bucket-name/dest-path

Copying

Copy content from a URL to cloud storage

rclone copyurl https://url qiniu:dest-bucket-name/dest-path

Copy a directory

rclone copy qiniu:src-bucket-name/src-directory-path qiniu:dest-bucket-name/dest-directory-path

Copy a file

rclone copyto qiniu:src-bucket-name/src-path qiniu:dest-bucket-name/dest-path

Deleting

Delete the files under a directory

rclone delete qiniu:bucket-name/dest-directory-path

Delete a file

rclone deletefile qiniu:bucket-name/dest-path

Changing the storage class

Change a directory's storage class

rclone settier STORAGE_CLASS qiniu:bucket-name/dest-directory-path

Change a file's storage class

rclone settier STORAGE_CLASS qiniu:bucket-name/dest-path

Checksums

Compute checksums for a directory

rclone hashsum MD5 qiniu:bucket-name/dest-directory-path

Common commands for serving Qiniu Kodo through Rclone

Serve Kodo as an HTTP server

rclone serve http qiniu:bucket-name/dest-directory-path --addr ip:port
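When exposing a bucket over HTTP it may be worth locking it down; rclone serve http also accepts a read-only flag and HTTP basic-auth options. A sketch, where the bucket name, port, and credentials are all placeholders:

```shell
# Serve the bucket read-only on port 8080, protected by HTTP basic auth.
# Bucket name, port, and credentials are placeholder assumptions.
rclone serve http qiniu:my-backup-bucket --addr :8080 --read-only --user admin --pass secret
```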

Serve Kodo as an FTP server

rclone serve ftp qiniu:bucket-name/dest-directory-path --addr ip:port

Mount Kodo as a filesystem at a mount point

rclone mount qiniu:bucket-name/dest-directory-path mount-point
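In practice the mount is usually run in the background with a local write cache, so that programs that rewrite files in place behave as expected. A sketch with placeholder paths:

```shell
# Mount in the background (--daemon) and cache writes locally before upload.
# Bucket path and mount point are placeholder assumptions.
rclone mount qiniu:my-backup-bucket/backups /mnt/kodo --daemon --vfs-cache-mode writes

# Unmount when finished (on Linux):
fusermount -u /mnt/kodo
```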

Configuring Tencent Cloud COS

1. Run rclone config, then choose n to create a new remote

root@junyang:~# rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

2. Name the new remote

Enter name for new remote.
name> cos

3. Select the S3 storage type (Tencent COS is S3-compatible)

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
   \ (s3)
 6 / Backblaze B2
   \ (b2)
 7 / Better checksums for other remotes
   \ (hasher)
 8 / Box
   \ (box)
 9 / Cache a remote
   \ (cache)
10 / Citrix Sharefile
   \ (sharefile)
11 / Combine several remotes into one
   \ (combine)
12 / Compress a remote
   \ (compress)
......
Storage> 5 

4. Select Tencent COS as the provider

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Amazon Web Services (AWS) S3
   \ (AWS)
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ (Alibaba)
 3 / Ceph Object Storage
   \ (Ceph)
 4 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
 5 / Cloudflare R2 Storage
   \ (Cloudflare)
 6 / Arvan Cloud Object Storage (AOS)
   \ (ArvanCloud)
 7 / DigitalOcean Spaces
   \ (DigitalOcean)
 8 / Dreamhost DreamObjects
   \ (Dreamhost)
 9 / Huawei Object Storage Service
   \ (HuaweiOBS)
10 / IBM COS S3
   \ (IBMCOS)
11 / IDrive e2
   \ (IDrive)
12 / IONOS Cloud
   \ (IONOS)
13 / Seagate Lyve Cloud
   \ (LyveCloud)
14 / Liara Object Storage
   \ (Liara)
15 / Minio Object Storage
   \ (Minio)
16 / Netease Object Storage (NOS)
   \ (Netease)
17 / RackCorp Object Storage
   \ (RackCorp)
18 / Scaleway Object Storage
   \ (Scaleway)
19 / SeaweedFS S3
   \ (SeaweedFS)
20 / StackPath Object Storage
   \ (StackPath)
21 / Storj (S3 Compatible Gateway)
   \ (Storj)
22 / Tencent Cloud Object Storage (COS)
   \ (TencentCOS)
23 / Wasabi Object Storage
   \ (Wasabi)
24 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
25 / Any other S3 compatible provider
   \ (Other)
provider> 22

5. Provide the Tencent COS keys (SecretId/SecretKey)

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1

Tencent Cloud key management page link

Enter the Tencent COS SecretId

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> **** SecretId ****

Enter the Tencent COS SecretKey

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> **** SecretKey ****

6. Select the Tencent COS endpoint (that is, the storage region; pick the one matching the region where you created your bucket. Mine was created in Guangzhou)

Option endpoint.
Endpoint for Tencent COS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Beijing Region
   \ (cos.ap-beijing.myqcloud.com)
 2 / Nanjing Region
   \ (cos.ap-nanjing.myqcloud.com)
 3 / Shanghai Region
   \ (cos.ap-shanghai.myqcloud.com)
 4 / Guangzhou Region
   \ (cos.ap-guangzhou.myqcloud.com)
 5 / Nanjing Region
   \ (cos.ap-nanjing.myqcloud.com)
 6 / Chengdu Region
   \ (cos.ap-chengdu.myqcloud.com)
 7 / Chongqing Region
   \ (cos.ap-chongqing.myqcloud.com)
 8 / Hong Kong (China) Region
   \ (cos.ap-hongkong.myqcloud.com)
 9 / Singapore Region
   \ (cos.ap-singapore.myqcloud.com)
10 / Mumbai Region
   \ (cos.ap-mumbai.myqcloud.com)
11 / Seoul Region
   \ (cos.ap-seoul.myqcloud.com)
12 / Bangkok Region
   \ (cos.ap-bangkok.myqcloud.com)
13 / Tokyo Region
   \ (cos.ap-tokyo.myqcloud.com)
14 / Silicon Valley Region
   \ (cos.na-siliconvalley.myqcloud.com)
15 / Virginia Region
   \ (cos.na-ashburn.myqcloud.com)
16 / Toronto Region
   \ (cos.na-toronto.myqcloud.com)
17 / Frankfurt Region
   \ (cos.eu-frankfurt.myqcloud.com)
18 / Moscow Region
   \ (cos.eu-moscow.myqcloud.com)
19 / Use Tencent COS Accelerate Endpoint
   \ (cos.accelerate.myqcloud.com)
endpoint> 4

7. Select the ACL and storage class

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets Full_CONTROL.
 1 | No one else has access rights (default).
   \ (default)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl> 2
Option storage_class.
The storage class to use when storing new objects in Tencent COS.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Default
   \ ()
 2 / Standard storage class
   \ (STANDARD)
 3 / Archive storage mode
   \ (ARCHIVE)
 4 / Infrequent access storage mode
   \ (STANDARD_IA)
storage_class> 1

8. Skip the advanced configuration

Edit advanced config?
y) Yes
n) No (default)
y/n> n

9. Confirm the configuration

Configuration complete.
Options:
- type: s3
- provider: TencentCOS
- access_key_id: **** SecretId ****
- secret_access_key: **** SecretKey ****
- endpoint: cos.ap-guangzhou.myqcloud.com
- acl: public-read
Keep this "cos" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
cos                  s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Common Rclone commands for managing Tencent COS

Listing

List only object paths and sizes

rclone ls cos:bucket-name/directory-path

Also list object modification times

rclone lsl cos:bucket-name/directory-path

List directories only

rclone lsd cos:bucket-name/directory-path

List directories and files; directories end with /

rclone lsf cos:bucket-name/directory-path

List all object information in JSON form

rclone lsjson cos:bucket-name/directory-path

List directories and files as a tree

rclone tree cos:bucket-name/directory-path

Browse directories and files in an interactive text UI

rclone ncdu cos:bucket-name/directory-path

Reading

Read an object's contents from cloud storage

rclone cat cos:dest-bucket-name/dest-path

Get a file's download URL from cloud storage

rclone link cos:dest-bucket-name/dest-path

Count the objects and total size under a cloud storage directory

rclone size cos:dest-bucket-name/dest-directory-path

Uploading

Read data from standard input and upload it to cloud storage

cat local-path | rclone rcat cos:dest-bucket-name/dest-path

Syncing

Sync from local to cloud storage

rclone sync local-path cos:dest-bucket-name/dest-directory-path

Sync from one cloud storage location to another

rclone sync cos:src-bucket-name/src-directory-path cos:dest-bucket-name/dest-directory-path

Compare local files with cloud storage

rclone check local-path cos:dest-bucket-name/dest-directory-path

Compare one cloud storage location with another

rclone check cos:src-bucket-name/src-directory-path cos:dest-bucket-name/dest-directory-path

Moving

Move a directory

rclone move cos:src-bucket-name/src-directory-path cos:dest-bucket-name/dest-directory-path

Move a file

rclone moveto cos:src-bucket-name/src-path cos:dest-bucket-name/dest-path

Copying

Copy content from a URL to cloud storage

rclone copyurl https://url cos:dest-bucket-name/dest-path

Copy a directory

rclone copy cos:src-bucket-name/src-directory-path cos:dest-bucket-name/dest-directory-path

Copy a file

rclone copyto cos:src-bucket-name/src-path cos:dest-bucket-name/dest-path

Deleting

Delete the files under a directory

rclone delete cos:bucket-name/dest-directory-path

Delete a file

rclone deletefile cos:bucket-name/dest-path

Changing the storage class

Change a directory's storage class

rclone settier STORAGE_CLASS cos:bucket-name/dest-directory-path

Change a file's storage class

rclone settier STORAGE_CLASS cos:bucket-name/dest-path

Checksums

Compute checksums for a directory

rclone hashsum MD5 cos:bucket-name/dest-directory-path

Common commands for serving Tencent COS through Rclone

Serve COS as an HTTP server

rclone serve http cos:bucket-name/dest-directory-path --addr ip:port

Serve COS as an FTP server

rclone serve ftp cos:bucket-name/dest-directory-path --addr ip:port

Mount COS as a filesystem at a mount point

rclone mount cos:bucket-name/dest-directory-path mount-point

Configuring Google Drive

Because servers in mainland China cannot reach Google, a configured Google Drive remote will not work there unless the server has a working proxy.

1. Run rclone config, then choose n to create a new remote

root@junyang:~# rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

2. Name the new remote

Enter name for new remote.
name> gd

3. Select Google Drive

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
   \ (s3)
 6 / Backblaze B2
   \ (b2)
 7 / Better checksums for other remotes
   \ (hasher)
 8 / Box
   \ (box)
 9 / Cache a remote
   \ (cache)
10 / Citrix Sharefile
   \ (sharefile)
11 / Combine several remotes into one
   \ (combine)
12 / Compress a remote
   \ (compress)
13 / Dropbox
   \ (dropbox)
14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
18 / Google Drive
   \ (drive)
19 / Google Photos
   \ (google photos)
20 / HTTP
   \ (http)
21 / Hadoop distributed file system
   \ (hdfs)
22 / HiDrive
   \ (hidrive)
......
Storage> 18

4. Enter the client_id and client_secret

It is strongly recommended to apply for and use your own Google Drive API key, which keeps the service stable; Rclone's built-in credentials are shared by so many users that they frequently stop working. See the following article for how to apply:

Applying for a Google Drive API key (client ID and secret key)

Option client_id.
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a value. Press Enter to leave empty.
client_id> ***** Google Drive client_id *****
Option client_secret.
OAuth Client Secret.
Leave blank normally.
Enter a value. Press Enter to leave empty.
client_secret> ***** Google Drive client_secret *****

5. Choose the access scope for Google Drive

Option scope.
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Full access all files, excluding Application Data Folder.
   \ (drive)
 2 / Read-only access to file metadata and file contents.
   \ (drive.readonly)
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ (drive.file)
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ (drive.appfolder)
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ (drive.metadata.readonly)
scope> 1

Press Enter to accept the default

Option service_account_file.
Service Account Credentials JSON file path.
Leave blank normally.
Needed only if you want use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a value. Press Enter to leave empty.
service_account_file> 

6. Skip the advanced configuration

Edit advanced config?
y) Yes
n) No (default)
y/n> n

7. Authenticate with a web browser
Because Rclone here runs on a Linux server with no local browser, download the Rclone client for Windows or Mac in advance and authenticate on that machine. The steps are as follows:

Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.

y) Yes (default)
n) No
y/n> n

Rclone then prints a string of characters:

Option config_token.
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
        rclone authorize "drive" "eyJjbGllbnRfaWQiOiIyMTAk4OGR2MmVpMXB1bmE0azc3YjdmaWQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJjbGllbnRfc2VjcmV0IjoiR09DU1BYLWNCQjRXelA2XzR4enVLbkEwb0lBamxLcWVmQWoiLCJzY29wZSI6ImRyaXZlIn0"
Then paste the result.
Enter a value.
config_token> 

Open a terminal on the Windows or Mac machine with Rclone, paste in the command it printed, and press Enter to run it:

PS C:\Users\junyang\Downloads\rclone> rclone authorize "drive" "eyJjbGllbnRfaWQiOiIyMTAk4OGR2MmVpMXB1bmE0azc3YjdmaWQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJjbGllbnRfc2VjcmV0IjoiR09DU1BYLWNCQjRXelA2XzR4enVLbkEwb0lBamxLcWVmQWoiLCJzY29wZSI6ImRyaXZlIn0"

The system then opens a browser and asks you to sign in to your Google account; sign in directly.
If the browser warns that the app has not been verified by Google, click Advanced and then "Go to rclone (unsafe)".

Then authorize rclone to access your Google account and click Continue; a Success page appears.

Back in the terminal, you will see that a token has been generated for you:

2023/07/20 19:56:13 NOTICE: Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
2023/07/20 19:56:13 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=7ioqOj3MJFPI6uN86LYzcg
2023/07/20 19:56:13 NOTICE: Log in and authorize rclone for access
2023/07/20 19:56:13 NOTICE: Waiting for code...
2023/07/20 20:00:00 NOTICE: Got code
Paste the following into your remote machine --->
eyJ0b2tlbiI6IntcImFjY2Vzc190b2tlblwiOlwieWEyOS5hMEFiVmJZNk5wTkhkVGZzNVN3ZjhWeFVkbklxUzRZZzBaNHNHb2dpTm53V1hMOSkNUeTZmVFZGVXdKem1RcmcwMTYzXCIsXCJ0b2tlbl90eXBlXCI6XCJCZWFyZXJcIixcInJlZnJlc2hfdG9rZW5cIjpcIjEvLzBlRENBT0ZfUUN1Y0pDZ1lJQVJBQUdBNFNOd0YtTDlJck9TNzRTRnZabU53TGVPVDVKaEwzQmFHd3NIclNlcGxPQ1c0Y3lwVVl4SkNQT2E5YWttbEFpZTRQV1ctODd3dlBDeU1cIixcImV4cGlyeVwiOlwiMjAyMy0wNy0yMFQyMDo1OTo1OS41MTAwMDQ1KzA4OjAwXCJ9In0
<---End paste

Copy this token into the config_token prompt of rclone on the Linux machine and press Enter:

Option config_token.
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
        rclone authorize "drive" "eyJjbGllbnRfaWQiOiIyMTA3ODUzc3YjdmaWQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJjbGllbnRfc2VjcmV0IjoiR09DU1BYLWNCQjRXelA2XzR4enVLbkEwb0lBamxLcWVmQWoiLCJzY29wZSI6ImRyaXZlIn0"
Then paste the result.
Enter a value.
config_token> eyJ0b2tlbiI6IntcImFjY2Vzc190b2tlblwiOlwieWEyOS5hMEFiVmJZNk5wTkhkVGZzNVN3ZjhWeFVkbklxUzRZZzBaNHNHb2dpTm53V1hMOSkNUeTZmVFZGVXdKem1RcmcwMTYzXCIsXCJ0b2tlbl90eXBlXCI6XCJCZWFyZXJcIixcInJlZnJlc2hfdG9rZW5cIjpcIjEvLzBlRENBT0ZfUUN1Y0pDZ1lJQVJBQUdBNFNOd0YtTDlJck9TNzRTRnZabU53TGVPVDVKaEwzQmFHd3NIclNlcGxPQ1c0Y3lwVVl4SkNQT2E5YWttbEFpZTRQV1ctODd3dlBDeU1cIixcImV4cGlyeVwiOlwiMjAyMy0wNy0yMFQyMDo1OTo1OS41MTAwMDQ1KzA4OjAwXCJ9In0

8. Choose whether this is a Shared Drive (Team Drive)

Configure this as a Shared Drive (Team Drive)?

y) Yes
n) No (default)
y/n> n

9. Confirm the configuration

Configuration complete.
Options:
- type: drive
- client_id: ***** Google Drive client_id *****
- client_secret: ***** Google Drive client_secret *****
- scope: drive
- token: {"access_token":"ya29.a0AbVbY6NpNsPBaftgP5DXhXeKTFfx_aZ5LjCuOo3WS-1GzBk9VYFjitk2Wig-KPLwJWBLQeBl7g38rhiw0LeYSMX2Xh0esXDTHZYHWOnGTekxQPfqs8vMYaCgYKAc4SARASFQFWKvPlz9HVNJCTy6fTVFUwJzmQrg0163","token_type":"Bearer","refresh_token":"1//0eDCAOF_QCucJCgYIARAAGA4SNwF-L9IrOS74SFvZmNwLeOT5JhL3BaGwsHrSeplOCW4cypUYxJCPOa9akmlAie4PWW-87wvPCyM","expiry":"2023-07-20T20:59:59.5100045+08:00"}
- team_drive: 
Keep this "gd" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
gd                   drive

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Common Rclone commands for managing Google Drive

Listing

List only object paths and sizes

rclone ls gd:/directory-path

Also list object modification times

rclone lsl gd:/directory-path

List directories only

rclone lsd gd:/directory-path

List directories and files; directories end with /

rclone lsf gd:/directory-path

List all object information in JSON form

rclone lsjson gd:/directory-path

List directories and files as a tree

rclone tree gd:/directory-path

Browse directories and files in an interactive text UI

rclone ncdu gd:/directory-path

Reading

Read an object's contents from cloud storage

rclone cat gd:/dest-path

Get a file's download URL from cloud storage

rclone link gd:/dest-path

Count the objects and total size under a cloud storage directory

rclone size gd:/dest-directory-path

Uploading

Read data from standard input and upload it to cloud storage

cat local-path | rclone rcat gd:/dest-path

Syncing

Sync from local to cloud storage

rclone sync local-path gd:/dest-directory-path

Sync from one cloud storage location to another

rclone sync gd:/src-directory-path gd:/dest-directory-path

Compare local files with cloud storage

rclone check local-path gd:/dest-directory-path

Compare one cloud storage location with another

rclone check gd:/src-directory-path gd:/dest-directory-path

Moving

Move a directory

rclone move gd:/src-directory-path gd:/dest-directory-path

Move a file

rclone moveto gd:/src-path gd:/dest-path

Copying

Copy content from a URL to cloud storage

rclone copyurl https://url gd:/dest-path

Copy a directory

rclone copy gd:/src-directory-path gd:/dest-directory-path

Copy a file

rclone copyto gd:/src-path gd:/dest-path

Deleting

Delete the files under a directory

rclone delete gd:/dest-directory-path

Delete a file

rclone deletefile gd:/dest-path

Changing the storage class (note: settier applies to backends with storage tiers such as Kodo or COS; Google Drive has no storage classes)

Change a directory's storage class

rclone settier STORAGE_CLASS gd:/dest-directory-path

Change a file's storage class

rclone settier STORAGE_CLASS gd:/dest-path

Checksums

Compute checksums for a directory

rclone hashsum MD5 gd:/dest-directory-path

Common commands for serving Google Drive through Rclone

Serve Google Drive as an HTTP server

rclone serve http gd:/dest-directory-path --addr ip:port

Serve Google Drive as an FTP server

rclone serve ftp gd:/dest-directory-path --addr ip:port

Mount Google Drive as a filesystem at a mount point

rclone mount gd:/dest-directory-path mount-point
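Putting the pieces together, a minimal nightly-backup script might look like the following. The remote name gd matches this article; the source path, destination folder, and 30-day retention are assumptions for illustration:

```shell
#!/bin/sh
# Minimal scheduled-backup sketch: archive a directory, stream it to the
# "gd" remote from this article, then prune copies older than 30 days.
# SRC, the destination folder, and the retention window are examples.
SRC=/var/www
DEST=gd:/backups
STAMP=$(date +%Y%m%d)

tar czf - "$SRC" | rclone rcat "$DEST/www-$STAMP.tar.gz"
rclone delete "$DEST" --min-age 30d
```

Run from cron (as described in the introduction), this gives date-stamped archives with simple retention.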
