Rclone User Manual
Introduction
Rclone is an open-source command-line program written in Go for managing and migrating files on cloud storage. It supports more than 40 cloud services, including all the major providers such as Google Drive, Dropbox, Mega, pCloud, and S3, to name a few. Rclone is widely used on Linux, Windows, and macOS, and third-party developers can build backup, restore, and orchestration solutions on top of the rclone command line or its API.
Installation
To install rclone on a Linux/macOS/BSD system, run:
shell
sudo -v ; curl https://rclone.org/install.sh | sudo bash
Note that this script first checks the installed rclone version and skips the download if no update is needed.
Other installation methods are described on the Rclone website.
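As one example of an alternative, on CentOS/RHEL systems rclone can usually be installed from the EPEL repository (a sketch; package availability and version depend on your distribution):
shell
# Assumes the EPEL repository is available for your CentOS/RHEL release
sudo yum install -y epel-release
sudo yum install -y rclone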
To check the installed version, run:
shell
rclone version
The examples in this manual were run in the following environment:
shell
rclone v1.66.0
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0-1160.92.1.el7.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none
Adding OSCA Storage
Create a configuration
shell
rclone config
shell
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
No remotes are configured yet; enter n to create a new one.
Enter a name
shell
Enter name for new remote.
name> osca
Select the remote storage type
shell
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
1 / 1Fichier
\ (fichier)
2 / Akamai NetStorage
\ (netstorage)
3 / Alias for an existing remote
\ (alias)
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
\ (s3)
5 / Backblaze B2
...
Storage> s3
Enter s3 to select the S3 object storage type.
Select the object storage provider
shell
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amazon Web Services (AWS) S3
\ (AWS)
31 / Any other S3 compatible provider
\ (Other)
provider> 31
Choose the last option (31), "Any other S3 compatible provider".
Choose whether to use AWS credentials from the environment
shell
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Choose 1, i.e. do not use environment credentials; the keys are entered in the next step.
Enter credentials
shell
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> AccessKey
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SecretKey
Enter the Access Key (AK) and Secret Key (SK) of your own account.
Enter the region
shell
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
region>
Press Enter to use the default.
Enter the endpoint
shell
Option endpoint.
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Enter a value. Press Enter to leave empty.
endpoint> https://fgws3-ocloud.ihep.ac.cn
Enter a reachable gateway endpoint.
Enter the location constraint
shell
Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
location_constraint>
Press Enter to use the default.
Set the object ACL
shell
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
acl>
Press Enter to use the default.
Advanced settings
shell
Edit advanced config?
y) Yes
n) No (default)
y/n> y
The remote is already usable at this point; for a better experience, enter y to walk through the advanced configuration. Individual advanced settings can also be adjusted later if problems come up. Pay particular attention to the parameters marked as required below.
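Note that the S3 advanced options below can also be overridden per invocation with flags of the form --s3-<option>, without touching the saved config; for example (the values and paths here are purely illustrative):
shell
# One-off override of chunk size and upload concurrency for this transfer only
rclone copy /data osca:20073-test01/data -v \
  --s3-chunk-size 32Mi --s3-upload-concurrency 4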
Set the bucket ACL
shell
Option bucket_acl.
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when only when creating buckets. If it
isn't set then "acl" is used instead.
If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl:
header is added and the default (private) will be used.
Choose a number from below, or type in your own string value.
Press Enter for the default (private).
Press Enter to use the default.
Set the multipart upload cutoff
shell
Option upload_cutoff.
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.
Enter a size with suffix K,M,G,T. Press Enter for the default (5Mi).
upload_cutoff> 16Mi
Enter 16Mi.
Set the chunk size
shell
Option chunk_size.
Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with "-P" flag. Rclone treats chunk as sent when
it's buffered by the AWS SDK, when in fact it may still be uploading.
A bigger chunk size means a bigger AWS SDK buffer and progress
reporting more deviating from the truth.
Enter a size with suffix K,M,G,T. Press Enter for the default (5Mi).
chunk_size> 16Mi
Enter 16Mi.
Set the maximum number of upload parts
shell
Option max_upload_parts.
Maximum number of parts in a multipart upload.
This option defines the maximum number of multipart chunks to use
when doing a multipart upload.
This can be useful if a service does not support the AWS S3
specification of 10,000 chunks.
Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this number of chunks limit.
Enter a signed integer. Press Enter for the default (10000).
max_upload_parts>
The system imposes no such limit; press Enter to use the default.
Set the server-side copy cutoff
shell
Option copy_cutoff.
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 5 GiB.
Enter a size with suffix K,M,G,T. Press Enter for the default (4.656Gi).
copy_cutoff>
Server-side copies of files larger than this cutoff are performed in chunks of this size; press Enter to use the default.
Set the MD5 checksum
shell
Option disable_checksum.
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Enter a boolean value (true or false). Press Enter for the default (false).
disable_checksum>
Setting this to true disables the MD5 checksum. Keeping the checksum enabled is recommended, so use the default (false).
Set the shared credentials file
shell
Option shared_credentials_file.
Path to the shared credentials file.
If env_auth = true then rclone can use a shared credentials file.
If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.
Linux/OSX: "$HOME/.aws/credentials"
Windows: "%USERPROFILE%\.aws\credentials"
Enter a value. Press Enter to leave empty.
shared_credentials_file>
Press Enter to use the default.
Set the credentials profile
shell
Option profile.
Profile to use in the shared credentials file.
If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.
Enter a value. Press Enter to leave empty.
profile>
Press Enter to use the default.
Set the session token
shell
Option session_token.
An AWS session token.
Enter a value. Press Enter to leave empty.
session_token>
Press Enter to use the default.
Set the upload concurrency
shell
Option upload_concurrency.
Concurrency for multipart uploads and copies.
This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads and copies.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
Enter a signed integer. Press Enter for the default (10).
upload_concurrency>
The default is 10 concurrent chunks. Large values put pressure on the CPU, so raise this with care; the default works fine.
Force path-style access (required)
shell
Option force_path_style.
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.
Enter a boolean value (true or false). Press Enter for the default (true).
force_path_style> true
Enter true; otherwise directories may fail to be recognised.
Set V2 authentication
shell
Option v2_auth.
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Enter a boolean value (true or false). Press Enter for the default (false).
v2_auth>
The system supports both signature versions; press Enter to use the default (v4).
Set dual-stack access
shell
Option use_dual_stack.
If true use AWS S3 dual-stack endpoint (IPv6 support).
See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html)
Enter a boolean value (true or false). Press Enter for the default (false).
use_dual_stack>
Press Enter to use the default.
Set the listing chunk size
shell
Option list_chunk.
Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if requested more than that.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
Enter a signed integer. Press Enter for the default (1000).
list_chunk>
Larger values can speed up listing but place higher demands on the network; the default is recommended.
Set the ListObjects version
shell
Option list_version.
Version of ListObjects to use: 1,2 or 0 for auto.
When S3 originally launched it only provided the ListObjects call to
enumerate objects in a bucket.
However in May 2016 the ListObjectsV2 call was introduced. This is
much higher performance and should be used if at all possible.
If set to the default, 0, rclone will guess according to the provider
set which list objects method to call. If it guesses wrong, then it
may be set manually here.
Enter a signed integer. Press Enter for the default (0).
list_version>
Press Enter to use the default.
URL-encode listings
shell
Option list_url_encode.
Whether to url encode listings: true/false/unset
Some providers support URL encoding listings and where this is
available this is more reliable when using control characters in file
names. If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Enter a fs.Tristate value. Press Enter for the default (unset).
list_url_encode>
Press Enter to use the default.
Skip the bucket existence check
shell
Option no_check_bucket.
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (false).
no_check_bucket>
The system does not allow clients to create buckets through the API; press Enter to use the default.
Skip HEAD verification on upload
shell
Option no_head.
If set, don't HEAD uploaded objects to check integrity.
This can be useful when trying to minimise the number of transactions
rclone does.
Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.
In particular it will assume:
- the metadata, including modtime, storage class and content type was as uploaded
- the size was as uploaded
It reads the following items from the response for a single part PUT:
- the MD5SUM
- The uploaded date
For multipart uploads these items aren't read.
If an source object of unknown length is uploaded then rclone **will** do a
HEAD request.
Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn't recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.
Enter a boolean value (true or false). Press Enter for the default (false).
no_head>
Press Enter to use the default.
Skip HEAD before GET on download
shell
Option no_head_object.
If set, do not do HEAD before GET when getting objects.
Enter a boolean value (true or false). Press Enter for the default (false).
no_head_object>
Press Enter to use the default.
Set the encoding
shell
Option encoding.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,InvalidUtf8,Dot).
encoding>
Press Enter to use the default.
Disable HTTP/2
shell
Option disable_http2.
Disable usage of http2 for S3 backends.
There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be
disabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
Enter a boolean value (true or false). Press Enter for the default (false).
disable_http2>
Press Enter to use the default.
Custom CDN download URL
shell
Option download_url.
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.
Enter a value. Press Enter to leave empty.
download_url>
Press Enter to use the default.
Enable directory markers (required)
shell
Option directory_markers.
Upload an empty object with a trailing slash when a new directory is created
Empty folders are unsupported for bucket based remotes, this option creates an empty
object ending with "/", to persist the folder.
Enter a boolean value (true or false). Press Enter for the default (true).
directory_markers> true
Set this to true so that directory creation is persisted.
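With directory_markers enabled, an empty directory persists as a zero-byte object whose key ends in "/". A quick way to verify this once the remote is saved (the path below is illustrative):
shell
# Creates a marker object "emptydir/" so the empty directory persists
rclone mkdir osca:20073-test01/emptydir
rclone lsf osca:20073-test01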
Multipart ETag verification
shell
Option use_multipart_etag.
Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
Enter a fs.Tristate value. Press Enter for the default (unset).
use_multipart_etag>
Press Enter to use the default.
Use presigned requests
shell
Option use_presigned_request.
Whether to use a presigned request or PutObject for single part uploads
If this is false rclone will use PutObject from the AWS SDK to upload
an object.
Versions of rclone < 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn't be necessary except in exceptional
circumstances or for testing.
Enter a boolean value (true or false). Press Enter for the default (false).
use_presigned_request>
Press Enter to use the default.
List old versions
shell
Option versions.
Include old versions in directory listings.
Enter a boolean value (true or false). Press Enter for the default (false).
versions>
Press Enter to use the default.
Versions at a point in time
shell
Option version_at.
Show file versions as they were at the specified time.
The parameter should be a date, "2006-01-02", datetime "2006-01-02
15:04:05" or a duration for that long ago, eg "100d" or "1h".
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
See [the time option docs](/docs/#time-option) for valid formats.
Enter a fs.Time value. Press Enter for the default (off).
version_at>
Press Enter to use the default.
Show deleted version markers
shell
Option version_deleted.
Show deleted file markers when using versions.
This shows deleted file markers in the listing when using versions. These will appear
as 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version.
Deleted files will always show with a timestamp.
Enter a boolean value (true or false). Press Enter for the default (false).
version_deleted>
Press Enter to use the default.
Decompress gzip-encoded objects
shell
Option decompress.
If set this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.
Enter a boolean value (true or false). Press Enter for the default (false).
decompress>
Press Enter to use the default.
Gzip detection
shell
Option might_gzip.
Set this if the backend might gzip objects.
Normally providers will not alter objects when they are downloaded. If
an object was not uploaded with `Content-Encoding: gzip` then it won't
be set on download.
However some providers may gzip objects even if they weren't uploaded
with `Content-Encoding: gzip` (eg Cloudflare).
A symptom of this would be receiving errors like
ERROR corrupted on transfer: sizes differ NNN vs MMM
If you set this flag and rclone downloads an object with
Content-Encoding: gzip set and chunked transfer encoding, then rclone
will decompress the object on the fly.
If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Enter a fs.Tristate value. Press Enter for the default (unset).
might_gzip>
Press Enter to use the default.
Send Accept-Encoding: gzip
shell
Option use_accept_encoding_gzip.
Whether to send `Accept-Encoding: gzip` header.
By default, rclone will append `Accept-Encoding: gzip` to the request to download
compressed objects whenever possible.
However some providers such as Google Cloud Storage may alter the HTTP headers, breaking
the signature of the request.
A symptom of this would be receiving errors like
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
In this case, you might want to try disabling this option.
Enter a fs.Tristate value. Press Enter for the default (unset).
use_accept_encoding_gzip>
Press Enter to use the default.
Suppress system metadata
shell
Option no_system_metadata.
Suppress setting and reading of system metadata
Enter a boolean value (true or false). Press Enter for the default (false).
no_system_metadata>
Press Enter to use the default.
Report existing buckets
shell
Option use_already_exists.
Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS started
returning an `AlreadyOwnedByYou` error when attempting to create a
bucket that the user already owned, rather than a
`BucketAlreadyExists` error.
Unfortunately exactly what has been implemented by s3 clones is a
little inconsistent, some return `AlreadyOwnedByYou`, some return
`BucketAlreadyExists` and some return no error at all.
This is important to rclone because it ensures the bucket exists by
creating it on quite a lot of operations (unless
`--s3-no-check-bucket` is used).
If rclone knows the provider can return `AlreadyOwnedByYou` or returns
no error then it can report `BucketAlreadyExists` errors when the user
attempts to create a bucket not owned by them. Otherwise rclone
ignores the `BucketAlreadyExists` error which can lead to confusion.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a fs.Tristate value. Press Enter for the default (unset).
use_already_exists>
Press Enter to use the default.
Use multipart uploads
shell
Option use_multipart_uploads.
Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Enter a fs.Tristate value. Press Enter for the default (unset).
use_multipart_uploads>
Press Enter to use the default.
Description of the remote
shell
Option description.
Description of the remote
Enter a value. Press Enter to leave empty.
description>
A free-form label for this remote; press Enter to leave it empty.
Exit the advanced config
shell
Edit advanced config?
y) Yes
n) No (default)
y/n>
Press Enter to accept the default (No).
Save the configuration
shell
Configuration complete.
Options:
- type: s3
- provider: Other
- access_key_id: AccessKey
- secret_access_key: SecretKey
- endpoint: https://fgws3-ocloud.ihep.ac.cn
- acl: private
- bucket_acl: private
- upload_cutoff: 16Mi
- chunk_size: 16Mi
- upload_concurrency: 10
- directory_markers: true
- use_presigned_request: false
- region: other-v2-signature
Keep this "osca" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Press Enter to accept the default (Yes) and keep the remote.
Current remotes
shell
Current remotes:
Name Type
==== ====
osca s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>
The osca remote now appears in the list, confirming it was added successfully; enter q to quit the configuration tool.
View the Configuration
Command:
shell
rclone config show {Args1}
Parameters:
Parameter | Description |
---|---|
Args1 | Remote name, the configuration created above, e.g. osca |
Command:
shell
rclone config show osca
Result:
shell
[osca]
type = s3
provider = Other
access_key_id = AccessKey
secret_access_key = SecretKey
endpoint = https://fgws3-ocloud.ihep.ac.cn
acl = private
bucket_acl = private
upload_cutoff = 16Mi
chunk_size = 16Mi
upload_concurrency = 10
directory_markers = true
use_presigned_request = false
region = other-v2-signature
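The same remote can also be created non-interactively with rclone config create, passing the options as key=value pairs, which is convenient for scripted deployments (a sketch; substitute your real credentials):
shell
# One-shot equivalent of the interactive walkthrough above
rclone config create osca s3 \
  provider=Other \
  access_key_id=AccessKey secret_access_key=SecretKey \
  endpoint=https://fgws3-ocloud.ihep.ac.cn \
  acl=private bucket_acl=private \
  upload_cutoff=16Mi chunk_size=16Mi \
  force_path_style=true directory_markers=true
# Print the location of the configuration file itself
rclone config file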
List All Buckets
Command:
shell
rclone lsd {Args1}
Parameters:
Parameter | Description |
---|---|
Args1 | Remote path, e.g. osca: |
Command:
shell
rclone lsd osca:
Result:
shell
-1 2024-04-22 16:19:35 -1 20073-rclonetest
-1 2024-04-22 16:29:34 -1 20073-test01
List Objects
Command:
shell
rclone ls {Args1} {Args2}
Parameters:
Parameter | Description |
---|---|
Args1 | Remote path, e.g. osca: |
Args2 | Bucket name, e.g. 20073-test01 |
Command:
shell
rclone ls osca:20073-test01
Result:
shell
287 systemd/tailscaled.defaults
674 systemd/tailscaled.service
21831680 tailscaled
1551 域名规律.drawio
11002 服务管理方案.drawio
50839 电脑配置.png
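Two related listing commands are often handy: rclone lsl adds the modification time to each entry, and rclone lsf prints one name per line, which is convenient for scripting:
shell
rclone lsl osca:20073-test01   # size, modification time, name
rclone lsf osca:20073-test01   # names only, one per line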
File Sync (Incremental)
Single-file sync (incremental)
Command:
shell
rclone copyto {Args1} {Args2} {Args3}
Parameters:
Parameter | Description |
---|---|
Args1 | Source path, a specific file name |
Args2 | Destination path, a specific file name |
Args3 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone copyto /root/dns.log osca:20073-test01/dns.log -v
Result:
shell
2024/04/23 15:09:40 INFO : dns.log: Copied (new)
2024/04/23 15:09:40 INFO :
Transferred: 90.330 KiB / 90.330 KiB, 100%, 45.158 KiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 2.7s
Multi-file sync (incremental)
Command:
shell
rclone copy {Args1} {Args2} {Args3}
Parameters:
Parameter | Description |
---|---|
Args1 | Source path; a directory or a specific file |
Args2 | Destination path; a directory or a specific file |
Args3 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone copy osca:20073-test01/test/ osca:20073-test01/dns/ -v
Result:
shell
2024/04/23 18:06:03 INFO : 域名规律.drawio: Copied (server-side copy)
2024/04/23 18:06:03 INFO :
Transferred: 1.515 KiB / 1.515 KiB, 100%, 0 B/s, ETA -
Transferred: 1 / 1, 100%
Server Side Copies: 1 @ 1.515 KiB
Elapsed time: 0.2s
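Note that copy only adds or updates files; it never deletes anything at the destination. To make the destination an exact mirror of the source, deletions included, rclone sync can be used instead. Since sync can delete data, a --dry-run first is advisable (paths are illustrative):
shell
# Preview what sync would change, then run it for real
rclone sync /root/backup/ osca:20073-test01/backup/ -v --dry-run
rclone sync /root/backup/ osca:20073-test01/backup/ -v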
Move
Single-file move
Moves a file or directory from the source to the destination.
Command:
shell
rclone moveto {Args1} {Args2} {Args3}
Parameters:
Parameter | Description |
---|---|
Args1 | Source path |
Args2 | Destination path, matching the first argument: a file name if the source is a file, a directory name if the source is a directory |
Args3 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone moveto osca:20073-test01/dns.log osca:20073-test01/pkg/dns.log -v
Result:
shell
2024/04/23 17:31:07 INFO : dns.log: Copied (server-side copy)
2024/04/23 17:31:07 INFO : dns.log: Deleted
2024/04/23 17:31:07 INFO :
Transferred: 90.330 KiB / 90.330 KiB, 100%, 0 B/s, ETA -
Checks: 1 / 1, 100%
Deleted: 1 (files), 0 (dirs)
Renamed: 1
Transferred: 1 / 1, 100%
Server Side Copies: 1 @ 90.330 KiB
Elapsed time: 0.4s
Multi-file move
Moves files from the source to the destination.
Command:
shell
rclone move {Args1} {Args2} {Args3}
Parameters:
Parameter | Description |
---|---|
Args1 | Source path; a directory or a specific file |
Args2 | Destination path, usually a directory |
Args3 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone move osca:20073-test01/bin/ osca:20073-test01/pkg/bin/ -v
Result:
shell
2024/04/23 17:43:53 INFO : go-outline: Copied (server-side copy)
2024/04/23 17:43:53 INFO : go-outline: Deleted
2024/04/23 17:43:54 INFO : godef: Copied (server-side copy)
2024/04/23 17:43:54 INFO : godef: Deleted
2024/04/23 17:44:03 INFO : dlv: Copied (server-side copy)
2024/04/23 17:44:03 INFO : gopls: Copied (server-side copy)
2024/04/23 17:44:03 INFO : staticcheck: Copied (server-side copy)
2024/04/23 17:44:03 INFO : dlv: Deleted
2024/04/23 17:44:03 INFO : gopls: Deleted
2024/04/23 17:44:03 INFO : staticcheck: Deleted
2024/04/23 17:44:03 INFO :
Transferred: 68.361 MiB / 68.361 MiB, 100%, 0 B/s, ETA -
Checks: 5 / 5, 100%
Deleted: 5 (files), 0 (dirs)
Renamed: 5
Transferred: 5 / 5, 100%
Server Side Copies: 5 @ 68.361 MiB
Elapsed time: 17.7s
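rclone move removes the source files but leaves empty source directories (and their marker objects) behind; adding --delete-empty-src-dirs cleans those up as well:
shell
# Move and also remove now-empty source directories
rclone move osca:20073-test01/bin/ osca:20073-test01/pkg/bin/ -v --delete-empty-src-dirs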
Delete
File deletion
Command:
shell
rclone delete {Args1} {Args2}
Parameters:
Parameter | Description |
---|---|
Args1 | Path to delete; a directory or a specific file |
Args2 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone delete osca:20073-test01/dns.log -v
Result:
shell
2024/04/23 15:10:11 INFO : dns.log: Deleted
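delete removes every file under the given path (subject to any filters), so previewing with --dry-run before the real run is a good habit:
shell
# Shows what would be deleted without deleting anything
rclone delete osca:20073-test01/dns/ -v --dry-run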
Purge
Deletes a path together with all of its contents.
Command:
shell
rclone purge {Args1} {Args2}
Parameters:
Parameter | Description |
---|---|
Args1 | Path of the directory to remove |
Args2 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone purge osca:20073-test01/server/test/ -v
Result:
shell
2024/04/23 16:53:39 INFO : gorm-master/logger/: Deleted
2024/04/23 16:53:39 INFO : gorm-master/clause/: Deleted
2024/04/23 16:53:39 INFO : gorm-master/callbacks/: Deleted
2024/04/23 16:53:39 INFO : 1/: Deleted
2024/04/23 16:53:39 INFO : gorm-master/.github/workflows/: Deleted
2024/04/23 16:53:40 INFO : gorm-master/migrator/: Deleted
2024/04/23 16:53:40 INFO : gorm-master/tests/: Deleted
2024/04/23 16:53:40 INFO : gorm-master/schema/: Deleted
2024/04/23 16:53:40 INFO : gorm-master/.github/: Deleted
2024/04/23 16:53:45 INFO : /: Deleted
2024/04/23 16:53:45 INFO : gorm-master/: Deleted
Size and Count
Counts the objects under a path and computes their total size, printing the result to standard output.
Command:
shell
rclone size {Args1} {Args2}
Parameters:
Parameter | Description |
---|---|
Args1 | Path; a file or a directory |
Args2 | Options; -v is typically used for verbose output (repeat it, as -vv, for even more detail) |
Command:
shell
rclone size osca:20073-test01/dns/ -v
Result:
shell
Total objects: 2
Total size: 91.845 KiB (94049 Byte)
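For machine-readable output, size also accepts --json, printing the object count and total bytes as a JSON object:
shell
rclone size osca:20073-test01/dns/ --json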
Add a Mount Point (Windows)
Mounts the remote storage as a file system on a mount point.
Command:
shell
rclone mount {Args1} {Args2} {Args3}
Parameters:
Parameter | Description |
---|---|
Args1 | Source path on the cloud storage |
Args2 | Local directory to use as the mount point |
Args3 | Options; --vfs-cache-mode writes is commonly used to enable a write cache |
Command:
shell
rclone mount osca:20073-test01 D:\NewMountPoint --vfs-cache-mode writes
Result:
shell
2024/04/26 11:01:34 INFO : S3 bucket 20073-test01: poll-interval is not supported by this remote
The service rclone has been started.
Verification command:
shell
D:\NewMountPoint>dir
驱动器 D 中的卷是 软件
卷的序列号是 DA18-EBFA
D:\NewMountPoint 的目录
2000/01/01 周六 08:00 <DIR> bin
2000/01/01 周六 08:00 <DIR> dns
2024/01/23 周二 11:19 92,498 dns.log
2000/01/01 周六 08:00 <DIR> pkg
2000/01/01 周六 08:00 <DIR> systemd
2022/03/18 周五 10:41 21,831,680 tailscaled
2000/01/01 周六 08:00 <DIR> test
2024/04/22 周一 16:34 1,551 域名规律.drawio
2024/04/22 周一 16:34 11,002 服务管理方案.drawio
2024/04/22 周一 16:34 50,839 电脑配置.png
5 个文件 21,987,570 字节
5 个目录 1,125,899,906,842,624 可用字节
D:\NewMountPoint>
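On Linux the same mount works through FUSE (the fuse package must be installed); --daemon detaches the process into the background, and fusermount -u unmounts. The mount point below is illustrative:
shell
mkdir -p /mnt/osca
rclone mount osca:20073-test01 /mnt/osca --vfs-cache-mode writes --daemon
ls /mnt/osca
# Unmount when finished
fusermount -u /mnt/osca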