kopia(1) 20250704 kopia(1)

kopia

Kopia - Fast And Secure Open-Source Backup

Show context-sensitive help (also try --help-long and --help-man).
Show application version.
Override log file.
Directory where log files should be written.
Console log level
File log level
Show help for all commands, including hidden
Specify the config file to use
Repository password.
Persist credentials

help [<command>...]

Show help.

blob delete <blobIDs>...

Delete blobs by ID

blob gc [<flags>]

Garbage-collect unused blobs

Whether to delete unused blobs
Number of parallel blob scans
Only GC blobs with given prefix
Safety level

blob list [<flags>]

List BLOBs

Blob ID prefix
Blob ID prefixes to exclude
Minimum size
Maximum size
Only list data blobs
Output result in JSON format to stdout

blob show [<flags>] <blobID>...

Show contents of BLOBs

Decrypt blob if possible

blob stats [<flags>]

Blob statistics

Raw numbers
Blob name prefix

benchmark compression --data-file=DATA-FILE [<flags>]

Run compression benchmarks

Number of repetitions
Use data from the given file
Sort results by size
Sort results by allocated bytes
Number of parallel goroutines
Operations
Verify that compression is stable
Print out options usable for repository creation
Include deprecated compression algorithms
Comma-separated list of algorithms to benchmark

benchmark crypto [<flags>]

Run combined hash and encryption benchmarks

Size of a block to encrypt
Number of repetitions
Include deprecated algorithms
Number of parallel goroutines
Print out options usable for repository creation

benchmark splitter [<flags>]

Run splitter benchmarks

Random seed
Size of a data to split
Number of data blocks to split
Print out the fastest dynamic splitter option
Number of parallel goroutines

benchmark hashing [<flags>]

Run hashing function benchmarks

Size of a block to hash
Number of repetitions
Number of parallel goroutines
Print out options usable for repository creation

benchmark encryption [<flags>]

Run encryption benchmarks

Size of a block to encrypt
Number of repetitions
Include deprecated algorithms
Number of parallel goroutines
Print out options usable for repository creation

benchmark ecc [<flags>]

Run ECC benchmarks

Size of a block to encrypt
Number of repetitions
Number of parallel goroutines
Print out options usable for repository creation

cache clear [<flags>]

Clears the cache

Specifies the cache to clear

cache info [<flags>]

Displays cache information and statistics

Only display cache path

cache prefetch [<flags>] <object>...

Prefetches the provided objects into cache

Prefetch hint

cache set [<flags>]

Sets parameters for local caching of repository data

Desired size of local content cache (soft limit)
Maximum size of local content cache (hard limit)
Minimal age of content cache item to be subject to sweeping
Desired size of local metadata cache (soft limit)
Maximum size of local metadata cache (hard limit)
Minimal age of metadata cache item to be subject to sweeping
Minimal age of index cache item to be subject to sweeping
Duration of index cache
Directory where to store cache files

cache sync [<flags>]

Synchronizes the metadata cache with blobs in storage

Fetch parallelism

content delete <id>...

Remove content

content list [<flags>]

List contents

Long output
Compression
Include deleted content
Only show deleted content
Summarize the list
Human-readable output
Content ID prefix
Apply to content IDs with (any) prefix
Apply to content IDs without prefix
Output result in JSON format to stdout

content rewrite [<flags>] [<contentID>...]

Rewrite content using most recent format

Number of parallel workers
Rewrite contents from short packs
Rewrite contents using the provided format version
Only rewrite contents from pack blobs with a given prefix
Do not actually rewrite, only print what would happen
Content ID prefix
Apply to content IDs with (any) prefix
Apply to content IDs without prefix
Safety level

content show [<flags>] <id>...

Show contents by ID.

Pretty-print JSON content
Transparently decompress the content

content stats [<flags>]

Content statistics

Raw numbers
Content ID prefix
Apply to content IDs with (any) prefix
Apply to content IDs without prefix

content verify [<flags>]

Verify that each content is backed by a valid blob

Parallelism
Full verification (including download)
Include deleted contents
Download a percentage of files [0.0 .. 100.0]
Progress output interval
Content ID prefix
Apply to content IDs with (any) prefix
Apply to content IDs without prefix

diff [<flags>] <object-path1> <object-path2>

Displays differences between two repository objects (files or directories)

Compare files by launching diff command for all pairs of (old,new)
Displays only aggregate statistics of the changes between two repository objects

index epoch list

List the status of epochs.

index inspect [<flags>] [<blobs>...]

Inspect index blob

Inspect all index blobs in the repository, including inactive
Inspect all active index blobs
Parallelism

index list [<flags>]

List content indexes

Display index blob summary
Include inactive index files superseded by compaction
Index blob sort order
Output result in JSON format to stdout

index optimize [<flags>]

Optimize index blobs.

Maximum number of small index blobs that can be left after compaction.
Drop deleted contents above given age
Drop contents with given IDs
Optimize all indexes, even those above maximum size.

index recover [<flags>]

Recover indexes from pack blobs

Prefixes of pack blobs to recover from (default=all packs)
Names of pack blobs to recover from (default=all packs)
Recover parallelism
Ignore errors when recovering
Delete all indexes before recovering
Commit recovered content

list [<flags>] <object-path>

List a directory stored in repository object.

Long output
Recursive output
Show object IDs
Emit error summary

logs cleanup [<flags>]

Clean up logs

Maximal age
Maximal number of files to keep
Maximal total size in MiB
Do not delete

logs list [<flags>]

List logs.

Show all logs
Include last N logs, by default the last one is shown
Include logs younger than X (e.g. '1h')
Include logs older than X (e.g. '1h')

logs show [<flags>] [<session-id>...]

Show contents of the log. When no flags or arguments are specified, only the last log is shown.

Show all logs
Include last N logs, by default the last one is shown
Include logs younger than X (e.g. '1h')
Include logs older than X (e.g. '1h')

notification profile configure email --profile-name=PROFILE-NAME [<flags>]

E-mail notification.

Profile name
Test the notification
Minimum severity
SMTP server
SMTP port
SMTP identity
SMTP username
SMTP password
From address
To address
CC address
Format of the message

notification profile configure pushover --profile-name=PROFILE-NAME [<flags>]

Pushover notification.

Profile name
Test the notification
Minimum severity
Pushover App Token
Pushover User Key
Format of the message

notification profile configure webhook --profile-name=PROFILE-NAME [<flags>]

Webhook notification.

Profile name
Test the notification
Minimum severity
Webhook endpoint URL
HTTP Method
HTTP Header (key:value)
Format of the message

notification profile delete --profile-name=PROFILE-NAME

Delete notification profile

Profile name

notification profile test --profile-name=PROFILE-NAME

Send test notification

Profile name

notification profile list [<flags>]

List notification profiles

Output result in JSON format to stdout
Raw output

notification profile show --profile-name=PROFILE-NAME [<flags>]

Show notification profile

Output result in JSON format to stdout
Profile name
Raw output

notification template list [<flags>]

List templates

Output result in JSON format to stdout

notification template set [<flags>] <template>

Set the notification template

Read new template from stdin
Read new template from file
Edit template using default editor

notification template show [<flags>] <template>

Show template

Template format
Show original template
Convert the output to HTML

notification template remove <template>

Remove the notification template

server start [<flags>]

Start Kopia server

Serve the provided HTML at the root URL
Start the server with HTML UI
Start the GRPC server
Start the control API
Frequency for refreshing repository status
Maximum number of server goroutines
Server control username
Server control password
TLS certificate PEM
TLS key PEM file
Persist logs in a file
Path to JSON file storing UI preferences
Grace period for shutting down the server
Enable notifications to be printed to stdout for KopiaUI
Server address
HTTP server username (basic auth)
HTTP server password (basic auth)
Cache directory
Desired size of local content cache (soft limit)
Maximum size of local content cache (hard limit)
Minimal age of content cache item to be subject to sweeping
Desired size of local metadata cache (soft limit)
Maximum size of local metadata cache (hard limit)
Minimal age of metadata cache item to be subject to sweeping
Minimal age of index cache item to be subject to sweeping
Duration of index cache
Periodically check for Kopia updates on GitHub
Make repository read-only to avoid accidental changes
Human-readable description of the repository
Allow snapshot actions
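
An illustrative launch combining a few of the flags above (address and credential values are hypothetical, not taken from this page):

    kopia server start --ui --address=127.0.0.1:51515 --server-username=admin --server-password=secret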

server acl add --user=USER --target=TARGET --access=ACCESS [<flags>]

Add ACL entry

User the ACL targets
Manifests targeted by the rule (type:T,key1:value1,...,keyN:valueN)
Access the user gets to subject
Overwrite existing rule with the same user and target

server acl delete [<flags>] [<id>...]

Delete ACL entry

Remove all ACL entries
Really delete

server acl enable [<flags>]

Enable ACLs and install default entries

Reset all ACLs to default

server acl list [<flags>]

List ACL entries

Output result in JSON format to stdout

server users add [<flags>] <username>

Add new repository user

Ask for user password
Password
Password hash

server users set [<flags>] <username>

Set password for a repository user.

Ask for user password
Password
Password hash

server users delete <username>

Delete user

server users hash-password [<flags>]

Hash a user password that can be passed to the 'server users add/set' commands

Password

server users info <username>

Info about particular user

server users list [<flags>]

List users

Output result in JSON format to stdout

server status [<flags>]

Status of Kopia server

Show remote sources
Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server refresh [<flags>]

Refresh the cache in Kopia server to observe new sources, etc.

Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server flush [<flags>]

Flush the state of Kopia server to persistent storage, etc.

Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server shutdown [<flags>]

Gracefully shutdown the server

Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server snapshot [<flags>] [<source>]

Trigger upload for one or more existing sources

All paths managed by server
Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server cancel [<flags>] [<source>]

Cancels in-progress uploads for one or more sources

All paths managed by server
Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server pause [<flags>] [<source>]

Pause the scheduled snapshots for one or more sources

All paths managed by server
Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server resume [<flags>] [<source>]

Resume the scheduled snapshots for one or more sources

All paths managed by server
Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint

server throttle get [<flags>]

Get throttling parameters for a running server

Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint
Output result in JSON format to stdout

server throttle set [<flags>]

Set throttling parameters for a running server

Address of the server to connect to
Server control username
Server control password
Server certificate fingerprint
Set the download bytes per second
Set the upload bytes per second
Set max reads per second
Set max writes per second
Set max lists per second
Set max concurrent reads
Set max concurrent writes

session list

List sessions

restore [<flags>] <sources>...

Restore a directory or a file.

Restore can operate in two modes:

* from a snapshot: restoring (possibly shallowly) a specified file or directory from a snapshot into a target path. By default, the target path will be created by the restore command if it does not exist.

* by expanding a shallow placeholder in situ where the placeholder was created by a previous restore.

In the from-snapshot mode:

The source to be restored is specified in the form of a directory or file ID and optionally a sub-directory path.

For example, the following source and target arguments will restore the contents of the 'kffbb7c28ea6c34d6cbe555d1cf80faa9' directory into a new, local directory named 'd1'
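
A command of roughly this shape (reconstructed sketch, not verbatim from this page):

    kopia restore kffbb7c28ea6c34d6cbe555d1cf80faa9 d1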

Similarly, the following command will restore the contents of a subdirectory named 'sd2'
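
A sketch of such a command (the sub-path below the object ID is illustrative):

    kopia restore kffbb7c28ea6c34d6cbe555d1cf80faa9/sd2 d1/sd2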

When restoring to a target path that already has existing data, by default the restore will attempt to overwrite, unless one or more of the following flags has been set (to prevent overwrite of each type):

--no-overwrite-files --no-overwrite-directories --no-overwrite-symlinks

If the '--shallow' option is provided, files and directories this depth and below in the directory hierarchy will be represented by compact placeholder files of the form 'entry.kopia-entry' instead of being restored. (I.e. setting '--shallow' to 0 will only shallow restore.) Snapshots created of directory contents represented by placeholder files will be identical to snapshots of the equivalent fully expanded tree.

In the expanding-a-placeholder mode:

The source to be restored is a pre-existing placeholder entry of the form 'entry.kopia-entry'. The '--shallow' option controls the depth of the expansion and defaults to 0. For example:
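
a command of roughly this shape (reconstructed sketch, assuming a placeholder 'd3.kopiadir' produced by an earlier shallow restore):

    kopia restore d3.kopiadir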

will remove the d3.kopiadir placeholder and restore the referenced repository contents into path d3 where the contents of the newly created path d3 will themselves be placeholder files.

Overwrite existing directories
Specifies whether or not to overwrite already existing files
Specifies whether or not to overwrite already existing symlinks
When doing a restore, attempt to write files sparsely, allocating the minimum amount of disk space needed.
When multiple snapshots match, fail if they have inconsistent attributes
Override restore mode
Restore parallelism (1=disable)
Skip owners during restore
Skip permissions during restore
Skip times during restore
Ignore permission errors
Write files atomically to disk, ensuring they are either fully committed, or not written at all, preventing partially written files
Ignore all errors
Skip files and symlinks that exist in the output
Shallow restore the directory hierarchy starting at this level (default is to deep restore the entire hierarchy.)
When doing a shallow restore, write actual files instead of placeholders smaller than this size.
When using a path as the source, use the latest snapshot available before this date. Default is latest

show <object-path>

Displays the contents of a repository object.

snapshot copy-history [<flags>] <source> [<destination>]

Performs a copy of the history of snapshots from another user or host. This command copies the snapshot manifests of the specified source to the respective destination. This is typically used when renaming a host, switching usernames, or moving directories around, in order to preserve snapshot history.

Both source and destination can be specified using user@host, @host or user@host:/path where destination values override the corresponding parts of the source, so both targeted and mass copy is supported.

Source              Destination          Behavior
---------------------------------------------------------------------------
@host1              @host2               copy snapshots from all users of host1
@host1              user2@host2          (disallowed as it would potentially collapse users)
@host1              user2@host2:/path2   (disallowed as it would potentially collapse paths)
user1@host1         @host2               copy all snapshots to user1@host2
user1@host1         user2@host2          copy all snapshots to user2@host2
user1@host1         user2@host2:/path2   (disallowed as it would potentially collapse paths)
user1@host1:/path1  @host2               copy to user1@host2:/path1
user1@host1:/path1  user2@host2          copy to user2@host2:/path1
user1@host1:/path1  user2@host2:/path2   copy snapshots from single path

Do not actually copy snapshots, only print what would happen
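
For instance, to carry snapshot history across a machine rename (host names below are hypothetical):

    kopia snapshot copy-history user1@old-host user1@new-host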

snapshot move-history [<flags>] <source> [<destination>]

Performs a move of the history of snapshots from another user or host. This command moves the snapshot manifests of the specified source to the respective destination. This is typically used when renaming a host, switching usernames, or moving directories around, in order to preserve snapshot history.

Both source and destination can be specified using user@host, @host or user@host:/path where destination values override the corresponding parts of the source, so both targeted and mass move is supported.

Source              Destination          Behavior
---------------------------------------------------------------------------
@host1              @host2               move snapshots from all users of host1
@host1              user2@host2          (disallowed as it would potentially collapse users)
@host1              user2@host2:/path2   (disallowed as it would potentially collapse paths)
user1@host1         @host2               move all snapshots to user1@host2
user1@host1         user2@host2          move all snapshots to user2@host2
user1@host1         user2@host2:/path2   (disallowed as it would potentially collapse paths)
user1@host1:/path1  @host2               move to user1@host2:/path1
user1@host1:/path1  user2@host2          move to user2@host2:/path1
user1@host1:/path1  user2@host2:/path2   move snapshots from single path

Do not actually copy snapshots, only print what would happen

snapshot create [<flags>] [<source>...]

Creates a snapshot of local directory or file.

Create snapshots for files or directories previously backed up by this user on this computer. Cannot be used when a source path argument is also specified.
Stop the backup process after the specified amount of data (in MB) has been uploaded.
Free-form snapshot description.
Fail fast when creating snapshot.
Force hashing of source files for a given percentage of files [0.0 .. 100.0]
Upload N files in parallel
Override snapshot start timestamp.
Override snapshot end timestamp.
File path to be used for stdin data snapshot.
Tags applied on the snapshot. Must be provided in the <key>:<value> format.
Create a pinned snapshot that will not expire automatically
Override the source of the snapshot.
Send a snapshot report notification using configured notification profiles
Override log level for directories
Override log level for entries
Output result in JSON format to stdout
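
A typical invocation, shown as an illustrative sketch (the path and tag values are hypothetical):

    kopia snapshot create /home/user --tags env:prod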

snapshot delete [<flags>] <id>...

Explicitly delete a snapshot by providing a snapshot ID.

Delete all snapshots for a source
Confirm deletion

snapshot estimate [<flags>] <source>

Estimate the snapshot size and upload time.

Show files
Do not display scanning progress
Upload speed to use for estimation
Max examples per bucket

snapshot expire [<flags>] [<path>...]

Remove old snapshots according to defined expiration policies.

Expire all snapshots
Whether to actually delete snapshots

snapshot fix invalid-files [<flags>]

Remove references to any invalid (unreadable) files from snapshots.

Manifest IDs
Source to target (username@hostname:/path)
Update snapshot manifests
Parallelism
Handling of invalid directories
How to handle invalid files
Verify a percentage of files by fully downloading them [0.0 .. 100.0]

snapshot fix remove-files [<flags>]

Remove references to the specified files from snapshots.

Manifest IDs
Source to target (username@hostname:/path)
Update snapshot manifests
Parallelism
Handling of invalid directories
Remove files by their object ID
Remove files by filename (wildcards are supported)

snapshot list [<flags>] [<source>]

List snapshots of files and directories.

Include incomplete.
Show human-readable units
Include deltas.
Include manifest item ID.
Include retention reasons.
Include file mod time
Include owner
Show identical snapshots
Compute and show storage statistics
Reverse sort order
Show all snapshots (not just current username/host)
Maximum number of entries per source.
Tag filters to apply on the list items. Must be provided in the <key>:<value> format.
Output result in JSON format to stdout

snapshot migrate --source-config=SOURCE-CONFIG [<flags>]

Migrate snapshots from another repository

Configuration file for the source repository
List of sources to migrate
Migrate all sources
Migrate policies too
Overwrite policies
Only migrate the latest snapshot
Number of sources to migrate in parallel
When migrating also apply current ignore rules

snapshot pin [<flags>] <id>...

Add or remove pins preventing snapshot deletion

Add pins
Remove pins

snapshot restore [<flags>] <sources>...

Restore a directory or a file.

Restore can operate in two modes:

* from a snapshot: restoring (possibly shallowly) a specified file or directory from a snapshot into a target path. By default, the target path will be created by the restore command if it does not exist.

* by expanding a shallow placeholder in situ where the placeholder was created by a previous restore.

In the from-snapshot mode:

The source to be restored is specified in the form of a directory or file ID and optionally a sub-directory path.

For example, the following source and target arguments will restore the contents of the 'kffbb7c28ea6c34d6cbe555d1cf80faa9' directory into a new, local directory named 'd1'

Similarly, the following command will restore the contents of a subdirectory named 'sd2'

When restoring to a target path that already has existing data, by default the restore will attempt to overwrite, unless one or more of the following flags has been set (to prevent overwrite of each type):

--no-overwrite-files --no-overwrite-directories --no-overwrite-symlinks

If the '--shallow' option is provided, files and directories this depth and below in the directory hierarchy will be represented by compact placeholder files of the form 'entry.kopia-entry' instead of being restored. (I.e. setting '--shallow' to 0 will only shallow restore.) Snapshots created of directory contents represented by placeholder files will be identical to snapshots of the equivalent fully expanded tree.

In the expanding-a-placeholder mode:

The source to be restored is a pre-existing placeholder entry of the form 'entry.kopia-entry'. The '--shallow' option controls the depth of the expansion and defaults to 0. For example:

will remove the d3.kopiadir placeholder and restore the referenced repository contents into path d3 where the contents of the newly created path d3 will themselves be placeholder files.

Overwrite existing directories
Specifies whether or not to overwrite already existing files
Specifies whether or not to overwrite already existing symlinks
When doing a restore, attempt to write files sparsely, allocating the minimum amount of disk space needed.
When multiple snapshots match, fail if they have inconsistent attributes
Override restore mode
Restore parallelism (1=disable)
Skip owners during restore
Skip permissions during restore
Skip times during restore
Ignore permission errors
Write files atomically to disk, ensuring they are either fully committed, or not written at all, preventing partially written files
Ignore all errors
Skip files and symlinks that exist in the output
Shallow restore the directory hierarchy starting at this level (default is to deep restore the entire hierarchy.)
When doing a shallow restore, write actual files instead of placeholders smaller than this size.
When using a path as the source, use the latest snapshot available before this date. Default is latest

snapshot verify [<flags>] [<snapshot-ids>...]

Verify the contents of stored snapshots

Maximum number of errors before stopping
Directory object IDs to verify
File object IDs to verify
Verify the provided sources
Parallelization
Queue length for file verification
Parallelism for file verification
Randomly verify a percentage of files by downloading them [0.0 .. 100.0]

manifest delete <item>...

Remove manifest items

manifest list [<flags>]

List manifest items

List of key:value pairs
List of keys to sort by
Output result in JSON format to stdout

manifest show <item>...

Show manifest items

policy edit [<flags>] [<target>...]

Edit policy.

Select the global policy.

policy list [<flags>]

List policies.

Output result in JSON format to stdout

policy delete [<flags>] [<target>...]

Remove policy.

Select the global policy.
Do not remove

policy set [<flags>] [<target>...]

Set policy.

Select the global policy.
Enable or disable inheriting policies from the parent
Path to before-folder action command ('none' to remove)
Path to after-folder action command ('none' to remove)
Path to before-snapshot-root action command ('none' to remove or 'inherit')
Path to after-snapshot-root action command ('none' to remove or 'inherit')
Max time allowed for an action to run in seconds
Action command mode
Persist action script
Compression algorithm
Min size of file to attempt compression for
Max size of file to attempt compression for
List of extensions to add to the only-compress list
List of extensions to remove from the only-compress list
Clear list of extensions in the only-compress list
List of extensions to add to the never compress list
List of extensions to remove from the never compress list
Clear list of extensions in the never compress list
Metadata Compression algorithm
Splitter algorithm override
Ignore errors reading files while traversing ('true', 'false', 'inherit')
Ignore errors reading directories while traversing ('true', 'false', 'inherit')
Ignore unknown entry types in directories ('true', 'false', 'inherit')
List of paths to add to the ignore list
List of paths to remove from the ignore list
Clear list of paths in the ignore list
List of paths to add to the dot-ignore list
List of paths to remove from the dot-ignore list
Clear list of paths in the dot-ignore list
Exclude files above given size
Stay in parent filesystem when finding files ('true', 'false', 'inherit')
Ignore cache directories ('true', 'false', 'inherit')
Log detail when a directory is snapshotted (or 'inherit')
Log detail when a directory is ignored (or 'inherit')
Log detail when an entry is snapshotted (or 'inherit')
Log detail when an entry is ignored (or 'inherit')
Log detail on entry cache hit (or 'inherit')
Log detail on entry cache miss (or 'inherit')
Number of most recent backups to keep per source (or 'inherit')
Number of most-recent hourly backups to keep per source (or 'inherit')
Number of most-recent daily backups to keep per source (or 'inherit')
Number of most-recent weekly backups to keep per source (or 'inherit')
Number of most-recent monthly backups to keep per source (or 'inherit')
Number of most-recent annual backups to keep per source (or 'inherit')
Do not save identical snapshots (or 'inherit')
Interval between snapshots
Comma-separated times of day when to take snapshot (HH:mm,HH:mm,...) or 'inherit' to remove override
Semicolon-separated crontab-compatible expressions (or 'inherit')
Run missed time-of-day or cron snapshots ('true', 'false', 'inherit')
Only create snapshots manually
Enable Volume Shadow Copy snapshots ('never', 'always', 'when-available', 'inherit')
Maximum number of parallel file reads
Maximum number of parallel snapshots (server, KopiaUI only)
Use parallel uploads above size
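
An illustrative sketch combining a few of the options above (the path and values are hypothetical):

    kopia policy set /home/user --compression=zstd --keep-daily=7 --add-ignore=.cache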

policy show [<flags>] [<target>...]

Show snapshot policy.

Select the global policy.
Output result in JSON format to stdout

policy export [<flags>] [<target>...]

Exports the policies to the specified file, or to stdout if none is specified.

File path to export to
Overwrite the file if it exists
Select the global policy.

policy import [<flags>] [<target>...]

Imports policies from a specified file, or stdin if no file is specified.

File path to import from
Allow unknown fields in the policy file
Delete all other policies, keeping only those that got imported
Select the global policy.

mount [<flags>] [<path>] [<mountPoint>]

Mount repository object as a local filesystem.

Open file browser
Trace filesystem operations
Allows other users to access the file system.
Allows mounting over a non-empty directory. The files in it will be shadowed by the freshly created mount.
Use WebDAV to mount the repository object regardless of fuse availability.
Limit the number of cached directory entries
Limit the number of cached directories
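
For example, to mount all snapshots (the mount point is hypothetical):

    kopia mount all /mnt/kopia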

maintenance info [<flags>]

Display maintenance information

Output result in JSON format to stdout

maintenance run [<flags>]

Run repository maintenance

Full maintenance
Safety level

maintenance set [<flags>]

Set maintenance parameters

Set maintenance owner user@hostname
Enable or disable quick maintenance
Enable or disable full maintenance
Set quick maintenance interval
Set full maintenance interval
Pause quick maintenance for a specified duration
Pause full maintenance for a specified duration
Set maximum number of log sessions to retain
Set maximum age of log sessions to retain
Set maximum total size of log sessions
Extend retention period of locked objects as part of full maintenance.
Override list parallelism.

repository connect server --url=URL [<flags>]

Connect to a repository API Server.

Server URL
Server certificate fingerprint

repository connect from-config [<flags>]

Connect to repository in the provided configuration file

Path to the configuration file
Configuration token
Path to the configuration token file
Read configuration token from stdin

repository connect azure --container=CONTAINER --storage-account=STORAGE-ACCOUNT [<flags>]

Connect to repository in an Azure blob storage

Name of the Azure blob container
Azure storage account name (overrides AZURE_STORAGE_ACCOUNT environment variable)
Azure storage account key (overrides AZURE_STORAGE_KEY environment variable)
Azure storage domain
Azure SAS Token
Prefix to use for objects in the bucket
Azure service principal tenant ID (overrides AZURE_TENANT_ID environment variable)
Azure service principal client ID (overrides AZURE_CLIENT_ID environment variable)
Azure service principal client secret (overrides AZURE_CLIENT_SECRET environment variable)
Azure client certificate (overrides AZURE_CLIENT_CERT environment variable)
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository connect b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>]

Connect to repository in a B2 bucket

Name of the B2 bucket
Key ID (overrides B2_KEY_ID environment variable)
Secret key (overrides B2_KEY environment variable)
Prefix to use for objects in the bucket
Limit the download speed.
Limit the upload speed.

repository connect filesystem --path=PATH [<flags>]

Connect to repository in a filesystem

Path to the repository
User ID owning newly created files
Group ID owning newly created files
File mode for newly created files (0600)
File mode for newly created directories (0700)
Use flat directory structure
Limit the download speed.
Limit the upload speed.
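
For a repository on a locally mounted path, the connect invocation is a single required flag (the path is a placeholder):

```shell
# Connect to a repository stored on a locally mounted filesystem.
kopia repository connect filesystem --path=/mnt/backup/kopia
```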

repository connect gcs --bucket=BUCKET [<flags>]

Connect to repository in a Google Cloud Storage bucket

Name of the Google Cloud Storage bucket
Prefix to use for objects in the bucket
Use read-only GCS scope to prevent write access
Use the provided JSON file with credentials
Embed GCS credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository connect gdrive --folder-id=FOLDER-ID [<flags>]

Connect to repository in a Google Drive folder

Folder ID to use for objects in the Google Drive folder
Use read-only scope to prevent write access
Use the provided JSON file with credentials
Embed credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.

repository connect rclone --remote-path=REMOTE-PATH [<flags>]

Connect to repository in an rclone-based provider

Rclone remote:path
Use flat directory structure
Path to rclone binary
Pass additional parameters to rclone
Pass additional environment (key=value) to rclone
Embed the provider RClone config
Assume provider writes are atomic
Time in seconds to wait for rclone to start
Limit the download speed.
Limit the upload speed.

repository connect s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-access-key=SECRET-ACCESS-KEY [<flags>]

Connect to repository in an S3 bucket

Name of the S3 bucket
Endpoint to use
S3 Region
Access key ID (overrides AWS_ACCESS_KEY_ID environment variable)
Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable)
Session token (overrides AWS_SESSION_TOKEN environment variable)
Prefix to use for objects in the bucket. Add a trailing slash (/) to use the prefix as a directory, e.g. my-backup-dir/ places repository contents inside the my-backup-dir directory
Disable TLS security (HTTPS)
Disable TLS (HTTPS) certificate verification
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported
Certificate authority in-line (base64 enc.)
Certificate authority file path
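
A minimal S3 connect invocation, using only the flags from the synopsis, might look like this; the bucket name and keys are placeholders, and as noted above the credentials may instead come from AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY:

```shell
# Bucket name and credentials below are placeholders.
kopia repository connect s3 \
    --bucket=my-kopia-bucket \
    --access-key=AKIA... \
    --secret-access-key=...
```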

repository connect sftp --path=PATH --host=HOST --username=USERNAME [<flags>]

Connect to repository in an SFTP storage

Path to the repository in the SFTP/SSH server
SFTP/SSH server hostname
SFTP/SSH server port
SFTP/SSH server username
SFTP/SSH server password
Path to the private key file for the SFTP/SSH server
Private key data
Path to the known_hosts file
known_hosts file entries
Embed key and known_hosts in Kopia configuration
Launch external passwordless SSH command
SSH command
Arguments to external SSH command
Use flat directory structure
Limit the download speed.
Limit the upload speed.
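
An SFTP connection using the required synopsis flags might be sketched as follows; host, user and path are placeholders, and the private-key flag is assumed to be --keyfile as in current Kopia releases:

```shell
# Host, username and paths are illustrative placeholders.
kopia repository connect sftp \
    --path=/home/backup/kopia \
    --host=backup.example.com \
    --username=backup \
    --keyfile=~/.ssh/id_ed25519
```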

repository connect webdav --url=URL [<flags>]

Connect to repository in a WebDAV storage

URL of WebDAV server
Use flat directory structure
WebDAV username
WebDAV password
Assume WebDAV provider implements atomic writes
Limit the download speed.
Limit the upload speed.
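
A WebDAV connection might look like the following sketch; the URL and credentials are placeholders, and the credential flags are assumed to be --webdav-username and --webdav-password as in current Kopia releases:

```shell
# URL and credentials below are placeholders.
kopia repository connect webdav \
    --url=https://dav.example.com/kopia \
    --webdav-username=backup \
    --webdav-password=secret
```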

repository create from-config [<flags>]

Create repository in the provided configuration file

Path to the configuration file
Configuration token
Path to the configuration token file
Read configuration token from stdin

repository create azure --container=CONTAINER --storage-account=STORAGE-ACCOUNT [<flags>]

Create repository in an Azure blob storage

Name of the Azure blob container
Azure storage account name (overrides AZURE_STORAGE_ACCOUNT environment variable)
Azure storage account key (overrides AZURE_STORAGE_KEY environment variable)
Azure storage domain
Azure SAS Token
Prefix to use for objects in the bucket
Azure service principal tenant ID (overrides AZURE_TENANT_ID environment variable)
Azure service principal client ID (overrides AZURE_CLIENT_ID environment variable)
Azure service principal client secret (overrides AZURE_CLIENT_SECRET environment variable)
Azure client certificate (overrides AZURE_CLIENT_CERT environment variable)
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository create b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>]

Create repository in a B2 bucket

Name of the B2 bucket
Key ID (overrides B2_KEY_ID environment variable)
Secret key (overrides B2_KEY environment variable)
Prefix to use for objects in the bucket
Limit the download speed.
Limit the upload speed.

repository create filesystem --path=PATH [<flags>]

Create repository in a filesystem

Path to the repository
User ID owning newly created files
Group ID owning newly created files
File mode for newly created files (0600)
File mode for newly created directories (0700)
Use flat directory structure
Limit the download speed.
Limit the upload speed.
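
Creating a new repository on local disk needs only the required path flag; Kopia prompts for the repository password interactively unless it is supplied via the global password flag listed at the top of this page (or, commonly, the KOPIA_PASSWORD environment variable):

```shell
# Create a new repository on a locally mounted path (placeholder).
kopia repository create filesystem --path=/mnt/backup/kopia
```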

repository create gcs --bucket=BUCKET [<flags>]

Create repository in a Google Cloud Storage bucket

Name of the Google Cloud Storage bucket
Prefix to use for objects in the bucket
Use read-only GCS scope to prevent write access
Use the provided JSON file with credentials
Embed GCS credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository create gdrive --folder-id=FOLDER-ID [<flags>]

Create repository in a Google Drive folder

Folder ID to use for objects in the Google Drive folder
Use read-only scope to prevent write access
Use the provided JSON file with credentials
Embed credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.

repository create rclone --remote-path=REMOTE-PATH [<flags>]

Create repository in an rclone-based provider

Rclone remote:path
Use flat directory structure
Path to rclone binary
Pass additional parameters to rclone
Pass additional environment (key=value) to rclone
Embed the provider RClone config
Assume provider writes are atomic
Time in seconds to wait for rclone to start
Limit the download speed.
Limit the upload speed.

repository create s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-access-key=SECRET-ACCESS-KEY [<flags>]

Create repository in an S3 bucket

Name of the S3 bucket
Endpoint to use
S3 Region
Access key ID (overrides AWS_ACCESS_KEY_ID environment variable)
Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable)
Session token (overrides AWS_SESSION_TOKEN environment variable)
Prefix to use for objects in the bucket. Add a trailing slash (/) to use the prefix as a directory, e.g. my-backup-dir/ places repository contents inside the my-backup-dir directory
Disable TLS security (HTTPS)
Disable TLS (HTTPS) certificate verification
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported
Certificate authority in-line (base64 enc.)
Certificate authority file path

repository create sftp --path=PATH --host=HOST --username=USERNAME [<flags>]

Create repository in an SFTP storage

Path to the repository in the SFTP/SSH server
SFTP/SSH server hostname
SFTP/SSH server port
SFTP/SSH server username
SFTP/SSH server password
Path to the private key file for the SFTP/SSH server
Private key data
Path to the known_hosts file
known_hosts file entries
Embed key and known_hosts in Kopia configuration
Launch external passwordless SSH command
SSH command
Arguments to external SSH command
Use flat directory structure
Limit the download speed.
Limit the upload speed.

repository create webdav --url=URL [<flags>]

Create repository in a WebDAV storage

URL of WebDAV server
Use flat directory structure
WebDAV username
WebDAV password
Assume WebDAV provider implements atomic writes
Limit the download speed.
Limit the upload speed.

repository disconnect

Disconnect from a repository.

repository repair from-config [<flags>]

Repair repository in the provided configuration file

Path to the configuration file
Configuration token
Path to the configuration token file
Read configuration token from stdin

repository repair azure --container=CONTAINER --storage-account=STORAGE-ACCOUNT [<flags>]

Repair repository in an Azure blob storage

Name of the Azure blob container
Azure storage account name (overrides AZURE_STORAGE_ACCOUNT environment variable)
Azure storage account key (overrides AZURE_STORAGE_KEY environment variable)
Azure storage domain
Azure SAS Token
Prefix to use for objects in the bucket
Azure service principal tenant ID (overrides AZURE_TENANT_ID environment variable)
Azure service principal client ID (overrides AZURE_CLIENT_ID environment variable)
Azure service principal client secret (overrides AZURE_CLIENT_SECRET environment variable)
Azure client certificate (overrides AZURE_CLIENT_CERT environment variable)
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository repair b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>]

Repair repository in a B2 bucket

Name of the B2 bucket
Key ID (overrides B2_KEY_ID environment variable)
Secret key (overrides B2_KEY environment variable)
Prefix to use for objects in the bucket
Limit the download speed.
Limit the upload speed.

repository repair filesystem --path=PATH [<flags>]

Repair repository in a filesystem

Path to the repository
User ID owning newly created files
Group ID owning newly created files
File mode for newly created files (0600)
File mode for newly created directories (0700)
Use flat directory structure
Limit the download speed.
Limit the upload speed.

repository repair gcs --bucket=BUCKET [<flags>]

Repair repository in a Google Cloud Storage bucket

Name of the Google Cloud Storage bucket
Prefix to use for objects in the bucket
Use read-only GCS scope to prevent write access
Use the provided JSON file with credentials
Embed GCS credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository repair gdrive --folder-id=FOLDER-ID [<flags>]

Repair repository in a Google Drive folder

Folder ID to use for objects in the Google Drive folder
Use read-only scope to prevent write access
Use the provided JSON file with credentials
Embed credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.

repository repair rclone --remote-path=REMOTE-PATH [<flags>]

Repair repository in an rclone-based provider

Rclone remote:path
Use flat directory structure
Path to rclone binary
Pass additional parameters to rclone
Pass additional environment (key=value) to rclone
Embed the provider RClone config
Assume provider writes are atomic
Time in seconds to wait for rclone to start
Limit the download speed.
Limit the upload speed.

repository repair s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-access-key=SECRET-ACCESS-KEY [<flags>]

Repair repository in an S3 bucket

Name of the S3 bucket
Endpoint to use
S3 Region
Access key ID (overrides AWS_ACCESS_KEY_ID environment variable)
Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable)
Session token (overrides AWS_SESSION_TOKEN environment variable)
Prefix to use for objects in the bucket. Add a trailing slash (/) to use the prefix as a directory, e.g. my-backup-dir/ places repository contents inside the my-backup-dir directory
Disable TLS security (HTTPS)
Disable TLS (HTTPS) certificate verification
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported
Certificate authority in-line (base64 enc.)
Certificate authority file path

repository repair sftp --path=PATH --host=HOST --username=USERNAME [<flags>]

Repair repository in an SFTP storage

Path to the repository in the SFTP/SSH server
SFTP/SSH server hostname
SFTP/SSH server port
SFTP/SSH server username
SFTP/SSH server password
Path to the private key file for the SFTP/SSH server
Private key data
Path to the known_hosts file
known_hosts file entries
Embed key and known_hosts in Kopia configuration
Launch external passwordless SSH command
SSH command
Arguments to external SSH command
Use flat directory structure
Limit the download speed.
Limit the upload speed.

repository repair webdav --url=URL [<flags>]

Repair repository in a WebDAV storage

URL of WebDAV server
Use flat directory structure
WebDAV username
WebDAV password
Assume WebDAV provider implements atomic writes
Limit the download speed.
Limit the upload speed.

repository set-client [<flags>]

Set repository client options.

Set repository to read-only
Set repository to read-write
Change description
Change username
Change hostname
Duration of kopia.repository format blob cache
Disable caching of kopia.repository format blob

repository set-parameters [<flags>]

Set repository parameters.

Set max pack file size
Set version of index format used for writing
Set the blob retention-mode for supported storage backends.
Set the blob retention-period for supported storage backends.
Upgrade repository to the latest stable format
Epoch refresh frequency
Minimal duration of a single epoch
Epoch cleanup safety margin
Advance epoch if the number of indexes exceeds given threshold
Advance epoch if the total size of indexes exceeds given threshold
Epoch delete parallelism
Checkpoint frequency

repository status [<flags>]

Display the status of the connected repository.

Display reconnect command
Include password in reconnect token
Output result in JSON format to stdout
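
For example, to inspect the current connection in machine-readable form; --json is assumed to be the flag behind "Output result in JSON format to stdout":

```shell
# Print the status of the currently connected repository as JSON.
kopia repository status --json
```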

repository sync-to from-config [<flags>]

Synchronize repository data to another repository in the provided configuration file

Path to the configuration file
Configuration token
Path to the configuration token file
Read configuration token from stdin

repository sync-to azure --container=CONTAINER --storage-account=STORAGE-ACCOUNT [<flags>]

Synchronize repository data to another repository in an Azure blob storage

Name of the Azure blob container
Azure storage account name (overrides AZURE_STORAGE_ACCOUNT environment variable)
Azure storage account key (overrides AZURE_STORAGE_KEY environment variable)
Azure storage domain
Azure SAS Token
Prefix to use for objects in the bucket
Azure service principal tenant ID (overrides AZURE_TENANT_ID environment variable)
Azure service principal client ID (overrides AZURE_CLIENT_ID environment variable)
Azure service principal client secret (overrides AZURE_CLIENT_SECRET environment variable)
Azure client certificate (overrides AZURE_CLIENT_CERT environment variable)
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository sync-to b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>]

Synchronize repository data to another repository in a B2 bucket

Name of the B2 bucket
Key ID (overrides B2_KEY_ID environment variable)
Secret key (overrides B2_KEY environment variable)
Prefix to use for objects in the bucket
Limit the download speed.
Limit the upload speed.

repository sync-to filesystem --path=PATH [<flags>]

Synchronize repository data to another repository in a filesystem

Path to the repository
User ID owning newly created files
Group ID owning newly created files
File mode for newly created files (0600)
File mode for newly created directories (0700)
Use flat directory structure
Limit the download speed.
Limit the upload speed.
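
As a sketch, replicating the currently connected repository to a local path (for example an external drive; the path is a placeholder):

```shell
# Synchronize the connected repository to a local replica directory.
kopia repository sync-to filesystem --path=/mnt/external/kopia-replica
```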

repository sync-to gcs --bucket=BUCKET [<flags>]

Synchronize repository data to another repository in a Google Cloud Storage bucket

Name of the Google Cloud Storage bucket
Prefix to use for objects in the bucket
Use read-only GCS scope to prevent write access
Use the provided JSON file with credentials
Embed GCS credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported

repository sync-to gdrive --folder-id=FOLDER-ID [<flags>]

Synchronize repository data to another repository in a Google Drive folder

Folder ID to use for objects in the Google Drive folder
Use read-only scope to prevent write access
Use the provided JSON file with credentials
Embed credentials JSON in Kopia configuration
Limit the download speed.
Limit the upload speed.

repository sync-to rclone --remote-path=REMOTE-PATH [<flags>]

Synchronize repository data to another repository in an rclone-based provider

Rclone remote:path
Use flat directory structure
Path to rclone binary
Pass additional parameters to rclone
Pass additional environment (key=value) to rclone
Embed the provider RClone config
Assume provider writes are atomic
Time in seconds to wait for rclone to start
Limit the download speed.
Limit the upload speed.

repository sync-to s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-access-key=SECRET-ACCESS-KEY [<flags>]

Synchronize repository data to another repository in an S3 bucket

Name of the S3 bucket
Endpoint to use
S3 Region
Access key ID (overrides AWS_ACCESS_KEY_ID environment variable)
Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable)
Session token (overrides AWS_SESSION_TOKEN environment variable)
Prefix to use for objects in the bucket. Add a trailing slash (/) to use the prefix as a directory, e.g. my-backup-dir/ places repository contents inside the my-backup-dir directory
Disable TLS security (HTTPS)
Disable TLS (HTTPS) certificate verification
Limit the download speed.
Limit the upload speed.
Use a point-in-time view of the storage repository when supported
Certificate authority in-line (base64 enc.)
Certificate authority file path

repository sync-to sftp --path=PATH --host=HOST --username=USERNAME [<flags>]

Synchronize repository data to another repository in an SFTP storage

Path to the repository in the SFTP/SSH server
SFTP/SSH server hostname
SFTP/SSH server port
SFTP/SSH server username
SFTP/SSH server password
Path to the private key file for the SFTP/SSH server
Private key data
Path to the known_hosts file
known_hosts file entries
Embed key and known_hosts in Kopia configuration
Launch external passwordless SSH command
SSH command
Arguments to external SSH command
Use flat directory structure
Limit the download speed.
Limit the upload speed.

repository sync-to webdav --url=URL [<flags>]

Synchronize repository data to another repository in a WebDAV storage

URL of WebDAV server
Use flat directory structure
WebDAV username
WebDAV password
Assume WebDAV provider implements atomic writes
Limit the download speed.
Limit the upload speed.

repository throttle get [<flags>]

Get throttling parameters for a repository

Output result in JSON format to stdout

repository throttle set [<flags>]

Set throttling parameters for a repository

Set the download bytes per second
Set the upload bytes per second
Set max reads per second
Set max writes per second
Set max lists per second
Set max concurrent reads
Set max concurrent writes

repository change-password [<flags>]

Change repository password

New password

repository validate-provider [<flags>]

Validate that a repository provider is compatible with Kopia

Number of storage connections
Duration of concurrency test
Number of PutBlob workers
Number of GetBlob workers
Number of GetMetadata workers

repository upgrade begin [<flags>]

Begin upgrade.

Max time it should take all other Kopia clients to drop repository connections
An advisory polling interval to check for the status of upgrade
The maximum drift between repository and client clocks

repository upgrade rollback [<flags>]

Rollback the repository upgrade.

Force rollback of the repository upgrade; this action can cause repository corruption

repository upgrade validate

Validate the upgraded indexes.
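
A typical upgrade flow, sketched from the three subcommands above: begin the upgrade, then validate the upgraded indexes; rollback is only for recovering from a failed upgrade and carries corruption risk when forced:

```shell
# Upgrade the connected repository format, then validate the result.
kopia repository upgrade begin
kopia repository upgrade validate
```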

0.20.1 build:
