Manual Reference Pages  -  S3FS (1)

NAME

S3FS - FUSE-based file system backed by Amazon S3

CONTENTS

Synopsis
Description
Authentication
Options
Fuse/mount Options
Notes
Bugs
Author

SYNOPSIS

    mounting

s3fs bucket[:/path] mountpoint [options]
 

    unmounting

umount mountpoint
 

    utility mode (remove interrupted multipart upload objects)

s3fs -u bucket
 

DESCRIPTION

s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., you can use other programs to access the same files).

AUTHENTICATION

The s3fs password file has this format (use this format if you have only one set of credentials):
accessKeyId:secretAccessKey

If you have more than one set of credentials, this syntax is also recognized:

bucketName:accessKeyId:secretAccessKey

Password files can be stored in two locations:


/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
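For example, a per-user password file can be created as follows (the key pair shown is the illustrative example pair from AWS documentation, not a real credential); s3fs refuses password files with looser permissions:

```shell
# Create a per-user s3fs password file in accessKeyId:secretAccessKey
# form. The key pair below is AWS's documented example, not a real one.
printf '%s\n' 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > "$HOME/.passwd-s3fs"

# s3fs requires restrictive permissions (0600) on a per-user file.
chmod 600 "$HOME/.passwd-s3fs"
```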

OPTIONS

    general options

-h --help
  print help
--version
  print version
-f FUSE foreground option - do not run as daemon.
-s FUSE singlethreaded option (disables multi-threaded operation)

    mount options

All s3fs options must be given in the form -o opt, where "opt" is:
  <option_name>=<option_value>
-o default_acl (default="private")
  the default canned ACL to apply to all written S3 objects, e.g. "public-read". Any created file will have this canned ACL; any updated file will also have it applied!
-o prefix (default="") (coming soon!)
  a prefix to prepend to all S3 objects.
-o retries (default="2")
  number of times to retry a failed S3 transaction.
-o use_cache (default="" which means disabled)
  local folder to use for local file cache.
-o del_cache - delete local file cache
  delete local file cache when s3fs starts and exits.
-o use_rrs (default is disable)
  use Amazon’s Reduced Redundancy Storage. This option cannot be specified together with use_sse. (Older versions accept use_rrs=1.)
-o use_sse (default is disable)
  use Amazon’s Server-Side Encryption, or Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C). This option cannot be specified together with use_rrs. Specifying only "use_sse" or "use_sse=1" enables standard Server-Side Encryption (older versions accept use_sse=1). Specifying the option with a file path (use_sse=file) enables SSE-C; the file must have 0600 permissions and contains one SSE-C key per line. The first line is the key used for uploads and for header changes; any keys on later lines are used to download objects that were encrypted with an earlier key. The file can therefore hold all of your SSE-C keys as a key history. If the AWSSSECKEYS environment variable is set, the SSE-C keys can be taken from it instead of this option.
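As a sketch of the SSE-C key file described above (the file name is illustrative), one key per line can be generated with openssl; the first line becomes the upload key:

```shell
# Generate a random 256-bit key, base64-encoded, as the first (upload)
# key of an SSE-C key file. Appending further lines keeps older keys
# available for downloading objects encrypted with them.
openssl rand -base64 32 > passwd-s3fs-sse.key
chmod 600 passwd-s3fs-sse.key
```

The file would then be passed to s3fs as use_sse=passwd-s3fs-sse.key, per the description above.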
-o passwd_file (default="")
  specify the path to the password file, which takes precedence over the passwords in $HOME/.passwd-s3fs and /etc/passwd-s3fs
-o ahbe_conf (default="" which means disabled)
  specify the path to a configuration file that defines additional HTTP headers to send, selected by file (object) extension.
The configuration file format is:
-----------
line = [file suffix] HTTP-header [HTTP-values]
file suffix = file (object) suffix; if this field is empty, it matches "*" (all objects).
HTTP-header = additional HTTP header name
HTTP-values = additional HTTP header value
-----------
Sample:
-----------
.gz Content-Encoding gzip
.Z Content-Encoding compress
X-S3FS-MYHTTPHEAD myvalue
-----------
A sample configuration file is provided in the "test" directory. If you use this option to set the "Content-Encoding" HTTP header, take care to comply with RFC 2616.
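For instance, the sample above could be written to a file (the path is illustrative) and passed via ahbe_conf:

```shell
# Write an ahbe_conf file mapping object suffixes to extra HTTP headers.
cat > /tmp/ahbe.conf <<'EOF'
.gz Content-Encoding gzip
.Z Content-Encoding compress
X-S3FS-MYHTTPHEAD myvalue
EOF

# It would then be used as, e.g.:
#   s3fs mybucket /mnt/s3 -o ahbe_conf=/tmp/ahbe.conf
```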
-o public_bucket (default="" which means disabled)
  anonymously mount a public bucket when set to 1; the $HOME/.passwd-s3fs and /etc/passwd-s3fs files are ignored.
-o connect_timeout (default="10" seconds)
  time to wait for connection before giving up.
-o readwrite_timeout (default="30" seconds)
  time to wait between read/write activity before giving up.
-o max_stat_cache_size (default="1000" entries (about 4MB))
  maximum number of entries in the stat cache
-o stat_cache_expire (default is no expire)
  specify the expiry time (in seconds) for entries in the stat cache
-o enable_noobj_cache (default is disable)
  cache entries for objects that do not exist. Because s3fs recognizes directories that do not exist as objects but still contain files or subdirectories, it must check whether a file (or subdirectory) exists under a path whenever it handles a command. This increases the number of ListBucket requests and hurts performance. With this option, s3fs remembers in the stat cache that an object (file or directory) does not exist.
-o nodnscache - disable the DNS cache.
  s3fs caches DNS lookups by default; this option disables that cache.
-o nosscache - disable the SSL session cache.
  s3fs caches SSL sessions by default; this option disables that cache.
-o multireq_max (default="20")
  maximum number of parallel requests for listing objects.
-o parallel_count (default="5")
  number of parallel requests for uploading large objects. s3fs uploads large objects (by default, over 20MB) with multipart POST requests sent in parallel. This option limits the number of requests s3fs issues at once; set it according to your CPU and network bandwidth. It is related to the fd_page_size option and affects its value.
-o fd_page_size (default="52428800" (50MB))
  size of the internal management pages for each file descriptor. For delayed reading and writing, s3fs manages each object as separate pages, and each page has a status indicating whether its data has been loaded yet. Do not change this option unless you have a performance problem; its value is derived automatically from the parallel_count and multipart_size values (fd_page_size = parallel_count * multipart_size).
-o multipart_size (default="10" (10MB))
  size of each part in a multipart upload request. The default of 10MB (10485760 bytes) is also the minimum; specify a size in MB of 10 or more. This option is related to the fd_page_size option and affects its value.
-o url (default="http://s3.amazonaws.com")
  sets the url to use to access Amazon S3. If you want to use HTTPS, then you can set url=https://s3.amazonaws.com
-o nomultipart - disable multipart uploads
-o enable_content_md5 (default is disable)
  verify data uploaded without multipart by sending a "Content-MD5" header when uploading an object without multipart posting. Enabling this slightly affects performance when uploading small objects. Because s3fs always checks MD5 when uploading large objects, this option has no effect on them.
-o iam_role ( default is no role )
  set the IAM Role that will supply the credentials from the instance meta-data.
-o noxmlns - disable registering an XML namespace.
  disable registering an XML namespace for responses such as ListBucketResult and ListVersionsResult. The default namespace is looked up from "http://s3.amazonaws.com/doc/2006-03-01". This option should no longer be needed, because s3fs looks up the xmlns automatically since v1.66.
-o nocopyapi - for object storage with incomplete S3 compatibility.
  For distributed object storage that is S3 API compatible but lacks PUT with copy (the copy API). With this option, s3fs does not use PUT with "x-amz-copy-source" (the copy API). Because this option increases traffic 2-3 times, we do not recommend it.
-o norenameapi - for object storage with incomplete S3 compatibility.
  A subset of the nocopyapi option: where nocopyapi avoids the copy API for all commands (e.g. chmod, chown, touch, mv, etc.), this option avoids it only for the rename command (e.g. mv). If this option is specified together with nocopyapi, s3fs ignores it.
-o use_path_request_style (use legacy API calling style)
  enable compatibility with S3-like APIs that do not support the virtual-host request style, by using the older path request style.

FUSE/MOUNT OPTIONS

Most of the generic mount options described in ’man mount’ are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync, async, dirsync). Filesystems are mounted with ’-onodev,nosuid’ by default, which can only be overridden by a privileged user.
There are many FUSE specific mount options that can be specified. e.g. allow_other. See the FUSE README for the full set.
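FUSE and generic mount options can also be combined in an /etc/fstab entry; a sketch (bucket name, mountpoint, and the s3fs#bucket source syntax are illustrative assumptions based on common FUSE fstab usage, not taken from this page):

```
# /etc/fstab entry (illustrative): mount at boot, read-only, with
# allow_other so non-root users can access the files.
s3fs#mybucket /mnt/s3 fuse _netdev,allow_other,ro 0 0
```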
 

NOTES

Maximum file size=64GB (limited by s3fs, not Amazon).
If enabled via the "use_cache" option, s3fs automatically maintains a local cache of files in the folder specified by use_cache. Whenever s3fs needs to read or write a file on S3, it first downloads the entire file locally to the folder specified by use_cache and operates on it. When fuse_release() is called, s3fs will re-upload the file to S3 if it has been changed. s3fs uses md5 checksums to minimize downloads from S3.
The folder specified by use_cache is just a local cache. It can be deleted at any time. s3fs rebuilds it on demand.
Local file caching works by calculating and comparing md5 checksums (ETag HTTP header).
s3fs leverages /etc/mime.types to "guess" the "correct" content-type based on file name extension. This means that you can copy a website to S3 and serve it up directly from S3 with correct content-types!
 

BUGS

Due to S3’s "eventual consistency" limitations, file creation can and will occasionally fail. Even after a successful create, subsequent reads can fail for an indeterminate time, even after one or more successful reads. Create and read enough files and you will eventually encounter this failure. This is not a flaw in s3fs and it is not something a FUSE wrapper like s3fs can work around. The retries option does not address this issue. Your application must either tolerate or compensate for these failures, for example by retrying creates or reads.
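An application-side compensation might look like this sketch (the function name and retry count are invented for illustration):

```shell
# Retry reading a file a few times before giving up, to ride out
# transient eventual-consistency failures on a mounted bucket.
retry_read() {
    file=$1
    attempts=0
    while [ "$attempts" -lt 5 ]; do
        # cat succeeds once the object becomes readable.
        if cat "$file"; then
            return 0
        fi
        attempts=$((attempts + 1))
        sleep 1
    done
    return 1
}
```

Usage would be, e.g., retry_read /mnt/s3/some-object > local-copy.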

AUTHOR

s3fs has been written by Randy Rizun <rrizun@gmail.com>.
S3FS S3FS (1) February 2011
