
How to Mount an S3 Bucket on an EC2 Linux Instance

An S3 bucket can be mounted on an AWS EC2 instance as a file system using s3fs. S3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network-attached drive: it does not store anything on the EC2 instance itself, but users can access the data on S3 from the EC2 instance.

Filesystem in Userspace (FUSE) is a simple interface for userspace programs to export a virtual file system to the Linux kernel. It also aims to provide a secure method for non-privileged users to create and mount their own file system implementations.

The s3fs-fuse project is written in C++ and backed by Amazon's Simple Storage Service. Amazon offers an open API to build applications on top of this service, which several companies have done using a variety of interfaces (web, rsync, FUSE, etc.).

Follow the steps below to mount your S3 bucket on your Linux instance.

This tutorial assumes that you have a running Linux EC2 instance on AWS with root access, and an S3 bucket that is to be mounted on that instance. You will also need an Access Key and Secret Key pair with sufficient S3 permissions, or IAM access to create one.

We will perform the steps as the root user. You can also use the sudo command if you are a normal user with sudo access. So let's get started.

 

Step-1:- If you are using a new CentOS or Ubuntu instance, update the system first.

-> For CentOS or Red Hat
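yum update -y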

-> For Ubuntu
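apt-get update && apt-get upgrade -y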

 

Step-2:- Install the dependencies.

-> For CentOS or Red Hat
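These are the build dependencies listed in the s3fs-fuse documentation:

yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel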

 

-> For Ubuntu or Debian
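apt-get install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config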

 

Step-3:- Clone the s3fs source code from GitHub.
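git clone https://github.com/s3fs-fuse/s3fs-fuse.git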

 

Step-4:- Now change to the source code directory, then compile and install the code with the following commands:
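cd s3fs-fuse
./autogen.sh
./configure
make
make install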

 

Step-5:- Use the command below to check where the s3fs binary has been placed. It also confirms that the installation went fine.
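which s3fs

With the default build above, the output should be:

/usr/local/bin/s3fs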

 

Step-6:- Get the access key and secret key.

You will need an AWS Access Key and Secret Key with appropriate permissions in order to access your S3 bucket from your EC2 instance. You can easily manage your user permissions from the IAM (Identity and Access Management) service provided by AWS. Create an IAM user with S3 full access (or with a role with sufficient permissions), or use the root credentials of your account. Here we will use the root credentials for simplicity.

Go to AWS Menu -> Your AWS Account Name -> My Security Credentials. Your IAM console will appear. Go to Users -> your account name, and under the Permissions tab check whether you have sufficient access to the S3 bucket. If not, you can manually attach an existing “S3 Full-Access” policy or create a new policy with sufficient permissions.

Now go to the Security Credentials tab and click Create Access Key. A new Access Key and Secret Key pair will be generated. Here you can see the access key and secret key (the secret key is visible when you click the Show tab), and you can also download them. Copy both keys separately.

Note that you can always use an existing access and secret key pair. Alternatively, you can create a new IAM user and assign it sufficient permissions to generate the access and secret keys.

 

Step-7:- Create a new file in /etc with the name passwd-s3fs and paste the access key and secret key in the format below.
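Your_accesskey:Your_secretkey

For example:

echo "Your_accesskey:Your_secretkey" > /etc/passwd-s3fs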

 

Step-8:- Change the permissions of the file.
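chmod 640 /etc/passwd-s3fs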

 

Step-9:- Now create a directory (or provide the path of an existing one) and mount the S3 bucket on it.
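mkdir /mys3bucket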

If your bucket name does not contain a dot (.), use the command in point “a”; for a bucket with a dot (.) in its name, follow point “b”:

a) Bucket name without a dot (.):
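s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket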

where,
“your_bucketname” = the name of the S3 bucket that you have created on AWS S3
use_cache = directory used by s3fs for caching
allow_other = allow other users to write to the mountpoint
uid = UID of the user/owner of the mountpoint (you can also add “-o gid=1001” for a group)
mp_umask = umask applied to remove permissions for other users
multireq_max = maximum number of parallel requests sent to the S3 bucket
/mys3bucket = mountpoint where the bucket will be mounted

You can make an entry in /etc/rc.local to automatically remount the bucket after a reboot. Find the s3fs binary with the “which” command and add the entry before the “exit 0” line, as below.
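/usr/local/bin/s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket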

 

b) Bucket name with a dot (.):
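For a bucket whose name contains a dot, s3fs has to use path-style requests; a sketch of the mount command using the use_path_request_style and url options:

s3fs your_bucketname -o use_path_request_style -o url=https://s3-{{aws_region}}.amazonaws.com -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket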

The options are the same as described in point “a” above.

Remember to replace “{{aws_region}}” with your bucket's region (for example: eu-west-1).

As in point “a”, you can make an entry in /etc/rc.local to automatically remount the bucket after a reboot. Add the entry before the “exit 0” line, as below.
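/usr/local/bin/s3fs your_bucketname -o use_path_request_style -o url=https://s3-{{aws_region}}.amazonaws.com -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket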

 

To debug at any point, add “-o dbglevel=info -f -o curldbg” to the s3fs mount command.

 

Step-10:- Check the mounted S3 bucket. The output will be similar to the one shown below, though the Used size may differ.
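df -Th /mys3bucket

Filesystem     Type       Size  Used Avail Use% Mounted on
s3fs           fuse.s3fs  256T     0  256T   0% /mys3bucket

(s3fs reports a fixed notional size, as S3 storage is effectively unlimited.)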

“or”
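mount | grep s3fs

with output along the lines of:

s3fs on /mys3bucket type fuse.s3fs (rw,nosuid,nodev,allow_other)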

If it shows the mounted file system, you have successfully mounted the S3 bucket on your EC2 instance. You can test it further by creating a test file.
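echo "this is a test file to check s3fs" >> /mys3bucket/test.txt
ls -l /mys3bucket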

This change should also be reflected in the S3 bucket, so log in to your S3 console and verify that the test file is present.

Note: If you already had some data in the S3 bucket and it is not visible, you have to set permissions in the ACL for that bucket in the S3 AWS Management Console.

Also, if you get an s3fs error such as “Transport endpoint is not connected”, you have to unmount and remount the file system. You can also do so through a custom script that detects the condition and remounts automatically.

 

Congrats!! You have successfully mounted your S3 bucket on your EC2 instance. Any files written to /mys3bucket will be replicated to your Amazon S3 bucket.

 

In case you need any help or have a query, please contact us.

. . .

Comments (62)


  • Harish
    when I try to mount the s3 bucket, I am getting the following error:
    [root@host ~]# s3fs s3buckettestharish -o allow_other -o multireq_max=5 /mys3bucket
    -bash: code: No such file or directory

    please help me to solve issue asap.

    • Kamal Verma
      @Harish, please check that the file name, the path of the mountpoint, and the bucket name are correct. Also verify that the AWS secret key and access key are configured properly with the correct permissions. Then finally use the below command:
      s3fs your_bucketname -o use_cache=/tmp -o allow_other -o multireq_max=5 /mys3bucket
  • wwww
    thanks!
  • User123
    Does it automatically mount after a reboot?
    • Kamal Verma
      Yes, it automatically remounts after a reboot. For this, we made the entry in /etc/rc.local in Step 9.
  • Asok
    Everything looks okay and there is no error but it’s not mounting. This is what I get:

    [root@host da]# s3fs -d -d dreamsite /usr/tomcat/webapps/da/docs_new
    FUSE library version: 2.9.4
    nullpath_ok: 0
    nopath: 0
    utime_omit_ok: 0
    unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
    INIT: 7.26
    flags=0x001ffffb
    max_readahead=0x00020000
    INIT: 7.19
    flags=0x00000011
    max_readahead=0x00020000
    max_write=0x00020000
    max_background=0
    congestion_threshold=0
    unique: 1, success, outsize: 40
    [root@host da]# umount /usr/tomcat/webapps/da/docs_new
    umount: /usr/tomcat/webapps/da/docs_new: not mounted

    Am I missing something?

    • Kamal Verma
      Hello @Asok,

      Please use the command mentioned in the blog to mount the bucket with the specified options to avoid any unwanted errors. As per your shared logs, you have used -d twice in the command, which should be corrected. Also ensure that you have properly followed the blog instructions and configured the secret and access keys with the proper access and permissions. Also check the bucket name and the paths of the mount directories.

  • aws123
    If you don’t want to use key pairs then will this work?

    [root@host]# s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o iam_role=your_role /mys3bucket

    and is this the correct order of syntax?

    For persistence in /etc/rc.local add the following?

    /usr/local/bin/s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o iam_role=your_role /mys3bucket
    exit 0

    • Kamal Verma
      If you do not configure the AWS keys (secret key and access key), s3fs cannot authenticate from the password file. However, since you are passing “-o iam_role=your_role”, s3fs can fetch temporary credentials from the instance's IAM role instead, so this can work as long as the role has sufficient S3 permissions.
      The order of the syntax is correct. However, the order of the options (indicated by “-o”) can be changed/modified as per requirement.
      Adding the mentioned line in /etc/rc.local will make the mount persistent, as it will execute the command on each reboot, mounting the bucket.
      Note: “exit 0” is not a part of the command. It is the last line in the file /etc/rc.local, just above which the command should be written.
  • keshav
    Many thanks Kamal it works great
    • Kamal Verma
      Thank you for your feedback @Keshav
  • Srinivas Kamath
    Thank you very much… Great Document…

    At first try i just copy pasted the commands without understanding and failed…

    Then read each and every word and understood how to change it according to my specific requirements and then successfully completed the mount procedure.

    Thank you very much for spreading the wisdom…

    TO ALL THE READERS :- don't blindly copy the commands, you will fail; read and understand how to tweak them as per your requirements and it will work…

    • Kamal Verma
      Thank you for your valuable feedback @Srinivas. Feels great that it helped!
  • Mike Bekky
    Thank you very much… Very Helpful Guide. Good Job Buddy. Keep it up.
  • kamal
    Done the mounting, but not able to see older items in the mounted drive. Can you please help me out here?
    • Kamal Verma
      You should always mount the bucket in an empty directory. If you mount the bucket in a directory containing some previous data, the data may not be visible after mounting the S3 bucket at the same location. Only the contents of the s3 bucket will be visible. So first unmount it and then create a new empty directory and remount it again.
  • kamal jain
    I am getting the below error, can you please help me out?
    s3fs: MOUNTPOINT directory /home/**/mnt/***/ is not empty. if you are sure this is safe, can use the ‘nonempty’ mount option.
    • Kamal Verma
      Probably you have some data already in the directory /home/**/mnt/***/. So please mount the bucket in an empty directory by creating a new one, or remove the files from that directory if possible.
  • Dean
    How do I mount a second bucket on the same server?
    • Kamal Verma
      Yes, you can. Create another directory and mount the second bucket on it using the mount command mentioned in the blog. Don't forget to change the mountpoint path and bucket name in the command.
  • Keshav Shriniwasan
    Hi Kamal,

    S3 mount worked great. But I am seeing an issue after rebooting the EC2 instance – at the AWS console 1/2 checks failed, and I would get the below error in a snapshot. I tried to fix this issue by creating an S3 daemon/service to start after the server is up on run level 2.

    Give root password for maintenance
    (or type Control-D to continue):

    • Kamal Verma
      Hi Keshav,
      This should not be due to the s3 mount configuration. Make sure you have followed the blog carefully and properly inserted the mount command in the /etc/rc.local file, which automatically gets executed at startup. You can also share which status check (out of the two) is failing.
      To troubleshoot further, you can also create a test instance and test the mount configuration.
  • sandeep kumar
    worked like a charm on centos7 in one go ..thanks
  • Rakesh
    Hi Kamal,

    Seems the same steps are not working on RedHat Linux. Getting the below error while mounting the FS.

    [root@host s3fs-fuse]# mount /mys3bucket
    mount: can’t find /mys3bucket in /etc/fstab

    Kindly help.

    • Kamal Verma
      Hi Rakesh, it should work with RedHat/CentOS as well. Please follow the steps carefully. The command you have used here to mount the bucket, “mount /mys3bucket”, is not as per the blog. So please read and understand the blog carefully and redo it. It should work.
  • vikas
    Hi,
    while mounting the s3 bucket I’ve used following command :
    “s3fs mystorage -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /my-bucket”

    but this is giving me following message:
    s3fs: unable to access MOUNTPOINT /my-bucket: Transport endpoint is not connected

    I don’t know where I am going wrong
    thank you

    • Kamal Verma
      Hi Vikas,
      Please check if you have created a directory at the “/” path as /my-bucket. If you are mounting the bucket to a directory on a different path, please change the path from “/my-bucket” to /path_to_mount_directory/directory_name. Also check that the bucket name is correct. If you still face the problem, please re-check the configuration as per the blog.
  • vikas
    Hey,

    I’ve successfully mounted the s3 bucket with my instance; does it just replicate the folder, or is it going to take space from my instance?
    And if I start and stop the instance, will it get unmounted from my instance??

    • Kamal Verma
      @vikas, it will not take space from your instance, as it replicates the data to your s3 bucket. If you have made the entry in the rc.local file as per Step 9, your bucket will be remounted after the instance is restarted.
  • Vikas
    Hey,

    I’ve been struggling a lot while mounting the s3 to my ec2 storage.
    Every step is working fine, but after mounting successfully, when I used “df -Th” it’s showing the following error:

    df: /home/ubuntu/s3Storage: Transport endpoint is not connected

    Let me know if you can resolve this problem.

    Thanks.

    • Kamal Verma
      Please try unmounting and remounting the s3 bucket.
  • Gurunarayan
    HI

    I followed all the above steps, and it’s mounting properly. Later on, after 5 min, it’s throwing the below error.

    Can you please let us know if you faced the same problem:
    s3fs: unable to access MOUNTPOINT /var/www/html: Transport endpoint is not connected

    • Kamal Verma
      Hi @Gurunarayan,
      It seems like the file system was not mounted properly. Try unmounting it and remounting it again. Hope this helps.
  • Sanjeev
    Hi Kamal,

    Thanks for providing this documentation. Really helpful. But I am getting this error “fuse: device not found, try ‘modprobe fuse’ first” when running the following command:

    s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket

    Can you advise what might be missing?

    • Kamal Verma
      It may be due to some missing dependency. Try installing “dkms-fuse” if you are using a Red Hat or CentOS system:
      yum install dkms-fuse
  • Daniel
    hi,
    I did everything as described. the command
    s3fs mys3bucket -o use_cache=temp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /storage/
    does not fail.

    but it will not mount anything. df -h shows me that the bucket is not mounted.

    what could be wrong?

  • Daniel
    hi
    thx for the instructions, but I have a problem. everything was OK until the command:

    s3fs mystorage -o use_cache=temp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mystorage

    the command executes without any errors, but the bucket doesn’t mount:

    [root@host ec2-user]# df -h
    Filesystem Size Used Avail Use% Mounted on
    devtmpfs 16G 44K 16G 1% /dev
    tmpfs 16G 0 16G 0% /dev/shm
    /dev/nvme0n1p1 99G 1,5G 97G 2% /
    [root@host ec2-user]#

    as you can see, no mounted drive here, why?

    • Kamal Verma
      Hi Daniel,
      Please make sure that you are not running the unmount command as mentioned in the blog just below the mount command.
      Also check if the bucket name and the mountpoints are correct.
  • PK
    I followed all the listed steps and everything went fine, but the S3 bucket is not mounted successfully. I could not see the s3fs file system when I do df or list files.
    • Kamal Verma
      Hi,
      Please make sure that you are not running the unmount command as mentioned in the blog just below the mount command.
      Also check if the bucket name and the mountpoints are correct.
  • Dennis
    everything installs fine. i can see files in s3 bucket on mounted directory. however i cannot upload files to the directory. it looks like permission error, however, the bucket directory has 775 permission.
    • Kamal Verma
      Hi Dennis,
      Please cross-check if the AWS keys have adequate access rights to write to the S3 bucket.
  • Dennis
    everything installs fine. i can see files in s3 bucket on mounted directory. however i cannot upload files to the directory. it looks like permission error, however, the bucket directory has 775 permission.

    user@host:/bucket$ sudo echo “this is a test file to check s3fs” >> test.txt
    -bash: test.txt: Permission denied

    • Kamal Verma
      Hi Dennis,
      It seems like some permission issue either in directory or the bucket mounted on it.
      Please check if you have proper permission to write on the s3 bucket which is mounted on the instance. You also verify the secret and access key pairs are generated with proper permissions if generated through IAM. If the issue still exists, Please try unmounting and remounting the bucket to ensure the proper configuration is done.
  • Jeff
    Nice Tut Kamal,

    Some weirdness I’ve noticed:

    1.) I am only using this instance to move files around and it’s currently all manual. I am pulling a 110G file via sftp to the S3 slice on this instance. Nothing else is happening on this instance. It appears that the counters on my root slice are incrementing in conjunction with the data that is being pulled to the s3 slice. The size looks right via ls -alth, df -Th shows the s3 slice, yet the s3 slice and my root slice are both incrementing. I have also confirmed via s3 interface that the file has shown up there.

    2.) When I initially started the transfer I was getting roughly 35mbs in transfer. When it hit about 20G downloaded, the transfer speed dropped to 7.2mbs .. very strange.

  • Gary
    Kamal,

    Thank you so much for this blog. I followed the instructions and everything worked greatly. I do have one question. To make the mount permanent, when I open /etc/rc.local, the whole content is as follows:

    #!/bin/bash
    # THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
    #
    # It is highly advisable to create own systemd services or udev rules
    # to run scripts during boot instead of using this file.
    #
    # In contrast to previous versions due to parallel execution during boot
    # this script will NOT be run after all other services.
    #
    # Please note that you must run ‘chmod +x /etc/rc.d/rc.local’ to ensure
    # that this script will be executed during boot.

    touch /var/lock/subsys/local

    #disable THP at boot time
    if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
    fi
    if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
    fi

    So, what is the other way to make a permanent mount? I don’t quite understand the note in the file.

    The other question not exactly relevant to your blog, when I run chmod with -R option, or using * in place of file name, the command runs forever until the session timeout. The folder has fewer than 30 files. Do you know the reason?

    Again, thank you so much!

    • Kamal Verma
      Hi Gary,
      You can put the exact command which you used for mounting the bucket in the rc.local file. The rc.local file is executed by default every time the system boots, so it will mount the bucket again on reboot.

      You can also permanently mount the bucket by putting the entry in /etc/fstab. for instructions follow: https://github.com/s3fs-fuse/s3fs-fuse

      Also, for the mounted bucket, you cannot change permissions on the objects in the mounted bucket, as S3 itself is not a filesystem.

      Thanks

  • Sheshadri
    df -h is not showing my s3fs usage and paths.

    It’s showing the following output:

    [root@host /]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/vda1 50G 48G 2.6G 95% /
    devtmpfs 898M 0 898M 0% /dev
    tmpfs 920M 4.0K 920M 1% /dev/shm
    tmpfs 920M 105M 815M 12% /run
    tmpfs 920M 0 920M 0% /sys/fs/cgroup
    /dev/loop0 2.1G 3.5M 2.0G 1% /tmp
    tmpfs 184M 0 184M 0% /run/user/0

    • Kamal Verma
      If you have a bucket name with a dot, follow the related step. I have updated the blog.
  • Rishi
    Hi
    I did everything, but when I check df -Th I don’t see it mounted to my local directory

    s3fs access-log-ritchieein-machine -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket
    # df -Th
    Filesystem Type Size Used Avail Use% Mounted on
    devtmpfs devtmpfs 484M 56K 484M 1% /dev
    tmpfs tmpfs 494M 0 494M 0% /dev/shm
    /dev/xvda1 ext4 15G 1.5G 14G 11% /

    not sure where I am getting wrong
    Please help

    • Kamal Verma
      If you have a bucket name with a dot, please follow the updated blog.
  • RNZB
    Hello
    I used the same commands; I don’t have a bucket name with dots and I didn’t use the unmount command. All commands ran successfully without errors, but df -h didn’t show s3fs.
    • Kamal Verma
      Hi @RNZB,

      Please verify if your mount path and bucket name are correct. Also, the aws keys should have proper access to the bucket.

  • Abdulaziz
    Hello,

    We have a Webkul marketplace installed on an autoscaling AWS EC2, but we need to store Magento media files on any storage other than the instance itself.

    So which is the best approach to do that? by:

    1. mounting a bucket (https://cloudkul.com/blog/mounting-s3-bucket-linux-ec2-instance/),

    or by
    2. installing a module? (see https://marketplace.magento.com/thaiphan-magento2-s3.html#bazaarvoice.reviews.tab)

    And if number 1 is a better option, how much will it cost to customize it?

    Thank you,

  • Aziz
    Hello,

    Is this method useful for saving Magento 2 media files in S3? or which is the best way to save Magento 2 media other than the EC2?

    Thank you,

  • Karthik Nagadevara
    Thank you so much for writing this article. It was very helpful.
  • Nick
    Thanks for this post! I’m having a problem that maybe someone can help with? It almost looks like a DNS issue, but I am not having any other DNS issues on this server. Anyone know what I’m doing wrong?

    [CRT] s3fs.cpp:set_s3fs_log_level(257): change debug level from [CRT] to [INF]
    [INF] s3fs.cpp:set_mountpoint_attribute(4193): PROC(uid=0, gid=0) – MountPoint(uid=0, gid=0, mode=40755)
    [CRT] s3fs.cpp:s3fs_init(3378): init v1.82(commit:unknown) with GnuTLS(gcrypt)
    [INF] s3fs.cpp:s3fs_check_service(3754): check services.
    [INF] curl.cpp:CheckBucket(2914): check a bucket.
    [INF] curl.cpp:prepare_url(4205): URL is https://s3-us-east-1.amazonaws.com/{{bucketNameWithDots}}/
    [INF] curl.cpp:prepare_url(4237): URL changed is https://s3-us-east-1.amazonaws.com/{{bucketNameWithDots}}/
    [INF] curl.cpp:insertV4Headers(2267): computing signature [GET] [/] [] []
    [INF] curl.cpp:url_to_host(100): url is https://s3-us-east-1.amazonaws.com
    * Could not resolve host: s3-us-east-1.amazonaws.com
    * Closing connection 0
    [ERR] curl.cpp:RequestPerform(1984): ### CURLE_COULDNT_RESOLVE_HOST
    [INF] curl.cpp:RequestPerform(2082): ### retrying…

    • Kamal Verma
      I think you have not provided the bucket name in the URL properly.

      [INF] curl.cpp:prepare_url(4205): URL is https://s3-us-east-1.amazonaws.com/{{bucketNameWithDots}}/
      [INF] curl.cpp:prepare_url(4237): URL changed is https://s3-us-east-1.amazonaws.com/{{bucketNameWithDots}}/

      You should replace {{bucketNameWithDots}} with your bucket name.
      Thanks

  • Egbert Frankenberg
    when I enter the command as listed above (considering the necessary adjustments for my bucket name and directory) I get this response:
    s3fs: could not determine how to establish security credentials
    • Kamal Verma
      This seems like a credentials issue, possibly due to misconfiguration. Please follow steps 7 & 8 properly.
  • Rakesh
    [user@host ~]$ touch /etc/passwd-s3fs
    touch: cannot touch ‘/etc/passwd-s3fs’: Permission denied
    • Kamal Verma
      Please use sudo before the command.
      eg: sudo touch /etc/passwd-s3fs