
How to Mount an AWS S3 Bucket on Linux (OEL/CentOS/Ubuntu)


I am going to use the S3FS solution, which is based on FUSE (Filesystem in Userspace). With it you can use normal commands such as cp and mv on the bucket contents; it behaves like a regular mount on the Linux system.

Prerequisites:

You must create an S3 bucket in the AWS console first. In this example the bucket is named funmount.

Steps:

 

1: Remove Existing Packages

Log in to your Linux instance.

First, check whether fuse or s3fs is already installed on the server. If either exists, remove it to avoid conflicts.

For CentOS OR RHEL Users:

 # yum remove fuse fuse-s3fs

For Ubuntu Users:

 $ sudo apt-get remove fuse

 

2: Install Dependency Packages

Now install the packages required to build fuse and s3fs.

For CentOS or RHEL users:

#  yum install openssl-devel gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake

For Ubuntu Users:

$ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support

 

3: Download and Compile Latest Fuse.

Change to the /usr/src directory with cd, then download and compile the fuse source code. After compiling, load the fuse kernel module. This example uses fuse version 3.0.1. Note that the s3fs 1.8x series builds against the fuse 2.x API; if the s3fs configure step later fails against fuse 3.x, build a fuse 2.9.x release instead.

# cd /usr/src/

# wget https://github.com/libfuse/libfuse/releases/download/fuse-3.0.1/fuse-3.0.1.tar.gz

# tar xzf fuse-3.0.1.tar.gz

# cd fuse-3.0.1

# ./configure --prefix=/usr/local

# make && make install

# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

# ldconfig

# modprobe fuse

 

4: Download and Compile Latest S3FS

To download and build s3fs, change your directory to /usr/src/ and run the commands below.

# cd /usr/src/

# wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.82.tar.gz

# tar xzf v1.82.tar.gz

# cd s3fs-fuse-1.82

# ./autogen.sh

# ./configure --prefix=/usr --with-openssl

# make

# make install
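Once make install finishes, it is worth confirming the binary actually landed on your PATH before moving on. A minimal sketch; require_cmd is a hypothetical helper for this post, not part of the s3fs build:

```shell
# require_cmd reports where a command resolves, or flags it as missing.
require_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $(command -v "$1")"
    else
        echo "missing: $1"
    fi
}
require_cmd s3fs   # after a successful build this prints the install path
```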

 

5: Set Up Access Keys

To configure s3fs you need both the access key and the secret key of your AWS account. I am using the root account for this setup.

NOTE: Kindly replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.

 

# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs

# chmod 600 ~/.passwd-s3fs
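The two commands above can be sketched as a short script. The key value below is a placeholder, not a real credential, and the file is written to a temporary directory here so the sketch is safe to run as-is; on a real system you would write ~/.passwd-s3fs directly:

```shell
# Sketch of the credentials setup. The key below is a placeholder --
# substitute your actual access key and secret key.
PASSWD_FILE="$(mktemp -d)/.passwd-s3fs"   # use ~/.passwd-s3fs on a real system
echo "AKIAEXAMPLEKEY:exampleSecretKey123" > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"
# s3fs refuses to use a credentials file readable by other users,
# so verify the permissions are exactly 600.
stat -c '%a' "$PASSWD_FILE"
```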

 

6: Mount S3 Bucket on Linux

The final step is to mount the S3 bucket on your Linux distribution (CentOS, RHEL or Ubuntu).

For this example, the bucket name is "funmount" and the mount point is /s3mount.

# mkdir /tmp/cache

# mkdir /s3mount

# chmod 777 /tmp/cache /s3mount

# s3fs -o use_cache=/tmp/cache funmount /s3mount
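After running the mount command, it helps to verify the mount actually succeeded before relying on it. A sketch, assuming the /s3mount mount point from above; check_mount is a hypothetical helper built on `mountpoint` (util-linux), which exits 0 only when the given path is an active mount:

```shell
# check_mount reports whether a path is currently an active mount point.
check_mount() {
    if mountpoint -q "$1"; then
        echo "$1 is mounted"
    else
        echo "$1 is NOT mounted"
    fi
}
check_mount /s3mount
```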


Add the entry below to /etc/fstab so the bucket is mounted automatically after a reboot.

s3fs#funmount /s3mount fuse _netdev,rw,nosuid,nodev,allow_other,nonempty,use_cache=/tmp/cache 0 0 
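If you script the setup, it helps to make the fstab change idempotent so re-running the script does not duplicate the entry. A minimal sketch; it targets a scratch file by default so it is safe to run as-is, and you would point FSTAB at /etc/fstab on a real system:

```shell
# Append the s3fs entry only if it is not already present (idempotent).
FSTAB="$(mktemp)"   # use /etc/fstab on a real system
ENTRY='s3fs#funmount /s3mount fuse _netdev,rw,nosuid,nodev,allow_other,nonempty,use_cache=/tmp/cache 0 0'
add_fstab_entry() {
    grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"
}
add_fstab_entry
add_fstab_entry   # running twice still leaves exactly one entry
```

After that, `mount -a` (or a reboot) picks up the entry; the _netdev option delays the mount until networking is up.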

To access your S3 bucket, just use normal commands such as cd and ls:

# cd /s3mount

# ll

total 1

d---------. 1 root root 0 Jan 12 10:37 myfolder

# cd myfolder

# ll

total 1

----------. 1 root root 0 Jan 12 10:39 bucket.rtf

# pwd

/s3mount/myfolder




If you found this post useful, please follow and leave a comment.
