AWS S3 Implementation using Node.js
AWS S3 access via SDK (Source: Amazon JavaScript API Documentation)

AWS S3 (Simple Storage Service) lets you store & retrieve any amount of data, at any time, from anywhere. As file data grows, storing files directly on your server or in your database becomes impractical. S3 gives you effectively unlimited storage, so you only need to keep the object's endpoint in your DB. Today's post covers AWS S3 implementation using Node.js & its basics:

  1. Buckets
  2. Storing Data
  3. Retrieving Data
  4. Permissions on Bucket/Data

Step 1. AWS Setup

  1. Create an AWS account here: https://portal.aws.amazon.com/billing/signup#/start
  2. Now go to IAM (Identity and Access Management). An IAM user is required to give your application permission to perform actions on S3.
    a. Add a new user.
    b. Select "Programmatic access" as the AWS access type.
    c. Set permissions -> Attach existing policy -> AmazonS3FullAccess.
    d. Select the permissions boundary (optional).
    e. Add tags (optional).
    f. Review & create the user.
    g. Download the access key ID & secret access key for future use.

Step 2. Node.js Setup

1. Install aws-sdk using npm or bower.

npm i aws-sdk
bower install aws-sdk-js

2. Store the AWS access key ID, secret access key, & region in a config.json file (keep this file out of version control).

{
"accessKeyId":"AWS_ACCESS_KEY_ID",
"secretAccessKey":"AWS_SECRET_ACCESS_KEY",
"region":"AWS_DEFAULT_REGION"
}

3. Create an s3helper.js file for performing S3 operations.

Import AWS module:
const AWS = require('aws-sdk');
const config = require('../config.json');
Initialize S3 instance with access info:
const s3Config = {
 apiVersion: '2006-03-01',
 accessKeyId: config.accessKeyId,
 secretAccessKey: config.secretAccessKey,
 region: config.region,
}
const s3 = new AWS.S3(s3Config);
Create a bucket & save the bucket name:
module.exports.createBucket = (bucketName) => {
 return s3.createBucket({
   Bucket: bucketName,
   CreateBucketConfiguration: {
    LocationConstraint: config.region  // omit this block for us-east-1
   },
   // Note: a canned ACL and the explicit Grant* permissions below are
   // mutually exclusive -- use one or the other, not both.
   ACL: 'private',
   GrantRead: 'IAM_USERID',
   GrantWrite: 'IAM_USERID',
   GrantFullControl: 'IAM_USERID',
   GrantReadACP: 'IAM_USERID',
   GrantWriteACP: 'IAM_USERID',
   ObjectLockEnabledForBucket: false
  }).promise(); // return the promise so callers can handle success/failure
}

Buckets are containers which hold objects. You can think of a bucket as a folder that groups related files together.

In creating a bucket:

  • Bucket specifies the bucket name.
  • GrantRead specifies who can read the bucket & its contents.
  • GrantWrite specifies who can write to the bucket.
  • GrantFullControl grants full access to the bucket & its contents.
  • GrantReadACP specifies who can read the ACP (access control policy) of the bucket.
  • GrantWriteACP specifies who can write the ACP of the bucket.
  • CreateBucketConfiguration specifies the location constraint for the bucket, i.e. the region where the bucket is created.
  • The grantee can be a specific IAM role, user, or group.
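As a hedged sketch of calling the helper above (the validation function and the bucket name are illustrative, not part of the original code): bucket names must be globally unique, 3-63 characters, and use only lowercase letters, digits, dots, and hyphens.

```javascript
// Rough client-side check of S3 bucket naming rules (illustrative only;
// S3 enforces a few more rules, e.g. no names formatted like IP addresses).
const isValidBucketName = (name) =>
  /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);

const bucketName = 'my-app-uploads'; // hypothetical name
if (isValidBucketName(bucketName)) {
  // s3helper.createBucket(bucketName)
  //   .then((data) => console.log('Bucket created at', data.Location))
  //   .catch((err) => console.error(err.code, err.message));
}
```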
Upload a file to a bucket:
module.exports.uploadFile = (file, contentType, serverPath, filename) => {
 if (!filename) {
  filename = serverPath.split('/').pop();
 }
 return s3.upload({
  Bucket: BUCKET, // BUCKET holds your saved bucket name
  ACL: 'private',
  Key: serverPath,
  Body: file,
  ContentType: contentType,
  ContentDisposition: `attachment; filename=${filename}`,
 }).promise();
}

Storing Data means putting objects inside a bucket along with all the metadata about the object, like expiry, cache control, ACL, etc. On successful storage, a unique key is generated, which is used in the future for accessing the object.

To upload a file/object into the bucket:

  • Bucket specifies the name of the bucket.
  • ACL specifies the access control list, which can be private, public-read, public-read-write, bucket-owner-read, etc.
  • Key specifies the path where you want to upload the object.
  • Body specifies the object data: a Buffer, Typed Array, Blob, String, or ReadableStream.
  • ContentType specifies the standard MIME type describing the format of the contents.
  • ContentDisposition specifies presentational information for the object.
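To make the parameters above concrete, here is a sketch that separates building the upload parameters from the network call (the `buildUploadParams` helper and the example paths are assumptions for illustration, not part of the original helper):

```javascript
// Build the parameter object that s3.upload() expects. Keeping this pure
// makes the filename-defaulting logic easy to test without touching AWS.
const buildUploadParams = (bucket, file, contentType, serverPath, filename) => {
  if (!filename) {
    // Default the download filename to the last segment of the object key
    filename = serverPath.split('/').pop();
  }
  return {
    Bucket: bucket,
    ACL: 'private',
    Key: serverPath,
    Body: file,
    ContentType: contentType,
    ContentDisposition: `attachment; filename=${filename}`,
  };
};

// Hypothetical usage (requires the s3 instance from s3helper.js):
// const fs = require('fs');
// s3.upload(buildUploadParams('my-bucket', fs.createReadStream('./invoice.pdf'),
//   'application/pdf', 'invoices/2021/invoice.pdf')).promise();
```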
Deleting a file/object from a bucket:
module.exports.deleteFile = (serverPath) => s3.deleteObject({
 Bucket: BUCKET,
 Key: serverPath,
}).promise();

const serverPaths = [{
  Key: "1.jpg"
 }, {
  Key: "2.jpg"
 }];

module.exports.deleteFiles = (serverPaths) => s3.deleteObjects({
 Bucket: BUCKET,
 Delete: {
  Objects: serverPaths // Delete takes an object with an Objects array
 }
}).promise();

When an object is no longer needed, you can delete it by specifying the bucket name & the key of the object. You can delete multiple files together by sending an array of keys.
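A small sketch of shaping a list of keys into the batch-delete payload (the `toDeletePayload` helper is an assumption for illustration; `deleteObjects` accepts up to 1000 keys per request):

```javascript
// Convert plain key strings into the { Key } entries S3's Delete.Objects
// array expects for a batch delete.
const toDeletePayload = (keys) => ({
  Objects: keys.map((Key) => ({ Key })),
  Quiet: true, // report only errors in the response, not every deleted key
});

// Hypothetical usage with the deleteFiles helper above:
// s3.deleteObjects({
//   Bucket: 'my-bucket',
//   Delete: toDeletePayload(['1.jpg', '2.jpg']),
// }).promise();
const payload = toDeletePayload(['1.jpg', '2.jpg']);
```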

Generate signed URL for accessing the Private object:
const downloadUrl = (key) => s3.getSignedUrlPromise('getObject', {
 Bucket: BUCKET,
 Key: key,
 Expires: 1800, // validity in seconds
});

Retrieving Data here means generating a signed URL that allows you to access a private object inside a bucket. A public object is accessible without a signed URL.

Bucket specifies the bucket name, Key specifies the key of the object you want to access, & Expires specifies the validity of the signed URL in seconds.
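One detail worth handling when choosing Expires: presigned URLs signed with SigV4 are valid for at most 7 days (604800 seconds). A small clamp helper (an assumption for illustration, not part of the original code) keeps requested expiries in range:

```javascript
// SigV4 presigned URLs are capped at 7 days; clamp any requested expiry
// into the [1, 604800] second range before passing it to getSignedUrlPromise.
const MAX_EXPIRES = 7 * 24 * 60 * 60; // 604800 seconds
const clampExpires = (seconds) => Math.min(Math.max(seconds, 1), MAX_EXPIRES);

// Hypothetical usage with the downloadUrl helper above:
// downloadUrl('uploads/photo.jpg').then((url) => console.log(url));
```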

The snippets above together make up the complete s3helper.js file.

Permissions on a bucket/data specify the access control policy: who has access to perform what kind of action on the bucket and its contents. Choose permissions based on the type of data; for example, sensitive data should use a private ACL, while a user's profile photo can be public.

You can also upload a dynamic file sent by the user, e.g. by streaming it through busboy. Based on the requirement, you can pass the ACL dynamically while uploading objects to the bucket.
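A hedged sketch of that idea (not the article's exact code): an Express route that pipes a multipart upload through busboy straight to S3, picking the ACL per file type. The `pickAcl` helper, the route path, and the key prefix are assumptions for illustration.

```javascript
// Choose a canned ACL per file: images public, everything else private.
// (Illustrative policy only; adjust to your own requirements.)
const pickAcl = (mimetype) =>
  mimetype.startsWith('image/') ? 'public-read' : 'private';

// const Busboy = require('busboy');
// app.post('/upload', (req, res) => {
//   const busboy = new Busboy({ headers: req.headers });
//   busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
//     s3.upload({
//       Bucket: BUCKET,
//       Key: `uploads/${filename}`,
//       Body: file,               // stream straight to S3, no temp file
//       ContentType: mimetype,
//       ACL: pickAcl(mimetype),   // dynamic ACL per file
//     }).promise()
//       .then((data) => res.json({ url: data.Location }))
//       .catch((err) => res.status(500).json({ error: err.message }));
//   });
//   req.pipe(busboy);
// });
```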

Thanks for reading. I hope you have found this useful.
