As your Drupal website grows, the default local file system starts showing its limitations. Large media libraries slow down deployments, backups become unwieldy, and horizontal scaling becomes impossible when uploaded files live only on a single server. Amazon S3 addresses these problems by offloading your file storage to a durable, virtually unlimited object store.

In this guide we'll walk through the complete setup: from creating your S3 bucket and configuring IAM permissions to installing the Drupal S3FS module and optionally wiring up CloudFront as a CDN.

1. Why Move Drupal File Storage to S3?

The benefits go beyond just storage capacity. Moving Drupal's public and private file systems to S3 delivers:

  • Horizontal scaling: Multiple web servers can all read and write the same files, which is essential for load-balanced or containerised deployments.
  • Reduced server disk usage: Your EC2 or VPS instance only needs space for code, not gigabytes of uploaded images and documents.
  • Cost efficiency: S3 Standard storage costs around $0.023/GB/month, far cheaper than equivalent SSD block storage.
  • Built-in redundancy: S3 replicates objects across multiple availability zones automatically (99.999999999% durability).
  • CDN-ready: Pairing S3 with CloudFront delivers files from edge locations closest to your users, dramatically improving media load times.

2. Setting Up Your S3 Bucket

Log in to the AWS Management Console, navigate to S3, and create a new bucket:

  • Bucket name: Use a descriptive name such as diamondtechsoft-drupal-files. Bucket names must be globally unique across all AWS accounts.
  • Region: Choose the region closest to your web server and users (e.g., ap-south-1 for Mumbai).
  • Block Public Access: For Drupal's public file system, you need to allow public read access. Uncheck "Block all public access" and acknowledge the warning. For private files, keep this blocked.

After creating the bucket, add a bucket policy to allow public read access to the public/ prefix:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForDrupalPublicFiles",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::diamondtechsoft-drupal-files/public/*"
    }
  ]
}
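To make the effect of that Resource scope concrete, here is a minimal Python sketch (illustration only, not AWS code) that evaluates the policy roughly the way S3's anonymous-read check does, using fnmatch to emulate the ARN wildcard. Real IAM evaluation also handles Deny statements, conditions, and ACLs, which this ignores:

```python
import fnmatch
import json

BUCKET = "diamondtechsoft-drupal-files"  # the article's example bucket

# The bucket policy from above.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForDrupalPublicFiles",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::diamondtechsoft-drupal-files/public/*"
    }
  ]
}
""")

def anonymous_can_read(policy, key):
    """Does any Allow statement grant s3:GetObject on this key to everyone?
    (Simplified: ignores Deny statements, conditions, and ACLs.)"""
    arn = f"arn:aws:s3:::{BUCKET}/{key}"
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if (stmt["Effect"] == "Allow"
                and stmt["Principal"] == "*"
                and "s3:GetObject" in actions
                and fnmatch.fnmatch(arn, stmt["Resource"])):
            return True
    return False

print(anonymous_can_read(policy, "public/logo.png"))     # True
print(anonymous_can_read(policy, "private/report.pdf"))  # False
```

Because the Resource ends in public/*, only keys under the public/ prefix are readable anonymously; everything else in the bucket stays private.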

3. Creating an IAM User and Policy

Never use your AWS root account credentials in application configuration. Create a dedicated IAM user with the minimum permissions required:

  1. Go to IAM → Users → Create User. Name it drupal-s3-user.
  2. Generate an access key and secret for the user. In older consoles this was the "Programmatic access" option at creation time; in the current console, create the key afterwards under the user's Security credentials tab.
  3. Attach the following inline policy (replace the bucket name with yours):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::diamondtechsoft-drupal-files"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::diamondtechsoft-drupal-files/*"
    }
  ]
}

Save the Access Key ID and Secret Access Key; you'll need these in the S3FS module configuration.
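If you manage several environments, you can render this policy per bucket instead of hand-editing the name. A small illustrative helper (ours, not part of any AWS SDK) that reproduces the two-statement split above, where bucket-level actions target the bucket ARN and object-level actions target its /* resource:

```python
import json

def drupal_s3_policy(bucket):
    """Render the least-privilege IAM policy for a given bucket name."""
    bucket_arn = f"arn:aws:s3:::{bucket}"
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Bucket-level actions must target the bucket ARN itself.
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": bucket_arn,
            },
            {   # Object-level actions must target keys inside the bucket.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject",
                           "s3:DeleteObject", "s3:PutObjectAcl"],
                "Resource": f"{bucket_arn}/*",
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(drupal_s3_policy("my-drupal-files"))
```

The bucket/object split matters: attaching s3:ListBucket to the /* resource (or s3:GetObject to the bare bucket ARN) silently grants nothing, which is a common cause of mysterious access errors.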

4. Installing and Configuring the S3FS Module

The s3fs Drupal module plugs Amazon S3 into Drupal's stream wrapper system, so the rest of Drupal can read and write S3 objects as if they were local files:

composer require drupal/s3fs
drush en s3fs
drush cr

Go to Configuration โ†’ File System โ†’ S3 File System Settings and enter:

  • AWS Access Key: The key ID from the IAM user you created
  • AWS Secret Key: The secret from the IAM user
  • S3 Bucket: diamondtechsoft-drupal-files
  • AWS Region: ap-south-1 (or your chosen region)
  • Public folder path: public
  • Private folder path: private

Then navigate to Configuration โ†’ File System and change the Default download method to Amazon Simple Storage Service.
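Rather than storing the access keys in Drupal's database via the settings form, the s3fs module can also read them from settings.php, which keeps secrets out of configuration exports. The setting names below are taken from the s3fs module's documentation; verify them against the version you have installed:

```php
// Add to sites/default/settings.php — keeps S3 credentials out of the
// database and out of exported configuration.
$settings['s3fs.access_key'] = 'YOUR_ACCESS_KEY_ID';
$settings['s3fs.secret_key'] = 'YOUR_SECRET_ACCESS_KEY';

// Route Drupal's public:// and private:// schemes through S3.
$settings['s3fs.use_s3_for_public'] = TRUE;
$settings['s3fs.use_s3_for_private'] = TRUE;
```

In production, populate these values from environment variables or a secrets manager rather than committing them to version control.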

5. Migrating Existing Files to S3

The S3FS module includes a Drush command to copy all existing files from your local filesystem to S3:

# Copy all public files to S3
drush s3fs:copy-local --scheme=public

# Copy all private files to S3
drush s3fs:copy-local --scheme=private

# Refresh the S3FS file metadata cache
drush s3fs:refresh-cache

After migration, verify a few files are accessible at their S3 URLs before removing local copies.
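One way to spot-check the migration is to compute where a given public:// URI should now live. A short sketch of the mapping, assuming virtual-hosted-style S3 addressing and the "public" folder path configured above (the helper name is ours, not the module's):

```python
def s3_public_url(bucket, region, drupal_uri, public_folder="public"):
    """Map a Drupal public:// URI to its expected S3 object URL
    (virtual-hosted-style addressing)."""
    key = drupal_uri.replace("public://", public_folder + "/", 1)
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(s3_public_url("diamondtechsoft-drupal-files", "ap-south-1",
                    "public://2024-05/team-photo.jpg"))
# https://diamondtechsoft-drupal-files.s3.ap-south-1.amazonaws.com/public/2024-05/team-photo.jpg
```

Running curl -I against a few of these URLs should return 200 once the copy and the bucket policy are in place.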

6. Connecting a CDN (CloudFront)

For production sites, front your S3 bucket with Amazon CloudFront to serve files from edge locations globally:

  1. Create a CloudFront distribution with your S3 bucket as the origin.
  2. Set the Origin Access Control to restrict direct S3 access to CloudFront only.
  3. In S3FS settings, enable Use a CDN and enter your CloudFront domain (e.g., d1abc2def3.cloudfront.net).

This ensures all file requests are served through CloudFront's global edge network rather than directly from S3, improving both performance and cost.
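The rewrite that happens once "Use a CDN" is enabled is essentially a host swap: the same object path is served from your CloudFront domain instead of the S3 endpoint. A rough sketch of that mapping (the function is illustrative, not s3fs code):

```python
from urllib.parse import urlsplit, urlunsplit

def to_cdn_url(s3_url, cdn_domain):
    """Rewrite an S3 object URL to its CloudFront equivalent:
    keep the path and query, swap the host."""
    parts = urlsplit(s3_url)
    return urlunsplit(("https", cdn_domain, parts.path,
                       parts.query, parts.fragment))

print(to_cdn_url(
    "https://diamondtechsoft-drupal-files.s3.ap-south-1.amazonaws.com/public/logo.png",
    "d1abc2def3.cloudfront.net"))
# https://d1abc2def3.cloudfront.net/public/logo.png
```

With Origin Access Control in place, the direct S3 URL stops working for anonymous requests and only the CloudFront URL is served.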

7. Troubleshooting Common Issues

  • 403 Forbidden on public files: Check your bucket policy โ€” the s3:GetObject permission for the public/* prefix must be set correctly.
  • Slow first-page load after migration: Run drush s3fs:refresh-cache to warm the S3FS file metadata cache in Drupal's database.
  • Private files accessible publicly: Ensure the private/ prefix is NOT covered by the public-read bucket policy. Note that S3 Block Public Access applies to the whole bucket (or account), not to individual prefixes, so the policy's Resource scope is what keeps private/ unreachable; if you need a harder guarantee, use a separate, fully blocked bucket for private files.
  • Composer error on s3fs install: The module requires aws/aws-sdk-php. Run composer require aws/aws-sdk-php if it's not already a dependency.

Conclusion

Moving Drupal's file system to S3 is one of the most impactful infrastructure changes you can make for a growing site. Once configured, file storage becomes effectively limitless, deployments become faster (no large uploads to sync), and your site is ready for horizontal scaling. Combined with CloudFront, your media assets load faster for users anywhere in the world.

The setup takes around two hours for a typical Drupal site, and the long-term operational benefits make it well worth the investment.