Getting caching right yields huge performance benefits, saves bandwidth, and reduces server costs.
```ts
import { join } from 'path'
import type { Construct } from 'constructs'
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment'

type CacheControlMaxAge = '0' | '31536000'

type SiteBucketDeploymentProps = Omit<s3deploy.BucketDeploymentProps, 'sources'> & {
  sitePaths: string[]
  longCacheFileExtensions: string[]
}

type CachedBucketDeploymentProps = SiteBucketDeploymentProps & { maxAge: CacheControlMaxAge }

class CachedBucketDeployment extends s3deploy.BucketDeployment {
  constructor(scope: Construct, id: string, props: CachedBucketDeploymentProps) {
    const { sitePaths, longCacheFileExtensions, maxAge } = props
    // e.g. '.{js,css}' for longCacheFileExtensions: ['js', 'css']
    const longCacheFiles = `.{${longCacheFileExtensions.join(',')}}`
    const assetOptions = {
      exclude: [
        // long-cache deployment: skip every file except the long-cache extensions;
        // no-cache deployment: skip only the long-cache extensions
        ...(maxAge === '31536000'
          ? ['**/*.*', '!**/*' + longCacheFiles]
          : ['**/*' + longCacheFiles]),
      ],
    }
    const cacheControl = `max-age=${maxAge},public,${
      maxAge === '31536000' ? 'immutable' : 'must-revalidate'
    }`
    super(scope, id, {
      ...props,
      sources: [s3deploy.Source.asset(join(__dirname, ...sitePaths), assetOptions)],
      cacheControl: [s3deploy.CacheControl.fromString(cacheControl)],
      prune: false,
    })
  }
}

/**
 * Creates multiple bucket deployments for the same destination bucket
 * in order to set different Cache-Control headers (rule of thumb:
 * either a very long `maxAge` or `maxAge: 0`) for different types of files.
 *
 * `.css`, `.js` files are usually distributed with a hash such as `[name].[contenthash].js`,
 * so a `max-age=31536000,public,immutable` Cache-Control header will be set.
 *
 * `.html` files are usually expected to be invalidated every time since their URLs
 * cannot be versioned and their content must be able to change, so a
 * `public,max-age=0,must-revalidate` header will be set for all the file extensions
 * that are not found inside `longCacheFileExtensions`.
 *
 * @example
 * new SiteBucketDeployment(this, 'SiteDeployment', {
 *   longCacheFileExtensions: ['js', 'css'],
 *   sitePaths: ['..', '..', 'dist'],
 * })
 */
export default class SiteBucketDeployment {
  private readonly _maxAgePatterns: CacheControlMaxAge[] = ['0', '31536000']
  private readonly _constructIdSuffix: Record<CacheControlMaxAge, string> = {
    '0': 'NoCache',
    '31536000': 'LongCache',
  }

  private _createCachedBucketDeployment(
    scope: Construct,
    id: string,
    props: CachedBucketDeploymentProps,
    cleanupDeployment: s3deploy.BucketDeployment
  ) {
    new CachedBucketDeployment(scope, id + this._constructIdSuffix[props.maxAge], {
      ...props,
      maxAge: props.maxAge,
    }).node.addDependency(cleanupDeployment)
  }

  constructor(scope: Construct, id: string, props: SiteBucketDeploymentProps) {
    /**
     * Initial deployment for cleaning up the content from previous deploys.
     * Since `prune: false` is used for the actual deployments, in some cases the
     * unnecessary files (e.g. some frameworks generate different static/<hash>
     * directories on every new build) won't be deleted otherwise.
     *
     * @see https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3_deployment-readme.html#prune
     */
    const cleanupDeployment = new s3deploy.BucketDeployment(scope, id + 'Cleanup', {
      ...props,
      sources: [s3deploy.Source.asset(join(__dirname, ...props.sitePaths))],
      prune: true,
    })
    this._maxAgePatterns.forEach((maxAge) => {
      this._createCachedBucketDeployment(scope, id, { ...props, maxAge }, cleanupDeployment)
    })
  }
}
```
When using AWS CDK we can set the Cache-Control headers with S3 object metadata on our `BucketDeployment` construct (check out my article on how to deploy a site to AWS with CDK below if you need an introduction to CDK).

Deploy a static site to AWS S3 and CloudFront using AWS CDK (Erik Petrinec · Jan 26 '23)
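The simplest form of this is a single `BucketDeployment` with the `cacheControl` prop set. A minimal sketch, assuming an existing `bucket` and a `dist` build folder next to the stack file (both placeholders):

```ts
import { join } from 'path'
import * as s3 from 'aws-cdk-lib/aws-s3'
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment'
import type { Construct } from 'constructs'

// Sketch: one deployment that stamps every uploaded object with the same
// Cache-Control header. The bucket and the `dist` folder are placeholders.
export const deployWithCacheControl = (scope: Construct, bucket: s3.IBucket) =>
  new s3deploy.BucketDeployment(scope, 'SiteDeployment', {
    destinationBucket: bucket,
    sources: [s3deploy.Source.asset(join(__dirname, 'dist'))],
    cacheControl: [s3deploy.CacheControl.fromString('max-age=31536000,public,immutable')],
  })
```

The limitation of this single-deployment approach is that every file gets the same header, which is exactly what the `SiteBucketDeployment` construct above works around.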
Rule of thumb

- Set the `max-age=31536000,public,immutable` directives for all assets that have been processed with `[contenthash]` substitutions. Many frameworks typically produce files with names such as `[name].[contenthash].js`, etc. when building the app (see Webpack Caching; a minimal config sketch follows this list). These are usually all the `.js`, `.css`, and statically imported images. These directives can also be used for other files, but we need to ensure that we version the path to the files ourselves by changing the file name to revalidate. Otherwise, our users may have stale files loaded until they perform a hard refresh.
- Alternatively, set the `public,max-age=0,must-revalidate` (or `max-age=0,no-cache,no-store` - see difference) directives. These are typically all the document `.html` files. Since these URLs cannot be versioned and the content changes frequently, we don't want to cache them at all.
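The `[contenthash]` substitution mentioned above comes from the bundler configuration. As an illustration, here is a minimal output sketch, assuming webpack 5 (the entry point and folder names are placeholders):

```ts
import { join } from 'path'
import type { Configuration } from 'webpack'

// Sketch: emit content-hashed bundles such as `main.<hash>.js` so the
// long-cache directives are safe. Entry point and paths are placeholders.
const config: Configuration = {
  entry: join(__dirname, 'src', 'index.ts'),
  output: {
    path: join(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
    clean: true, // drop stale hashed bundles from previous builds
  },
}

export default config
```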
Custom Props

- `sitePaths` - The site build folder that is being deployed. Uses `path.join` to join and normalize the resulting path (a short path-resolution sketch follows this list).
- `longCacheFileExtensions` - Sets `max-age=31536000` for all the files with the specified extensions located inside the `sitePaths` directory. Otherwise, it sets `max-age=0` for all the files whose extensions are not found inside the array.
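To make the `sitePaths` resolution concrete, a tiny sketch (the directory layout is a made-up example):

```ts
import { join } from 'path'

// Hypothetical layout:
//   <project>/infra/lib/SiteBucketDeployment.ts  <- where __dirname points
//   <project>/dist                               <- the site build output
const sitePaths = ['..', '..', 'dist']

// join normalizes the '..' segments:
// '<project>/infra/lib/../../dist' -> '<project>/dist'
console.log(join(__dirname, ...sitePaths))
```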
You can play around with the asset bundling exclude filters and exclude/include whole directories or files instead of file extensions, as I did above (see the sketch after the snippet below). The `s3deploy.BucketDeployment` can then be replaced with the `SiteBucketDeployment` construct:
```ts
new SiteBucketDeployment(this, 'BlogBucketDeployment', {
  longCacheFileExtensions: ['js', 'css'],
  // relative to where the `SiteBucketDeployment.ts` file is located
  sitePaths: ['..', '..', 'dist'],
  destinationBucket: bucket,
  distributionPaths: ['/*'],
})
```
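If a framework keeps all of its hashed assets in one directory, the same idea can be expressed with directory-based filters. A sketch assuming a hypothetical `static/` output folder (these objects would be passed as the second argument of `s3deploy.Source.asset(...)`):

```ts
import type { AssetOptions } from 'aws-cdk-lib/aws-s3-assets'

// Sketch: filter by directory instead of file extension. The `static/` folder
// name is a made-up example of where a framework keeps its hashed assets.
const longCacheAssetOptions: AssetOptions = {
  // upload only the files under `static/` in the long-cache deployment
  exclude: ['**/*.*', '!static/**/*.*'],
}

const noCacheAssetOptions: AssetOptions = {
  // upload everything except the files under `static/` in the no-cache deployment
  exclude: ['static/**/*.*'],
}
```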
💡 The same results could also be achieved manually through the AWS Console if you are not using CDK, see editing object metadata in the Amazon S3 console:
- Open the Amazon S3 console and your bucket.
- Select the check box to the left of a file/directory.
- On the Actions menu, choose Edit actions, and choose Edit metadata.
- Choose Add metadata.
- For metadata Type, select System-defined.
- Select `Cache-Control` for the key and add `max-age=31536000,public,immutable` as the value.
- When you are done, hit Save Changes and Amazon S3 should edit all the selected files recursively. You can verify it by opening a specific file and scrolling down to the metadata section.