You probably know that these days AWS advises you to create a separate account for every team/env/component/etc. I am also certain you know that best practices and reality often differ. Maybe two teams need access to a shared component in one AWS account. Maybe the account still stems from the time when AWS advised against creating too many accounts, and refactoring into separate accounts is too cumbersome. Even if your account contains a single component managed by a single team, the component might encompass multiple services and you may want to strengthen your defense in depth.
Whatever the reason, if you are in a situation where an AWS account is shared, it's very likely that everyone uses S3. Users may have data there that others are allowed to read, but not manipulate, or even not touch at all. There are multiple ways to enforce rules like these, and IAM seems like the go-to option. By the end of this post I hope you'll see that things can be restricted more tightly than with IAM alone.
Best option: triple denial bucket policies
I am not going to drone on about all the possible options before finally arriving at what I think works best. The other viable options, with their drawbacks, are listed below.
Bucket policies are your best option. Before you think "trivial answer": the exact statements are tricky and partly undocumented. Otherwise, I probably wouldn't have bothered to write this post in the first place. The best way to start explaining the solution is by showing the actual policy first:
{
  "Version": "2012-10-17",
  "Id": "Triple-Denial-Policy",
  "Statement": [
    {
      "Sid": "AllowOnlySpecificPrincipals",
      "Effect": "Deny",
      "Principal": "*",
      "NotAction": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::a-team-specific-bucket/*",
        "arn:aws:s3:::a-team-specific-bucket"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "AIDAXXXXXXXXXX",
            "AROAXXXXXXXXXX:*"
          ]
        }
      }
    }
  ]
}
The meaning of a triple denial might be hard to grasp at first, so let me put into words what is happening. I find it easiest to read the policy from the bottom up: if you are not one of the users with a specified userid and you want to do anything with the specified resources other than get/list, then you are not allowed.
Interestingly, NotAction is specified in the IAM documentation, but not in the S3 documentation. But it works, go try it out :) Note that you still need to actually grant people read-only or broader rights to this bucket and its objects, but there are already more than enough examples of how to do that, so I'll leave that out here. This statement only prevents others from ever manipulating the bucket and its data. Save the AWS account's root user of course, but I think that's fair.
A bit about the userids. AWS actually has a nicely concise write-up about those. In short, they prevent a reused name for an IAM entity from granting unwanted access. So keep in mind you have to update this policy each time the team's composition changes! If a team locks itself out, only the root user can save them.
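To find the userids to put into the policy, you can query IAM with the AWS CLI. A quick sketch, where the user and role names are placeholders for your own:

```shell
# Unique id of an IAM user (starts with AIDA)
aws iam get-user --user-name alice --query 'User.UserId' --output text

# Unique id of an IAM role (starts with AROA);
# remember to append ":*" when using it in the bucket policy
aws iam get-role --role-name team-a-pipeline --query 'Role.RoleId' --output text
```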
Although it's not documented, I found that IAM roles (the userids starting with AROA) need the :* suffix to work. (Perhaps a session name is appended each time the role is used, or something like that?) Anyway, I haven't seen this behavior documented anywhere, and it occurs with both CodeBuild and CodePipeline. Maybe it's different for other services… At any rate, be warned that referencing a userid alone in an S3 bucket policy might not be enough to actually grant access to roles. By the way, be careful with granting bucket access to too many users and roles, or you may end up with the same issues I address below when discussing IAM.
Note that the policy is easily converted to fully deny access to everyone other than those specified, by changing the "NotAction": [...] to "Action": "s3:*". Finally, I haven't tried it, but a policy like this probably also works on other services with resource policies, such as KMS.
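For clarity, a sketch of that fully locked-down variant, using the same placeholder userids as above:

```json
{
  "Sid": "AllowOnlySpecificPrincipals",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::a-team-specific-bucket/*",
    "arn:aws:s3:::a-team-specific-bucket"
  ],
  "Condition": {
    "StringNotLike": {
      "aws:userid": [
        "AIDAXXXXXXXXXX",
        "AROAXXXXXXXXXX:*"
      ]
    }
  }
}
```

With this version, anyone not listed cannot even read the bucket's contents.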
Bonus: Terraform example
So how do you get this into your infrastructure as code? Suppose team-a wants a bucket with objects that only they and their pipeline can possibly manipulate, while other teams are present in the account. Then this could be the way to go:
data "aws_iam_group" "team_a" {
group_name = "team-a"
}
locals {
team_a_and_pipeline = concat(
data.aws_iam_group.team_a.users[*].user_id,
["${aws_iam_role.team_a_pipeline.unique_id}:*"]
)
}
resource "aws_s3_bucket" "team_a_stuff" {
bucket = "team-a-stuff"
}
data "aws_iam_policy_document" "team_a_stuff_bucket" {
policy_id = "Triple-Denial-Policy"
# No less than 3 denials in 1 statement so a little explanation:
# This statement should prevent write operations by all users
# except the ones specifically allowed
statement {
sid = "AllowOnlySpecificPrincipals"
effect = "Deny"
principals {
type = "*"
identifiers = ["*"]
}
not_actions = [
"s3:Get*",
"s3:List*"
]
resources = [
local.bucket_arn,
"${local.bucket_arn}/*"
]
condition {
test = "StringNotLike"
variable = "aws:userid"
values = local.team_a_and_pipeline
}
}
}
resource "aws_s3_bucket_policy" "team_a_stuff" {
bucket = aws_s3_bucket.team_a_stuff.id
policy = data.aws_iam_policy_document.team_a_stuff_bucket.json
}
I would suggest leaving some comments near the triple-denial statement to keep the code maintainable. Also, as already implied above when discussing the userids, someone from team-a will have to re-run this code whenever the team composition changes.
Other options (and their drawbacks)
IAM
You may rightly ask: "Why all the complicated denials when a simple IAM deny on the specific resource would also suffice?" That is indeed possible, but in a shared account I would advise against it, because you are not really solving the problem, only shifting it. Who governs the IAM policies in a shared account? If you have a dedicated IAM team in that account, you have just created an extra burden for them. If IAM is a shared responsibility, then the IAM deny statement can be overwritten by anyone from another team with sufficient IAM permissions, whereas the bucket policy above can only be changed by the principals it lists (the deny covers s3:PutBucketPolicy as well).
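For comparison, the "simple IAM deny" would be an identity-based policy roughly like the following sketch (bucket name as above), which you would then have to attach to every user and role outside the team:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyWritesOnTeamBucket",
      "Effect": "Deny",
      "NotAction": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::a-team-specific-bucket",
        "arn:aws:s3:::a-team-specific-bucket/*"
      ]
    }
  ]
}
```

Note that identity-based policies need no Principal element, but also that nothing stops a principal with IAM rights from simply detaching this policy again.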
Furthermore, IAM usually suffers from privilege creep. Also, in shared accounts privilege escalation is always lurking because of all the roles spawned by multiple teams, which, when used in the right order, can probably get you any IAM rights you want. (Even with triple-denial bucket policies, you could still reset someone else's password this way to gain access, but I think obtaining and using those rights unnoticed is the least likely form of privilege escalation in a well-designed AWS account.)
Lastly, you will have to enforce in some way that all users and roles spawned from unauthorized users also have this deny statement. Good luck. (It is possible to do that by the way, but the implementation is beyond the scope of this article and such a solution is not easier let alone clearer than a single resource policy.)
KMS
You could put a single simple statement in the bucket policy enforcing that all objects put into the bucket must be encrypted with a specific KMS key, to which only certain people have access. This works, but again you are only shifting the problem, just like with IAM. If you go for this strategy, you will have to protect the bucket policy and restrict usage of the key to specific users… and then you are back at the IAM problems.
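A sketch of such an enforcing statement, where the key ARN is a placeholder for your own key:

```json
{
  "Sid": "DenyWrongOrMissingKey",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::a-team-specific-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID"
    }
  }
}
```

Uploads that do not specify exactly this KMS key are rejected, but anyone who can edit this bucket policy or the key policy can undo the protection.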
Conclusion
If you want a bucket and its objects to be kept absolutely private to only some users/roles/root in an AWS account, or at least keep others from performing write actions, then a tightly written bucket policy as shown above is your best option. IAM and KMS can seemingly perform the same task more simply, but keeping others from slipping through the cracks will be a lot more complicated in the long run.
I wrote this blog and performed the work described in it at Simacan, my current employer. If you are just as passionate about this as I am, have a look at our working@ page or our developer portal and maybe we'll be working together soon :)