base: main
news3.tf #2
Conversation
Bridgecrew has found infrastructure configuration errors in this PR ⬇️
resource "aws_s3_bucket_object" "data_object" {
Ensure S3 bucket Object is encrypted by KMS using a customer managed Key (CMK)
Resource: aws_s3_bucket_object.data_object | ID: BC_AWS_GENERAL_106
How to Fix
resource "aws_s3_bucket_object" "object" {
bucket = "your_bucket_name"
key = "new_object_key"
source = "path/to/file"
+ kms_key_id = "ckv_kms"
# The filemd5() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
# etag = "${md5(file("path/to/file"))}"
etag = filemd5("path/to/file")
}
Description
This is a simple check to ensure that the S3 bucket object uses AWS Key Management Service (KMS) to encrypt its contents. To resolve, add the ARN of your KMS key, or a reference to it, when creating the object.
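The snippet above uses the placeholder value "ckv_kms"; as a minimal sketch (the key resource name data_object_key is assumed here, not taken from this PR), the argument would normally reference the ARN of a customer managed key:
resource "aws_kms_key" "data_object_key" {
  # Hypothetical customer managed key, shown only to illustrate the fix
  description             = "CMK for objects uploaded to the data bucket"
  deletion_window_in_days = 10
}

resource "aws_s3_bucket_object" "object" {
  bucket = "your_bucket_name"
  key    = "new_object_key"
  source = "path/to/file"
  # Pass the key's ARN rather than a literal string
  kms_key_id = aws_kms_key.data_object_key.arn
  etag       = filemd5("path/to/file")
}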
| resource "aws_s3_bucket" "data" { | |
| resource "aws_s3_bucket" "data" { | |
| # bucket is public | |
| # bucket is not encrypted | |
| # bucket does not have access logs | |
| # bucket does not have versioning | |
| bucket = "${local.resource_prefix.value}-data" | |
| acl = "private" | |
| force_destroy = true | |
| tags = merge({ | |
| Name = "${local.resource_prefix.value}-data" | |
| Environment = local.resource_prefix.value | |
| }, { | |
| git_commit = "d68d2897add9bc2203a5ed0632a5cdd8ff8cefb0" | |
| git_file = "terraform/aws/s3.tf" | |
| git_last_modified_at = "2020-06-16 14:46:24" | |
| git_last_modified_by = "nimrodkor@gmail.com" | |
| git_modifiers = "nimrodkor" | |
| git_org = "bridgecrewio" | |
| git_repo = "terragoat" | |
| yor_trace = "0874007d-903a-4b4c-945f-c9c233e13243" | |
| }) | |
| } |
Ensure bucket ACL does not grant READ permission to everyone
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_1
Description
Unprotected S3 buckets are one of the major causes of data theft and intrusions. An S3 bucket that allows **READ** access to everyone can give attackers the ability to read object data within the bucket, which can lead to the exposure of sensitive data. The only S3 buckets that should be globally accessible to unauthenticated users or to **Any AWS Authenticated Users** are those used for hosting static websites. The bucket ACL helps manage access to S3 bucket data. We recommend that AWS S3 buckets are not publicly accessible for READ actions, to protect S3 data from unauthorized users and avoid exposing sensitive data to public access.
Benchmarks
- NIST-800-53 AC-17
resource "aws_s3_bucket" "logs" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.logs | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "financials" {
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "data" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "operations" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.operations | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "data" {
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "logs" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.logs | ID: caswalker_AWS_1638221712530
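This is a custom policy with no fix snippet attached; a minimal sketch of a compliant bucket (the bucket name, tag key casing, and owner value are placeholders, not taken from this PR) simply adds an owner tag alongside the existing tags:
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs"
  tags = {
    Name  = "example-logs"
    # Hypothetical owner value; use whichever key and value the policy expects
    owner = "platform-team"
  }
}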
resource "aws_s3_bucket" "data" {
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "data" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.data | ID: caswalker_AWS_1638221712530
resource "aws_s3_bucket" "logs" {
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.logs | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket" "financials" {
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "data_science" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.data_science | ID: caswalker_AWS_1638221712530
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
resource "aws_s3_bucket" "operations" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.operations | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "logs" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.logs | ID: caswalker_AWS_1638221712530
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.data_science | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket_object" "data_object" {
Ensure S3 bucket Object is encrypted by KMS using a customer managed Key (CMK)
Resource: aws_s3_bucket_object.data_object | ID: BC_AWS_GENERAL_106
How to Fix
resource "aws_s3_bucket_object" "object" {
bucket = "your_bucket_name"
key = "new_object_key"
source = "path/to/file"
+ kms_key_id = "ckv_kms"
# The filemd5() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
# etag = "${md5(file("path/to/file"))}"
etag = filemd5("path/to/file")
}
Description
This is a simple check to ensure that the S3 bucket object uses AWS Key Management Service (KMS) to encrypt its contents. To resolve, add the ARN of your KMS key, or a reference to it, when creating the object.
resource "aws_s3_bucket" "financials" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.financials | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "financials" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.financials | ID: caswalker_AWS_1638221712530
resource "aws_s3_bucket" "data_science" {
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "operations" {
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "data" {
  # bucket is public
  # bucket is not encrypted
  # bucket does not have access logs
  # bucket does not have versioning
  bucket        = "${local.resource_prefix.value}-data"
  acl           = "private"
  force_destroy = true
  tags = merge({
    Name        = "${local.resource_prefix.value}-data"
    Environment = local.resource_prefix.value
  }, {
    git_commit           = "d68d2897add9bc2203a5ed0632a5cdd8ff8cefb0"
    git_file             = "terraform/aws/s3.tf"
    git_last_modified_at = "2020-06-16 14:46:24"
    git_last_modified_by = "nimrodkor@gmail.com"
    git_modifiers        = "nimrodkor"
    git_org              = "bridgecrewio"
    git_repo             = "terragoat"
    yor_trace            = "0874007d-903a-4b4c-945f-c9c233e13243"
  })
}
Ensure bucket ACL does not grant READ permission to everyone
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_1
Description
Unprotected S3 buckets are one of the major causes of data theft and intrusions. An S3 bucket that allows **READ** access to everyone can give attackers the ability to read object data within the bucket, which can lead to the exposure of sensitive data. The only S3 buckets that should be globally accessible to unauthenticated users or to **Any AWS Authenticated Users** are those used for hosting static websites. The bucket ACL helps manage access to S3 bucket data. We recommend that AWS S3 buckets are not publicly accessible for READ actions, to protect S3 data from unauthorized users and avoid exposing sensitive data to public access.
Benchmarks
- NIST-800-53 AC-17
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
resource "aws_s3_bucket" "data_science" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data_science | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "data" {
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "data" {
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "logs" {
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.logs | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "data" {
Ensure all S3 bucket has owner tag
Resource: aws_s3_bucket.data | ID: caswalker_AWS_1638221712530
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_56
Description
TBA
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
  }
  versioning {
    enabled = true
  }
}
Ensure AWS S3 object versioning is enabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled it allows users to retrieve and restore previous versions of their buckets. S3 versioning can be used for data protection and retention scenarios such as recovering objects that have been accidentally/intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
S3 buckets are not encrypted with KMS
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket" "financials" {
S3 Bucket does not have public access blocks
Resource: aws_s3_bucket.financials | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Data stored in the S3 bucket is not securely encrypted at rest
Resource: aws_s3_bucket.data_science | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
resource "aws_s3_bucket" "data_science" {
S3 Bucket does not have public access blocks
Resource: aws_s3_bucket.data_science | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
resource "aws_s3_bucket" "operations" {
S3 bucket cross-region replication disabled
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "financials" {
AWS access logging not enabled on S3 buckets
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "data" {
S3 Bucket does not have public access blocks
Resource: aws_s3_bucket.data | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
  }
  versioning {
    enabled = true
  }
}
AWS S3 object versioning is disabled
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled it allows users to retrieve and restore previous versions of their buckets. S3 versioning can be used for data protection and retention scenarios such as recovering objects that have been accidentally/intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
S3 buckets are not encrypted with KMS
Resource: aws_s3_bucket.financials | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket" "logs" {
AWS access logging not enabled on S3 buckets
Resource: aws_s3_bucket.logs | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "logs" {
S3 Bucket does not have public access blocks
Resource: aws_s3_bucket.logs | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
S3 buckets are not encrypted with KMS
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket" "data_science" {
S3 bucket cross-region replication disabled
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "data" {
terraform build demo
Resource: aws_s3_bucket.data | ID: 807152304871829504_AWS_1643881781204
resource "aws_s3_bucket" "financials" {
S3 bucket cross-region replication disabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
resource "aws_s3_bucket" "logs" {
S3 bucket cross-region replication disabled
Resource: aws_s3_bucket.logs | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Data stored in the S3 bucket is not securely encrypted at rest
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
resource "aws_s3_bucket" "data" {
S3 bucket cross-region replication disabled
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_72
How to Fix
resource "aws_s3_bucket" "test" {
...
+ replication_configuration {
+ role = aws_iam_role.replication.arn
+ rules {
+ id = "foobar"
+ prefix = "foo"
+ status = "Enabled"
+
+ destination {
+ bucket = aws_s3_bucket.destination.arn
+ storage_class = "STANDARD"
+ }
+ }
+ }
}
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
S3 buckets are not encrypted with KMS
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_56
Description
TBA
resource "aws_s3_bucket" "operations" {
S3 Bucket does not have public access blocks
Resource: aws_s3_bucket.operations | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to set the additional resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
  # bucket does not have access logs
  # bucket does not have versioning
  bucket = "${local.resource_prefix.value}-data"
  acl    = "public-read"
Bucket ACL grants READ permission to everyone
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_1
Description
Unprotected S3 buckets are one of the major causes of data theft and intrusions. An S3 bucket that allows **READ** access to everyone can give attackers the ability to read object data within the bucket, which can lead to the exposure of sensitive data. The only S3 buckets that should be globally accessible to unauthenticated users or to **Any AWS Authenticated Users** are those used for hosting static websites. The bucket ACL helps manage access to S3 bucket data. We recommend that AWS S3 buckets are not publicly accessible for READ actions, to protect S3 data from unauthorized users and avoid exposing sensitive data to public access.
Benchmarks
- NIST-800-53 AC-17
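No fix snippet is attached to this occurrence; a minimal sketch of the change (illustrative only, not taken from this PR) replaces the public-read ACL with a private one so anonymous READ access is removed:
resource "aws_s3_bucket" "data" {
  bucket = "${local.resource_prefix.value}-data"
  # private removes the public READ grant; public access blocks should still be added separately
  acl    = "private"
}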
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Data stored in the S3 bucket is not securely encrypted at rest
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
  }
  versioning {
    enabled = true
  }
}
AWS S3 object versioning is disabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled it allows users to retrieve and restore previous versions of their buckets. S3 versioning can be used for data protection and retention scenarios such as recovering objects that have been accidentally/intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
resource "aws_s3_bucket" "financials" {
All S3 Buckets must have a tag with key Classification and have versioning enabled
Resource: aws_s3_bucket.financials | ID: 807152304871829504_AWS_1643813941482
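This is a custom policy with no fix snippet attached; a minimal sketch of a bucket that would satisfy it (the bucket name and Classification value are placeholders, not taken from this PR) adds the tag and enables versioning:
resource "aws_s3_bucket" "financials" {
  bucket = "example-financials"
  tags = {
    # Hypothetical classification value; use whatever scheme the policy expects
    Classification = "Confidential"
  }
  versioning {
    enabled = true
  }
}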
resource "aws_s3_bucket" "data" {
All S3 Buckets must have a tag with key Classification and have versioning enabled
Resource: aws_s3_bucket.data | ID: 807152304871829504_AWS_1643813941482
resource "aws_s3_bucket" "data" {
AWS access logging not enabled on S3 buckets
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Data stored in the S3 bucket is not securely encrypted at rest
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
resource "aws_s3_bucket" "operations" {
AWS access logging not enabled on S3 buckets
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_13
How to Fix
resource "aws_s3_bucket" "bucket" {
acl = var.s3_bucket_acl
bucket = var.s3_bucket_name
policy = var.s3_bucket_policy
force_destroy = var.s3_bucket_force_destroy
versioning {
enabled = var.versioning
mfa_delete = var.mfa_delete
}
+ dynamic "logging" {
+ for_each = var.logging
+ content {
+ target_bucket = logging.value["target_bucket"]
+ target_prefix = "log/${var.s3_bucket_name}"
+ }
+ }
}
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
resource "aws_s3_bucket" "financials" {
terraform build demo
Resource: aws_s3_bucket.financials | ID: 807152304871829504_AWS_1643881781204
resource "aws_s3_bucket_object" "data_object" {
Ensure S3 bucket Object is encrypted by KMS using a customer managed Key (CMK)
Resource: aws_s3_bucket_object.data_object | ID: BC_AWS_GENERAL_106
How to Fix
resource "aws_s3_bucket_object" "object" {
bucket = "your_bucket_name"
key = "new_object_key"
source = "path/to/file"
+ kms_key_id = "ckv_kms"
# The filemd5() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
# etag = "${md5(file("path/to/file"))}"
etag = filemd5("path/to/file")
}