- download terraform
mkdir c:\terraform
#run from an elevated prompt; the Machine-scope PATH change only affects new shells
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\terraform", "Machine")
$env:Path += ";C:\terraform"   #make terraform resolvable in this session too
cd c:\terraform
Invoke-WebRequest -UseBasicParsing https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_windows_amd64.zip -OutFile terraform_1.0.11_windows_amd64.zip
Expand-Archive .\terraform_1.0.11_windows_amd64.zip -DestinationPath .
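- optionally, verify the download and confirm the binary runs (a sketch; the SHA256SUMS URL is assumed from HashiCorp's standard release layout)
Invoke-WebRequest -UseBasicParsing https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_SHA256SUMS -OutFile terraform_1.0.11_SHA256SUMS
$expected = (Select-String terraform_1.0.11_windows_amd64.zip .\terraform_1.0.11_SHA256SUMS).Line.Split(' ')[0]
$actual = (Get-FileHash .\terraform_1.0.11_windows_amd64.zip -Algorithm SHA256).Hash.ToLower()
if ($actual -ne $expected) { throw "checksum mismatch: $actual vs $expected" }
.\terraform.exe version   #should print Terraform v1.0.11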
- init a new working directory for the splunk config
mkdir -force c:\terraform\aws_kinesis_splk
cd c:\terraform\aws_kinesis_splk
terraform init
- https://registry.terraform.io/providers/hashicorp/aws/latest/docs
- https://learn.hashicorp.com/collections/terraform/aws
- https://github.com/disney/terraform-aws-kinesis-firehose-splunk
- after reviewing the process for several days, it became clear that it is extremely confusing. We are all lucky to have access to a module produced by a security or DevOps team at Disney. This is certainly what we'll use.
- build an aws cred file
mkdir -force "%USERPROFILE%\.aws\"
echo [default] > "%USERPROFILE%\.aws\credentials"
echo aws_access_key_id=AKIAIOSFODNN7EXAMPLE >> "%USERPROFILE%\.aws\credentials"
echo aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY >> "%USERPROFILE%\.aws\credentials"
echo region=us-east-1
[string]::Join( "`n", (gc ~/.aws/credentials)) | sc ~/.aws/credentials
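- equivalently, write the whole file in one shot (a sketch; piping through Set-Content avoids the UTF-16 output of '>')
@"
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-east-1
"@ -replace "`r`n","`n" | sc "$env:USERPROFILE\.aws\credentials"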
- get a HEC token and write it to a file
#write the raw bytes so Set-Content doesn't append a trailing newline, which would get encrypted along with the token
Set-Content C:\terraform\aws_kinesis_splk\hecplaintext.txt ([byte[]][char[]] "[HEC token]") -Encoding Byte
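- optionally, confirm the file holds only the token bytes (no trailing 0D/0A at the end of the dump)
Format-Hex -Path C:\terraform\aws_kinesis_splk\hecplaintext.txt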
- create a Symmetric KMS key on AWS and secure it for use by your IAM user.
- encrypt the HEC token with the KMS key you just created and save the base64 output; it becomes the hec_token value in the .tf below.
- https://awscli.amazonaws.com/AWSCLIV2.msi
- https://docs.aws.amazon.com/cli/latest/reference/kms/encrypt.html
aws kms encrypt --key-id ab123456-c012-4567-890a-deadbeef123 --plaintext fileb://C:\terraform\aws_kinesis_splk\hecplaintext.txt --output text --query CiphertextBlob
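- optionally, round-trip the ciphertext to confirm it decrypts back to your token (a sketch; the variable names and .bin path are just illustrative)
$cipherB64 = aws kms encrypt --key-id ab123456-c012-4567-890a-deadbeef123 --plaintext fileb://C:\terraform\aws_kinesis_splk\hecplaintext.txt --output text --query CiphertextBlob
[IO.File]::WriteAllBytes("C:\terraform\aws_kinesis_splk\hec_cipher.bin", [Convert]::FromBase64String($cipherB64))
$plainB64 = aws kms decrypt --ciphertext-blob fileb://C:\terraform\aws_kinesis_splk\hec_cipher.bin --output text --query Plaintext
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($plainB64))   #should print the HEC token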
- use the terraform module for kinesis firehose creation, creating a .tf with the following contents (after updating the placeholder values).
https://registry.terraform.io/modules/disney/kinesis-firehose-splunk/aws/latest
#you must include the provider here
provider "aws" {
  region = "us-east-1"
}

module "kinesis_firehose" {
  source                       = "disney/kinesis-firehose-splunk/aws"
  region                       = "us-east-1"
  arn_cloudwatch_logs_to_ship  = "arn:aws:logs:us-east-1:<aws_account_number>:log-group:/test/test01:*"
  name_cloudwatch_logs_to_ship = "/test/test01"
  hec_token                    = "AQICAHjJZIjnPvjfwo3qWGZHBmfEjf3zMnSEzsw98bGVy09PJQH1YnzXIdIry+lO9/y5xgSjAAAAaTBnBgkqhkiG9w0BBwagWjBYAgEAMFMGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMa2J06h2stCkHvctuAgEQgCYjnYGyTKqAksUvorxgzVQSVQNGdd5oEa/99wz/wZVj6auUtLVXHQ==" #not actually a HEC token.
  kms_key_arn                  = "arn:aws:kms:us-east-1:<aws_account_number>:key/<kms_key_id>"
  hec_url                      = "<Splunk_Kinesis_ingest_URL>"
  s3_bucket_name               = "<mybucketname>" #this name must be unique across all of AWS! If it is not unique, you will get an error that either you are not authorized, or that the region the bucket is stated to be in is the wrong region.
}
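- optionally, check the file before planning
terraform fmt
terraform validate   #needs 'terraform init' to have run so the module and provider are present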
- as needed, normalize the .tf to Unix line endings:
[string]::Join( "`n", (gc C:\terraform\aws_kinesis_splk\splk_kinesis.tf)) | sc C:\terraform\aws_kinesis_splk\splk_kinesis.tf
- run terraform init, refresh, plan, apply
Start-Transcript   #record this session's output to a transcript file
terraform init
terraform refresh
terraform plan
#review the output, as this will be what's applied to the resources
terraform apply
#this creates the AWS resources that ship the targeted log group to the Splunk HEC endpoint
Stop-Transcript
#test as you see fit, and then you can run `terraform destroy` if you want to destroy all items
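- as a variation, save the plan to a file so apply executes exactly what you reviewed
terraform plan -out=tfplan
terraform apply tfplan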
- review all items that were created
- Since we're trusting item creation to the module, let's leverage the terraform registry's list of resources and the terraform apply output to review each item (a quick way to enumerate them from state follows the list):
- aws_cloudwatch_log_group.kinesis_logs
- aws_cloudwatch_log_stream.kinesis_logs
- aws_cloudwatch_log_subscription_filter.cloudwatch_log_filter
- aws_iam_policy.cloudwatch_to_fh_access_policy
- aws_iam_policy.kinesis_firehose_iam_policy
- aws_iam_policy.lambda_transform_policy
- aws_iam_role.cloudwatch_to_firehose_trust
- aws_iam_role.kinesis_firehose
- aws_iam_role.kinesis_firehose_lambda
- aws_iam_role_policy_attachment.cloudwatch_to_fh
- aws_iam_role_policy_attachment.kinesis_fh_role_attachment
- aws_iam_role_policy_attachment.lambda_policy_role_attachment
- aws_kinesis_firehose_delivery_stream.kinesis_firehose
- aws_lambda_function.firehose_lambda_transform
- aws_s3_bucket.kinesis_firehose_s3_bucket
- aws_s3_bucket_public_access_block.kinesis_firehose_s3_bucket
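- to enumerate what terraform is actually tracking, as mentioned above:
terraform state list   #compare against the module's documented resource list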
- now that the log stream is onboarded, if you review the events, you will see that the log contents aren't formatted properly for HEC. This is because the lambda function from that module doesn't format them as JSON. Instead, you could use the function described [here](https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-ingest-any-log-from-aws-cloudwatch-logs-via-firehose.html), available for review in [this repo](https://github.com/ptdavies17/CloudwatchFH2HEC).