DevSecOps finally became popular within the wider IT industry in 2019. I started as a web developer in 2001, and learned about test automation, system deployment automation, and "infrastructure as code" in 2012, when DevOps was becoming a popular term. DevOps became common after the release of The Phoenix Project in January 2013. It has taken seven years for security to become integrated within the DevOps methodology.

The following is a list of concepts I go through with project owners, project managers, operations, developers, and security teams, to help establish how mature their DevOps and security automation is, and to help them increase that maturity over time.
- PII and public facing = high
- PII and internal facing = medium
- no PII and public facing = medium
- no PII and internal facing = low
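The classification above can be sketched as a small function (a minimal sketch; the function and argument names are my own, not from any particular tool):

```python
def classify_risk(has_pii: bool, public_facing: bool) -> str:
    """Classify a system's risk level from data sensitivity and exposure."""
    if has_pii and public_facing:
        return "high"
    if has_pii or public_facing:
        return "medium"
    return "low"

print(classify_risk(has_pii=True, public_facing=True))    # high
print(classify_risk(has_pii=False, public_facing=True))   # medium
print(classify_risk(has_pii=False, public_facing=False))  # low
```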
- public facing
- uses PII data
- costs more than $5,000 / month
- Use a shared platform
- Ask for platform support
- Buy 3rd-party hosting
- Train your engineers
- Hire new engineers
- None
- Ad-hoc, manually
- Quarterly, manually
- Monthly, semi-automated
- Weekly, fully-automated
Level 5 includes:
- tests
- reports
- alerts
- dashboards
This list is a combination of multiple other security and DevOps maturity models. It is ordered so that each aspect builds upon the previous ones. The specific ordering can be adjusted slightly, but doing so risks gaps in coverage.
- Inventory
- Financial
- Access
- Build
- Install and Patch
- Configuration
- Secrets
- Deploy
- Backup and Restore
- Reliability
- Metrics and Logging
- Alerts
- Response Playbooks
- Automated Remediation
- Threat Modeling
- unaccounted resources will be exploited without notice
- unaccounted resources will not be patched
- unaccounted resources will continue to accrue costs unchecked
- AWS Security Hub
- AWS skew (opensource)
- NCC Group, aws-inventory (opensource)
- Duo Security / cloudmapper (opensource)
- Azure Resource Inventory
- Google Cloud Asset Inventory
The security teams should work with the Finance, Ops, and Dev teams (in that order) to establish how many cloud provider accounts are being paid for, who owns them, and who has access to them.
- unaccounted resources can be exploited financially, e.g. Bitcoin mining
- AWS Trusted Advisor
- VMWare Cloudhealth
- Cloudyn
- Cloudcheckr
- Cloudability
- ManageEngine
- FlexNet Manager
The security teams should work with the Finance, Ops, and Dev teams to establish cloud account and resource ownership. A good rule of thumb is: "If you are paying for it, you own it, and you need to maintain it."
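One way to make the "you pay for it, you own it" rule enforceable is a tag audit. The sketch below is hypothetical: the `owner` and `cost-center` tag names are assumptions, and a real inventory would come from your cloud provider's API rather than a hard-coded list:

```python
def find_unowned(resources):
    """Return resource IDs missing an 'owner' or 'cost-center' tag,
    so Finance and Ops can chase down who is paying for them."""
    required = {"owner", "cost-center"}
    return [r["id"] for r in resources if not required <= set(r.get("tags", {}))]

inventory = [
    {"id": "i-0abc", "tags": {"owner": "team-payments", "cost-center": "cc-42"}},
    {"id": "i-0def", "tags": {"owner": "team-search"}},   # missing cost-center
    {"id": "vol-9", "tags": {}},                          # untagged
]
print(find_unowned(inventory))  # ['i-0def', 'vol-9']
```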
- insecure default passwords, old passwords, shared passwords, or reused passwords will be exploited
The security teams should work with the Ops and Dev teams to establish who has access to cloud accounts and resources. Any form of access should be known and managed in a centralized system. Within AWS for example, this would mean every engineer has their own dedicated IAM User account, and is allowed to use IAM STS Assume Role to manage resources in multiple AWS accounts. Passwords should be complex and long, and rotated regularly. Engineers should be given password managers.
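A rotation policy is easy to check mechanically. The sketch below is illustrative: the 90-day window is an assumed policy, and real rotation timestamps would come from your identity provider (for example, an AWS IAM credential report):

```python
from datetime import datetime, timedelta, timezone

MAX_PASSWORD_AGE = timedelta(days=90)  # assumed rotation policy, not from the article

def stale_credentials(users, now):
    """Return user names whose password was last rotated beyond the policy window."""
    return [u["name"] for u in users if now - u["last_rotated"] > MAX_PASSWORD_AGE]

now = datetime(2019, 6, 1, tzinfo=timezone.utc)
users = [
    {"name": "alice", "last_rotated": datetime(2019, 5, 1, tzinfo=timezone.utc)},
    {"name": "bob",   "last_rotated": datetime(2019, 1, 1, tzinfo=timezone.utc)},
]
print(stale_credentials(users, now))  # ['bob']
```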
- old services without updates will be exploited
- mitigation will be time consuming
- Jenkins (opensource)
- TravisCI
- CircleCI
- Atlassian Bamboo
- Jenkins, Ansible plugin (opensource)
- Puppet Pipelines
- Chef Habitat
I have found Jenkins to be the most robust build platform; it has the largest number of plugins and integrations. For my open source projects, I use TravisCI for simple, scoped builds: testing Ansible roles, building Docker images, and building VM images. Avoid calling the command line directly at all costs, and avoid shell scripts; both are difficult to maintain, difficult to debug, and unreliable.
- insecure components will be attacked
- mitigation will be time consuming
- Gradle - task runner, Java (opensource)
- Grunt - task runner, node.js (opensource)
- Fabric - task runner, Python (opensource)
- Hashicorp Packer - image builder (opensource)
- Ansible Molecule
- Chef Kitchen
I've used all three task runners mentioned above. I've found Gradle to be the most mature of the three, though I prefer the simple JavaScript syntax of Grunt. I have seen very few competitors to Hashicorp's Packer. I use Ansible roles, and therefore use Molecule.
- insecure defaults, and insecure configurations, will be attacked
- auditing will be time consuming to manually track configurations
- mitigation will be time consuming
- Manager
- Server config
- Augeas - service config retrieval (opensource)
- osquery - server config monitoring (opensource)
- DevSec Hardening Framework - Chef, Puppet, Ansible for CIS Benchmarks (opensource)
- Serverspec - BDD, internal infrastructure testing (opensource)
- Infrataster - BDD, external infrastructure testing (opensource)
- gauntlt - BDD, security infrastructure testing (opensource)
- continuum security / bdd-security
- F-Secure mittn
- Chef Inspec
- Docker images
- 20 Docker Security Tools
- Top 10 Open Source Tools for Docker security
- Docker Bench Security - security audit Docker hosts
- anchore-cli - Docker image security scanner
- Clair - Docker image security scanner
- Dagda - Docker image security scanner
- Banyan Collector - Docker image security scanner
- OpenSCAP - security scanner
- Vuls - security scanner
- Sysdig Falco - Docker image and Host security monitor
- Cilium - Docker network security scanner
- Amazon AWS
Focus on what is already deployed and collect how it is configured, using Augeas. Then set up alerts for when a configuration changes, using osquery with the Augeas plugin. The DevSec Hardening Framework is a collection of Chef, Puppet, and Ansible scripts to help ensure compliance with baselines such as the CIS Benchmarks. Use Serverspec / Goss (config), Infrataster (e2e), and Gauntlt (e2e security) to perform BDD testing. Use one of the Docker image scanners. Use the AWS security scanners.
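Whatever tool collects the data, the core of configuration monitoring is comparing a desired baseline against what a host actually reports. A minimal sketch, using made-up sshd settings as the example:

```python
def config_drift(desired: dict, actual: dict) -> dict:
    """Report keys whose actual value differs from the desired baseline,
    plus keys present on the host but absent from the baseline."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    for key in actual.keys() - desired.keys():
        drift[key] = {"desired": None, "actual": actual[key]}
    return drift

baseline = {"PermitRootLogin": "no", "PasswordAuthentication": "no"}
host = {"PermitRootLogin": "yes", "PasswordAuthentication": "no", "X11Forwarding": "yes"}
print(config_drift(baseline, host))
```

An empty result means the host matches the baseline; anything else is a candidate for an alert.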
- insecure default passwords, or old passwords, or shared passwords, or reused passwords, will be exploited
- insecure passwords will be time consuming to update and rotate
- maxvt/infra-secret-management-overview.md
- Jenkins Credentials Manager (opensource)
- AWS Secrets Manager
- Hashicorp Vault
- Ansible Vault
- Puppet Hiera
- Chef Data Bags
I have a gist for doing this within Jenkins.
- insecure cloud service configurations will be attacked
- cloud service auditing will be time consuming to manually track configurations
- cloud service mitigation will be time consuming for all untracked configurations
- Hashicorp Terraform (opensource)
- Hashicorp Sentinel
- AWS Cloudformation
- Azure Resource Manager
- Google Deployment Manager
I've worked with over a dozen development teams, all using AWS. The AWS Cloudformation tool is not usable for medium to large scale projects: there are multiple resource limits which cannot be increased by talking with an AWS Account Manager, and the Cloudformation template format is tedious to maintain, confusing to debug, and fails in unexpected ways. However, I have had very few issues using Hashicorp Terraform, which also allows for multi-cloud support.
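Whichever tool you choose, infrastructure as code boils down to declarative documents that can be generated, diffed, and validated in CI. As an illustration (the resource names are mine, not from the article), a minimal CloudFormation template for a versioned S3 bucket can be produced with nothing but the standard library:

```python
import json

def s3_bucket_template(bucket_name: str) -> str:
    """Emit a minimal CloudFormation template as JSON (illustrative only)."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ArtifactBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
    }
    return json.dumps(template, indent=2)

print(s3_bucket_template("example-build-artifacts"))
```

Generating templates programmatically like this keeps them under version control and reviewable like any other code.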
- data will be exfilled
- data may be deleted irrecoverably
- AWS RDS Backup
- AWS RDS Backup, docs
- Azure SQL Automated Backups
- GCP - MySQL, Overview of Backups
- GCP - Postgres, Overview of Backups
This is left mostly to the Ops team, with some cross-training for Devs, and monthly check-ins from the Security teams. Customers and internal clients will ask about the "backup and restore strategy" at the start of a project, and before a go-live.
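A backup strategy is only as good as its most recent verified backup. A minimal sketch of an automated check, assuming a 24-hour recovery point objective (the RPO value is an assumption, not from the article):

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=24)  # assumed recovery point objective

def backup_within_rpo(last_success: datetime, now: datetime) -> bool:
    """True when the most recent successful backup falls inside the RPO window."""
    return now - last_success <= RPO

now = datetime(2019, 6, 1, 12, 0)
print(backup_within_rpo(datetime(2019, 6, 1, 2, 0), now))   # True
print(backup_within_rpo(datetime(2019, 5, 28, 2, 0), now))  # False
```

A check like this belongs in the same alerting pipeline as everything else, so a silently failing backup job surfaces before a restore is needed.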
- DDoS attacks
- AWS Auto Scaling
- AWS Auto Scaling, docs
- Azure Autoscale
- Azure Autoscale, docs
- GCP, Load Balancing
- GCP Autoscaler, docs
This is also left mostly to the Ops team, with some cross-training for Devs, and monthly check-ins from the Security teams. Customers and internal clients will ask about the "scaling and reliability strategy" at the start of a project, and before a go-live.
- unmonitored assets will be attacked, without alerts
- unmonitored assets will be difficult and time consuming to remediate
- Elastic Metricbeat (opensource)
- Elastic Packetbeat (opensource)
- Elastic Heartbeat (opensource)
- Prometheus (opensource)
- Pingdom
- Icinga
- Nagios
- SignalFX
- Datadog
- New Relic
- Wavefront
- Elastic Filebeat (opensource)
- rsyslog (opensource)
- fluentd (opensource)
- Graylog (opensource)
- Splunk
- LogRhythm
- Loggly
- LogDNA
- Sumologic
- Elastic Auditbeat (opensource)
- RITA - Real Intelligence Threat Analytics (opensource)
- Carbon Black CB Defense
- Threat Stack
- Alert Logic
- Crowdstrike
- Darktrace
- Malwarebytes
- FireEye
- TrendMicro
- Symantec Endpoint Protection
I've used almost every product in this list. Splunk is the most powerful, but also the most expensive. Surprisingly, fluentd is used natively within Microsoft Azure, Google GCP, and by Kubernetes, despite its parent company Treasure Data no longer offering enterprise support. I'm a huge fan of the Elastic ELK stack. I've been very impressed by Carbon Black's CB Defense.
- unmonitored assets will be attacked, without alerts
- unmonitored assets will be difficult and time consuming to remediate
Resist the urge to turn on alerts for everything; only critical and high severity findings should trigger automated alerts. As those alerts become less frequent and the responses more automated, slowly add alerts for medium severity findings.
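The filtering rule above is simple enough to encode directly, whatever alerting pipeline you use (the severity names and finding shape here are illustrative):

```python
ALERTABLE = {"critical", "high"}  # start here; add "medium" as noise drops

def should_alert(finding: dict) -> bool:
    """Only page on the severities the team can actually respond to."""
    return finding["severity"] in ALERTABLE

findings = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "medium"},
    {"id": 3, "severity": "high"},
]
alerts = [f["id"] for f in findings if should_alert(f)]
print(alerts)  # [1, 3]
```

Keeping the threshold in one place makes the later move to include medium findings a one-line change.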
- threat responses will be ad-hoc, inconsistent, and slow
- create Response Playbooks
In my experience, across multiple projects and companies, the playbook threats will be the same, e.g. "Unauthorized Access from a Foreign IP Address." The responses will be very different, and will be a reflection of each company's social structure.
- vulnerable and exploited assets will be time consuming to remediate
This is a very new space, with lots of marketing hype and confusion, even among experienced DevOps + Security practitioners. My general advice is that the more homogeneous and generic an environment is, the easier automation will be. If there are custom Docker images, custom compiled languages, custom plugins, etc., then these automation tools will not know how to interact with and manage them.
- lack of visibility into risk
- owners
- support engineers
- architecture diagrams
- dataflow transitions
- data classifications
- malicious user stories
For your first pass, just go through the DevSecOps Maturity Model and score where a given team / project is in its maturity. Wherever a team scores below a 5 in the model is where they should focus. The intention should not be to reward the teams which are already mature, but to put more focus on the least mature teams, track their progress, and reward growth. Going from 70 points to 75 points is not as impressive as going from 15 points to 25 points.
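Rewarding growth rather than absolute score can be made concrete by measuring relative improvement (a sketch; the formula is my own illustration, not part of any published maturity model):

```python
def growth(before: int, after: int) -> float:
    """Relative improvement: rewards immature teams that make real progress."""
    return (after - before) / before

print(round(growth(70, 75), 2))  # 0.07
print(round(growth(15, 25), 2))  # 0.67
```

By this measure, the team going from 15 to 25 points improved almost ten times as much as the team going from 70 to 75.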
- https://blog.sonatype.com/a-devsecops-maturity-model-in-7-words
- https://tech.gsa.gov/guides/dev_sec_ops_guide/
- https://www.owasp.org/index.php/OWASP_DevSecOps_Maturity_Model
- https://cloud.google.com/free/docs/map-aws-google-cloud-platform