Securing CI/CD pipelines
28 Nov 2022
Here are some considerations to keep in mind when deciding where to store sensitive information as you secure a CI/CD pipeline:
- Plan to fail, and practice recovering from failure. Do “fire drills” and force yourself to change important passwords in nonproduction and production systems as often as monthly, to make sure all staff who might have to do it in an emergency are well-rehearsed.
- Log data points that reveal when an important secret is actually being used to “do stuff.” Build reports and dashboards that surface anomalous behavior (and revisit the assumptions behind them at least yearly) so you can distinguish “normal important people doing important stuff” from “it seems we might have a hacker or a malicious employee.” Actually read and investigate every anomaly report.
- Limit the scope of what knowing any given secret enables you to do. (Note: the tradeoff is that you’ll probably end up with a lot more accounts/roles/users for your infosec team to manage, but following the “principle of least privilege” will almost certainly be worth the effort if – when! – secrets get compromised.)
- Limit the conditions under which knowing a secret actually lets you do the thing it supposedly unlocks. Examples:
  - Restrict access to a powerful database-user secret by IP address (there’s a resource-policy sketch of this after the list).
  - Can sessions be short-lived and require some sort of renewal, to minimize the window in which a replay attack would be effective? (See the short-lived-credentials sketch after the list.)
  - Consider adding a real-time-human-secret component to kicking off the use of other secrets.
    - At an Oracle user group, I heard Oracle Cloud offers a setting that forbids anyone from logging into the operating system of the server hosting a database as `root` or `oracle` (which has superpowers over any Oracle system running on such a server) without first logging into the operating system as, say, `admin_voter_1`, initiating a “May I?” request, and then having 2 more operating system users, say, `admin_voter_2` and `admin_voter_3`, vote “Yes.” That way, humans can still become the user they need to become during planned maintenance events, but a single human’s compromised LastPass (as long as, of course, they don’t just know & keep all 3 “voter” passwords in it) can’t let an attacker escalate to `root` or `oracle` within the operating system of the server running the database without other humans being aware of the attempt.
    - Perhaps certain CI/CD build/deploy pipelines or their components can’t be kicked off without a human typing a secret interactively.
- Definitely keep secrets out of version-controlled collections of source code. Even consider taking your hosting platform (e.g. GitHub) up on its offer to scan your codebase for accidental secrets and let you know if one makes it through (a rough local pre-commit check in the same spirit is sketched after this list).
- Cloud tools like Azure Key Vault, AWS Secrets Manager, AWS Systems Manager Parameter Store, parts of Azure App Service, the “environment” settings of a Netlify project, etc. are built for storing secrets whose values you intend to let privileged running machines fetch as needed (there’s a fetch-at-runtime sketch after this list).
  - This would be a good time to consider refactoring the way things are built so that “SSH into a privileged running machine and execute ad-hoc shell scripts that dump important secrets to system output, fetched using the instance of the AWS CLI running on the machine” isn’t doable, lest you turn your “privileged running machine” into a big old “read everything in AWS Secrets Manager” attack surface.
- Storing secrets in locked-down environment variables on long-running servers, locked-down non-shared filesystems on those servers, locked-down databases, etc. is still way better than storing them in version-controlled source code or on shared filesystems, and might be the best you can do in some cases (there’s a minimal environment-variable sketch after this list).
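
To make the “restrict by IP address” idea concrete, here’s a minimal sketch that attaches a resource policy to an AWS Secrets Manager secret so it can’t be read from outside a given network range. The secret name and CIDR block are made up, and this assumes the secret lives in Secrets Manager; most clouds and database host-based-auth configs have an equivalent “only from these networks” knob.

```python
# Sketch: deny retrieval of a database-admin secret from anywhere outside the
# CI/CD runners' network range. Secret name and CIDR block are hypothetical.
import json

import boto3

secretsmanager = boto3.client("secretsmanager")

deny_outside_ci_network = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",  # in a resource policy, "*" means "this secret"
            "Condition": {"NotIpAddress": {"aws:SourceIp": "10.20.0.0/16"}},
        }
    ],
}

secretsmanager.put_resource_policy(
    SecretId="prod/db/admin-password",  # hypothetical secret name
    ResourcePolicy=json.dumps(deny_outside_ci_network),
)
```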
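And here’s a sketch of the “short-lived sessions that require renewal” idea using AWS STS: the pipeline assumes a role for 15 minutes instead of holding a long-lived key, so a replayed or stolen credential goes stale quickly. The role ARN and session name are hypothetical.

```python
# Sketch: mint short-lived credentials for a deploy step instead of handing the
# pipeline a long-lived key. Role ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")

session = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ci-deployer",  # hypothetical role
    RoleSessionName="deploy-2022-11-28",
    DurationSeconds=900,  # 15 minutes; the credentials are useless after that
)

creds = session["Credentials"]
deploy_client = boto3.client(
    "s3",  # or whatever service the deploy step actually touches
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# Once creds["Expiration"] passes, the pipeline has to "renew" by assuming the
# role again, and a stolen copy of these values goes stale on its own.
```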
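Platform-side secret scanning (e.g. GitHub’s) is a settings toggle rather than code, but a rough local pre-commit check in the same spirit might look like the sketch below. The regexes are illustrative and nowhere near exhaustive; treat it as a complement to the platform’s scanner, not a replacement.

```python
#!/usr/bin/env python3
# Sketch of a local pre-commit check that refuses a commit when staged files
# contain strings shaped like secrets. Patterns are illustrative, not exhaustive.
import re
import subprocess
import sys

SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
]

# Files staged for this commit (added, copied, or modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

problems = []
for path in staged:
    try:
        text = open(path, encoding="utf-8", errors="ignore").read()
    except OSError:
        continue
    for pattern in SUSPICIOUS:
        if pattern.search(text):
            problems.append(f"{path}: matches {pattern.pattern}")

if problems:
    print("Possible secrets staged for commit:", *problems, sep="\n  ")
    sys.exit(1)  # a nonzero exit makes git abort the commit
```

Saved as an executable `.git/hooks/pre-commit` (or wired up through whatever hook manager you already use), the nonzero exit blocks the commit until you clean things up.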
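Here’s what the “let a privileged running machine fetch the value as needed” pattern looks like at its most minimal, using AWS Secrets Manager (the secret name is made up; Azure Key Vault and friends have the same shape). The machine’s role or managed identity authorizes the call, so no credential has to live in code or a build image.

```python
# Sketch: fetch a secret at runtime instead of baking it into code or an image.
# The secret name is hypothetical; the machine's IAM role, not a hard-coded key,
# is what authorizes the call.
import boto3

secretsmanager = boto3.client("secretsmanager")

response = secretsmanager.get_secret_value(SecretId="prod/api/webhook-signing-key")
webhook_signing_key = response["SecretString"]
# Use it, don't log it, and don't write it anywhere that outlives the process.
```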
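And if a managed secrets store isn’t in the cards and you’re down at the “locked-down environment variables” tier, at least read the variable once at startup and fail fast when it’s missing, rather than limping along with a blank password. The variable name below is made up.

```python
# Sketch: read a secret from a locked-down environment variable at startup and
# refuse to run without it. Variable name is hypothetical.
import os
import sys

db_password = os.environ.get("APP_DB_PASSWORD")
if not db_password:
    sys.exit("APP_DB_PASSWORD is not set; refusing to start with a blank password.")
# Pass db_password to the DB driver; never echo it into logs or build output.
```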
Further reading: