Whether it’s package hijacking, dependency confusion, typosquatting, continuous integration and continuous delivery (CI/CD) compromises, or basic web exploitation of outdated dependencies, there are many software supply chain attacks adversaries can use to take down their victims, hold them to ransom, and exfiltrate critical data.
It’s often more efficient to attack a weak link in the chain to reach a bigger target, as happened with Kaseya and SolarWinds in the last couple of years. Attackers can plant a remote code execution (RCE) backdoor or harvest developers’ credentials to escalate privileges and perform malicious actions stealthily.
Besides, attackers may only need to compromise a single package to distribute malware to a wide range of users and organizations, because the current supply chain is insanely complex and interconnected.
Of course, developers cannot be held responsible for all vulnerabilities, but they usually have privileged accounts and even direct access to sensitive documents and pipelines, which makes them increasingly attractive targets.
To help developers protect against supply chain attacks, the U.S. National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the Director of National Intelligence (ODNI) recently released a comprehensive guide to securing code and development processes.
Stopping Malicious Code Injections
According to the guide, threat actors still use public vulnerability disclosures but, rather than waiting for them, “they proactively inject malicious code into products that are then legitimately distributed downstream through the global supply chain.”
Dev teams often struggle with updates and time-consuming DevOps (development and operations) work, so they rely on CI/CD pipelines to automate deployments and tests. But these pipelines are sometimes misconfigured and often lack security checks.
Another popular technique is to compromise a package used only by developers (e.g., devDependencies in Node.js) to harvest credentials such as AWS keys.
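One mitigation, assuming an npm-based project, is to keep development-only packages out of production and CI installs entirely. An illustrative `.npmrc` for those environments might look like this:

```ini
; .npmrc for CI/production environments (illustrative)
; skip devDependencies entirely during installs
omit=dev
; refuse to run package lifecycle scripts, a common install-time malware entry point
ignore-scripts=true
```

Pinning exact versions with a lockfile and installing via `npm ci` further reduces exposure to hijacked upstream releases.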
The new U.S. guidance identifies common threat scenarios during the software life cycle:
- An adversary intentionally injects malicious code, or a developer unintentionally includes vulnerable code within a product.
- Vulnerable third-party source code or binaries are incorporated within a product, either knowingly or unknowingly.
- Weaknesses within the build process are exploited to inject malicious software within a component of a product.
- A product within the delivery mechanism is modified, resulting in injection of malicious software within the original package, update, or upgrade bundle deployed by the customer.
The document lists concrete measures to reduce the risk:
- Generate architecture and design documents.
- Gather a trained, qualified, and trustworthy development team.
- Create threat models of the software product.
- Define and implement security test plans.
- Define release criteria, and evaluate the product against them.
- Establish product support and vulnerability handling policies and procedures.
- Assess the developers’ capabilities and understanding of the secure development process, and assign training.
- Document and publish the security procedures and processes for each software release.
How to Secure Code
Writing secure code involves procedures like code reviews and security tests, regardless of the programming language, even if some languages, like Rust, prioritize safety by default.
The guide highlights the prevalence of both intentional and unintentional injections of malicious code in attacks.
Engineers and developers can be compromised in seemingly harmless circumstances, such as workplace dissatisfaction or outside influence. A lack of training can also explain nasty design flaws, which are hard to detect and can lead to zero-day attacks that remain unpatched for months.
Besides, programmers like to implement special parameters and other debugging features to ease troubleshooting or setup. Unfortunately, it’s not uncommon for these “hacks” to end up in production, either for convenience or because someone simply forgets to remove them after use.
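One simple guardrail, sketched below in shell (the `APP_DEBUG` variable name is illustrative, not from the guide), is to gate debug behavior behind an explicit, default-off switch rather than a hidden parameter, so it cannot silently ship enabled:

```shell
#!/bin/sh
# Hypothetical service launcher: debug tooling only activates when
# APP_DEBUG is explicitly set to 1, and defaults to off everywhere else.
if [ "${APP_DEBUG:-0}" = "1" ]; then
  echo "debug mode: verbose logging and trace endpoints enabled"
else
  echo "debug mode off"
fi
```

An explicit opt-in like this also makes leftover debug access easy to audit: a deployment checklist only has to confirm the variable is unset.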
The guide invites technical teams to apply the following mitigations:
- Implement a well-balanced, authenticated source code check-in process, following good practices with Git repositories and multi-factor authentication (MFA).
- Perform automatic static and dynamic security/vulnerability scanning.
- Conduct nightly builds with security and regression tests.
- Map features to requirements like restricting dev packages and deleting unused dependencies.
- Prioritize code reviews, and review critical code.
- Implement secure software development/programming training.
- Harden the development environment via methods such as VPNs, MFA, jump hosts, and threat modeling for each environment.
How to Improve the Build Process
Whether for an individual developer or the production build environment, the security of the software should be validated before it’s delivered and distributed to end users. Teams can leverage various tools and techniques, for example:
- Indirect controls like vulnerability scans, pentests, watermarks, data loss prevention (DLP), and integrity checks
- SBOMs (software bills of materials) and digital signatures to validate deliveries
- Rapid iterative cycles (agile development)
- Access logs for all pipelines
- Encrypted secrets
- The least privilege principle
- Network segregation
- On-premises deployment
- Version control
- A/B testing in CI/CD pipelines
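As an illustration of keeping secrets encrypted at rest, a secret file can be protected with standard OpenSSL. The file names and passphrase below are placeholders; in practice the passphrase would come from a secret manager or a protected CI variable, never from the repository:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# A pipeline secret that should never be committed in cleartext:
echo "AWS_SECRET_ACCESS_KEY=example-value" > secrets.env

# Encrypt it with AES-256-CBC and a PBKDF2-derived key:
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:demo-passphrase \
  -in secrets.env -out secrets.env.enc

rm secrets.env   # only the ciphertext is stored

# The pipeline decrypts it at run time:
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:demo-passphrase \
  -in secrets.env.enc -out secrets.env
```

Dedicated tools such as SOPS or a cloud KMS offer key rotation and auditing on top of the same basic idea.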
Best Practices for Version Control
The document provides guidelines for the protection of the source code.
First, access control and validation start with good source code management (SCM) principles for tracking modifications to the repository.
Dev teams should also enable notifications to be alerted when a new threat, version, or update is found. Major versioning platforms like GitLab and GitHub provide such features, but the guide recommends going further and keeping “a log of all developers and the components they download.”
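GitHub’s Dependabot, for instance, can be configured to watch a project’s dependencies and raise alerts or update pull requests; a minimal `.github/dependabot.yml` (the ecosystem and schedule below are illustrative) looks like this:

```yaml
# .github/dependabot.yml — minimal example
version: 2
updates:
  - package-ecosystem: "npm"   # ecosystem to monitor
    directory: "/"             # location of the manifest file
    schedule:
      interval: "daily"        # how often to check for updates
```

GitLab offers comparable functionality through its dependency scanning features.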
MFA should be enabled “for all access” to the repository, and teams can leverage basic Git branching to keep things organized:
- Developers work in the development branch.
- Leads promote software to a QA (quality assurance) branch after code review and approval.
- QA teams test the software from the QA branch.
- If approved, the branch can be merged into production.
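The promotion flow above can be sketched with plain Git branches. The branch names are illustrative, and platforms like GitHub or GitLab would add protected-branch rules and required reviews on top:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q -b production app && cd app
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m "initial release"

# QA and development branches start from the released state:
git branch qa
git branch development

# Developers work in the development branch:
git switch -q development
echo "fix" > patch.txt
git add patch.txt
git commit -qm "bug fix"

# After code review and approval, a lead promotes the change to QA:
git switch -q qa
git merge -q --no-ff -m "promote to QA" development

# Once QA approves, the change is merged into production:
git switch -q production
git merge -q --no-ff -m "release" qa
```

Using `--no-ff` keeps an explicit merge commit at each promotion, so the audit trail shows who moved what between stages.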
The guide recommends restricting access to the production branch to “a small set of build and team members” and implementing lockdown procedures after each release to secure the builds.
Developers should also sign commits. It’s not explicitly mentioned in the guide, but some attacks rely on stolen keys to push commits; in that case, the unauthorized modifications are attributed to a legitimate user.
It’s not uncommon for developers to use temporary keys to set up environments. If they don’t remove the keys after usage, an attacker could find them after gaining access to the server.
Another attack consists of impersonating a legitimate maintainer by creating a fake package and configuring Git with the maintainer’s information (a form of typosquatting).
Developers can sign commits with GPG (GNU Privacy Guard) keys or tools like Gitsign. It’s not bulletproof, but this additional layer of security is relatively easy to set up.
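To illustrate how signing can be enforced, and not just encouraged (this is an assumption on my part, not a procedure from the guide), a server-side Git pre-receive hook can refuse pushes containing unsigned commits. The repository layout and branch name below are illustrative:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A bare "server" repository with a hook that checks signatures:
git init -q --bare server.git
cat > server.git/hooks/pre-receive <<'EOF'
#!/bin/sh
zero=0000000000000000000000000000000000000000
while read oldrev newrev refname; do
  [ "$newrev" = "$zero" ] && continue   # branch deletion: nothing to check
  if [ "$oldrev" = "$zero" ]; then range="$newrev"; else range="$oldrev..$newrev"; fi
  for commit in $(git rev-list "$range"); do
    # %G? prints G for a good GPG signature, N for no signature
    if [ "$(git log -1 --format=%G? "$commit")" != "G" ]; then
      echo "rejected: commit $commit is not signed" >&2
      exit 1
    fi
  done
done
EOF
chmod +x server.git/hooks/pre-receive

# A developer clone that pushes an unsigned commit:
git init -q client
cd client
git config user.email dev@example.com
git config user.name "Dev"
git config commit.gpgsign false
echo demo > file.txt
git add file.txt
git commit -qm "unsigned commit"

# The push fails because the commit carries no signature:
git push ../server.git HEAD:refs/heads/main 2>&1 | grep rejected || true
```

Hosted platforms expose the same idea as a policy toggle (e.g., requiring verified signatures on protected branches) rather than a hand-written hook.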
Read next: Top Vulnerability Management Tools