Supply Chain Attacks Are Default

Supply chain attacks are now a common entry point. Learn how to reduce risk, limit impact, and secure dependency workflows.

Key Takeaways:

  • Supply chain attacks are inevitable
  • Installing dependencies means running code
  • Control and visibility are non-negotiable

Supply Chain Attacks Are Becoming The Default Entry Point

Supply chain attacks are no longer edge cases. They are becoming the default entry point into modern software systems. Over the past month, the software ecosystem has experienced multiple high-impact supply chain attacks affecting widely used libraries such as LiteLLM (PyPI) and Axios (npm). These incidents did not expose a single bug or a one-off failure, but rather a structural weakness in how modern software is built, distributed, and trusted.

For engineering teams, the question has shifted. It is no longer how to avoid exposure entirely, but how to reduce the likelihood of compromise and, more importantly, how to limit the blast radius when it inevitably occurs.

Mechanisms of Compromise 

Across recent incidents, attackers followed a pattern that is both simple and highly effective. They first gained access to a maintainer account, which immediately granted them the ability to publish new versions to trusted package registries. From there, they released malicious updates that appeared entirely legitimate from the outside.

What made these attacks particularly effective was not just the distribution mechanism, but the execution model. In both ecosystems, the malicious code did not require explicit invocation. In Python, it leveraged interpreter startup behavior through mechanisms such as .pth files. In Node.js, it relied on lifecycle hooks like postinstall, which run automatically during dependency installation.
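The .pth behavior is easy to demonstrate without any malicious package. The snippet below is a minimal, self-contained sketch: it writes a harmless one-line .pth file and processes it with site.addsitedir, the same machinery the interpreter applies to site-packages at startup, to show that the line is executed as code rather than read as a path.

```python
# Minimal demonstration of why .pth files are an execution vector:
# site.addsitedir() processes *.pth files in a directory, and any
# line beginning with "import" is executed as Python code.
import os
import site
import tempfile

with tempfile.TemporaryDirectory() as d:
    pth_path = os.path.join(d, "innocent_looking.pth")
    # In a real attack, this line would run at every interpreter startup
    # once the file lands in site-packages. Here it just sets an env var.
    with open(pth_path, "w") as f:
        f.write("import os; os.environ['PTH_RAN'] = '1'\n")

    site.addsitedir(d)  # simulates startup-time processing of site-packages

print(os.environ.get("PTH_RAN"))  # the .pth line has already executed
```

No import of the package and no explicit script invocation was needed; processing the directory was enough.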

The result is a fundamental inversion of expectations. Engineers believe they are simply fetching code when they run pip install or npm install, assuming that any malicious behavior would require explicitly running a script. In reality, these commands can execute arbitrary code as part of the installation or environment initialization process. At the same time, trust in these packages is inherited from the identity of the publisher rather than verified at the artifact level. Once a maintainer account is compromised, that trust is transferred wholesale to the attacker.

Where Mental Models Break Down

The core issue is not a lack of security tools, but a mismatch between how engineers think dependency systems behave and how they actually behave under adversarial conditions. That mismatch shows up most clearly in three areas: dependency management, install-time execution, and visibility into dependency graphs.

Dependency Management Is Too Permissive

In many systems, dependencies are treated as a moving target. Version ranges are left open and patch updates are pulled automatically. This creates an environment where a single malicious release can be introduced into a system without any explicit decision or review. In practice, this means that a seemingly harmless patch version can introduce arbitrary code into a build pipeline, which then propagates through CI and into production. No individual step appears suspicious, yet the system as a whole has accepted and executed untrusted code.

The way to counter this is not to eliminate dependencies, but to make their evolution explicit and controlled. Exact version pinning, combined with hash verification, ensures that what is installed is exactly what was reviewed. Separating dependency upgrades from deployment pipelines introduces a natural checkpoint where changes can be inspected and validated. In more sensitive environments, introducing allowlists or internal mirrors further constrains what can be pulled into the system. 
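At the artifact level, hash verification is just a digest comparison. The sketch below mirrors what pip's --require-hashes mode does conceptually: an artifact is accepted only if its SHA-256 matches the digest recorded when that exact version was reviewed. The package name and digests here are illustrative assumptions.

```python
# Sketch of artifact-level verification: trust the bytes, not the publisher.
import hashlib

# package filename -> expected sha256 of the exact artifact that was reviewed
# (the entry below is illustrative, derived from placeholder bytes)
PINNED = {
    "example-lib-1.2.3.tar.gz": hashlib.sha256(b"reviewed artifact bytes").hexdigest(),
}

def verify_artifact(filename: str, content: bytes) -> bool:
    """Accept an artifact only if its digest matches the reviewed pin."""
    expected = PINNED.get(filename)
    if expected is None:
        return False  # unknown artifacts are rejected, not trusted
    return hashlib.sha256(content).hexdigest() == expected

# A tampered release fails even when the filename and version look right.
print(verify_artifact("example-lib-1.2.3.tar.gz", b"reviewed artifact bytes"))  # True
print(verify_artifact("example-lib-1.2.3.tar.gz", b"malicious update"))         # False
```

The key property is that trust attaches to the bytes that were reviewed, not to whoever currently controls the publishing account.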

Implicit Trust in Install-Time Execution

The more fundamental issue lies in how installation itself is treated. Modern ecosystems allow arbitrary code execution during install and import phases, yet most engineering workflows treat these steps as safe and routine. This creates a dangerous asymmetry. Developers think of installation as retrieval, while attackers design it as execution. 

Addressing this requires a shift in both mindset and practice. Installation should be treated as the execution of untrusted third-party code. In practical terms, this means running installs inside isolated, ephemeral environments rather than long-lived or privileged systems. It also means reducing or disabling automatic execution hooks where possible, auditing startup behaviors in Python environments, and constraining network access during installation so that unexpected outbound communication is blocked.
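One practical form of this in the Python ecosystem is restricting pip to hash-pinned, binary-only installs, which avoids executing setup.py from source distributions; the Node.js counterpart is running npm ci --ignore-scripts. The helper below is an illustrative sketch (the function name and policy are assumptions, though the flags are real pip options).

```python
# Hypothetical helper that assembles a hardened pip invocation.
# The policy bundled here is an assumption; the flags themselves are real.
def hardened_pip_cmd(requirements_file: str) -> list[str]:
    return [
        "pip", "install",
        "--require-hashes",        # every requirement must carry a pinned hash
        "--only-binary", ":all:",  # wheels only: never run setup.py from an sdist
        "--no-cache-dir",          # fetch fresh artifacts for reproducibility
        "-r", requirements_file,
    ]

print(hardened_pip_cmd("requirements.txt"))
```

Running this command inside an ephemeral container or throwaway virtual environment, with outbound network access limited to the package index, completes the picture: even if something does execute, it executes inside a boundary you chose.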

Lack of Visibility into Dependency Graphs

A final but equally important gap is visibility. Most teams have a reasonable understanding of their direct dependencies, but very limited insight into the full dependency graph. Transitive dependencies, newly introduced packages, and structural changes in the graph often go unnoticed.

In the Axios incident, the malicious payload was introduced through a dependency that existed solely to execute code during installation. Without explicit monitoring of dependency graph changes, such an addition is almost invisible. Improving this requires treating the dependency graph itself as a first-class artifact. Generating and maintaining a software bill of materials provides a baseline inventory of what is actually in use. Monitoring changes to lockfiles and dependency trees turns unexpected additions into detectable events rather than silent transitions. Software composition analysis tools can add an additional layer of protection by flagging known malicious or compromised packages.
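As a sketch of that monitoring step, the snippet below diffs two lockfile snapshots and surfaces added, removed, and changed packages. The simplified name==version format and the package names are illustrative assumptions; the same diff logic applies to a package-lock.json or a pip-compile output.

```python
# Sketch: treat the dependency graph as a monitored artifact. Diffing two
# lockfile snapshots turns a silently added transitive package into an event.
def parse_lock(text: str) -> dict[str, str]:
    """Parse a simplified lockfile of name==version lines (illustrative format)."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, version = line.partition("==")
            deps[name] = version
    return deps

def diff_lock(old: str, new: str) -> dict[str, list[str]]:
    """Report packages added, removed, or version-changed between snapshots."""
    before, after = parse_lock(old), parse_lock(new)
    return {
        "added":   sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "changed": sorted(n for n in before.keys() & after.keys()
                          if before[n] != after[n]),
    }

old = "axios==1.6.0\nfollow-redirects==1.15.0\n"
new = "axios==1.6.1\nfollow-redirects==1.15.0\nis-buffer-util==0.0.1\n"
print(diff_lock(old, new))
# the newly introduced transitive package (a made-up name here) shows up
# under "added" instead of slipping in silently
```

Wired into CI, a non-empty "added" list becomes a review gate rather than a post-incident discovery.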

A More Realistic Operating Model 

The underlying assumption that third-party code is trustworthy by default no longer holds. As long as installation is treated as a safe operation and dependency changes remain implicit, that assumption will continue to be exploited. Engineering teams that adapt will not do so by reducing their reliance on external code, but by changing how they control it, how they execute it, and how they observe it.

The systems that remain secure will be the ones designed with the expectation that any dependency can become hostile at any time and that installation itself is an act of execution, not retrieval. This requires a shift in engineering posture:

  • Treat every dependency as potentially hostile
  • Treat installation as execution
  • Design for containment

At Factored, this mindset is already the standard.

What This Means Going Forward

The fastest teams will operate with tighter control loops, reducing the latency between a compromise signal and a containment action.

Factored’s specialized AI, ML, and data engineers continuously evolve these practices:

  • Clear ownership of dependencies
  • Explicit change management
  • Continuous system visibility

These are no longer best practices. They are the baseline for building scalable, high-performing AI systems.

