Is Your Software Supply Chain More Vulnerable Than You Think? JFrog 2025 Insights
The JFrog 2025 Software Supply Chain Report reveals exploding complexity, a surge in malicious packages and AI models, secret leaks, misconfigurations, and tool overload. It urges DevOps teams to tighten governance, broaden scanning, and treat AI models as dependencies to mitigate hidden risks.
I recently read JFrog's 2025 Software Supply Chain Report, compiled from platform data, CVE trends, security research, and a survey of 1,400 developers and security professionals.
Here are the key takeaways for DevOps practitioners, especially those responsible for CI/CD and software delivery.
The software supply chain has really changed
The report opens with alarming numbers:
64% of enterprise development teams use seven or more programming languages, and 44% use ten or more.
A typical organization introduces 458 new packages per year (about 38 per month).
Docker Hub and Hugging Face continue to grow exponentially in hosted images and models.
npm remains a hotspot for malicious packages, while malicious models on Hugging Face have grown 6.5×.
Do you really know which packages and models you have pulled?
Risk is soaring, not just from vulnerabilities
In 2024, more than 33,000 CVEs were disclosed worldwide, a 27% increase over 2023, but this is only the tip of the iceberg.
The report highlights a worrisome trend: “vulnerability ≠ risk,” and risk is arriving from many directions:
Secret leakage: JFrog found 25,229 secret tokens in public repositories, of which 6,790 were still active.
XZ Utils backdoor: attackers spent years masquerading as trusted OSS maintainers before embedding a backdoor that targeted OpenSSH.
AI model poisoning: some Hugging Face models execute malicious code on load via Pickle injection.
Misconfiguration costs: Microsoft Power Pages leaked massive amounts of user data due to permission errors; a Volkswagen subsidiary exposed location data of 800,000 electric vehicles.
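The secret-leak numbers above are easy to reproduce in miniature. Here is a minimal, hypothetical sketch of pattern-based secret detection in Python; the two regexes are simplified stand-ins for the hundreds of rules that production scanners ship with:

```python
import re

# Hypothetical, simplified patterns -- real scanners maintain far larger rule sets.
SECRET_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_secrets(text):
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

# Fake token built at runtime so this file itself never contains a "secret".
sample = "token = ghp_" + "a" * 36
print(find_secrets(sample))
```

Pattern matching alone produces false positives; serious scanners typically also verify whether a matched token is still live, which is why "active" tokens are reported separately from raw matches.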
These issues are often unrelated to “does this package have a CVE?”; many teams simply don’t realize that problems can arise elsewhere.
AI explosion, risk upgrades in sync
This year Hugging Face added over one million models and datasets, while malicious models grew 6.5×. Organizations are increasingly incorporating AI models into their workflows, but:
37% of enterprises rely on manual whitelists to filter model usage.
Many AI models use the Pickle format, which executes code on load, creating a hidden attack surface.
Open‑source models with trojaned behavior (“wolf‑in‑sheep’s‑clothing”) appear on platforms.
Models are becoming the new “dependency” and should be included in supply‑chain governance and security scanning.
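To see why Pickle-format models are dangerous, here is a minimal, self-contained Python demonstration (the payload is harmless by design): unpickling can invoke any callable an attacker chooses, which is exactly the mechanism malicious Hugging Face models exploit.

```python
import pickle

executed = []  # records whether loading ran our code

def side_effect(msg):
    executed.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # On unpickling, pickle CALLS the returned callable with these args.
        # A real attacker would return something like (os.system, ("curl ... | sh",)).
        return (side_effect, ("code ran at load time",))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)  # merely "loading" the blob executes side_effect
print(executed)  # ['code ran at load time']
```

Safer alternatives include weights-only formats such as safetensors, or loading untrusted pickles only inside a sandbox.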
Security practice reality: more tools, more anxiety
The report also observes a paradox:
More security tools make it harder to see the truth.
73% of enterprises use seven or more security tools, yet only 43% scan both code and binaries.
71% allow developers to pull packages directly from the public internet.
52% both allow public pulls and rely solely on automated rules to track package origins.
CVSS scores are increasingly inflated; JFrog found that 88% of Critical CVEs are not actually applicable.
As a frontline DevOps engineer, I see more tools but no real peace of mind.
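The “88% not applicable” finding boils down to reachability: a Critical CVE in a package you ship is only a real risk if the vulnerable code can actually execute. A toy Python sketch of that filtering idea (the CVE IDs, package names, and symbols below are all made up):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    package: str
    vulnerable_symbol: str
    cvss: float

def applicable(finding, used_packages, called_symbols):
    """A finding matters only if both the package and its vulnerable symbol are in use."""
    return (finding.package in used_packages
            and finding.vulnerable_symbol in called_symbols)

findings = [
    Finding("CVE-2025-0001", "libfoo", "foo_parse", 9.8),
    Finding("CVE-2025-0002", "libbar", "bar_init", 9.1),
]
used = {"libfoo", "libbar"}
called = {"foo_parse"}  # libbar is installed, but bar_init is never called

real_risk = [f.cve for f in findings if applicable(f, used, called)]
print(real_risk)  # ['CVE-2025-0001'] -- one Critical CVE filtered out as not applicable
```

Real applicability analysis is far harder (call graphs, configuration, runtime context), but the principle is the same: severity without reachability overstates risk.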
What can we do?
The report does not offer a “silver bullet,” but it provides practical advice, which I expand upon:
Govern from the source: stop letting developers pull packages freely from the internet; use internal proxy repositories such as Artifactory, Nexus, or Harbor.
Scan beyond code: include binary scanning, container image scanning, and SBOM generation in CI pipelines.
Introduce applicability assessment: don’t rely solely on CVE scores; evaluate whether a vulnerability truly affects your environment.
Treat AI models as dependencies: build model whitelists, scan model security, and even produce model SBOMs.
Limit new “anonymous” packages: avoid adding a library just because it’s trending; the XZ backdoor incident is a stark reminder.
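As a concrete illustration of governing from the source, here is a hypothetical CI gate in Python that fails the build when a requirements file names a package outside an approved list. The APPROVED set and the parsing rules are simplified assumptions; a real setup would enforce this at the proxy repository rather than in a script:

```python
import re

# Hypothetical organization-approved dependency list.
APPROVED = {"requests", "numpy", "pydantic"}

def parse_requirements(lines):
    """Extract bare, lowercased package names from requirements.txt-style lines."""
    names = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        match = re.match(r"[A-Za-z0-9_.-]+", line)
        if match:
            names.append(match.group().lower())
    return names

def check(lines):
    """Exit nonzero (failing the CI job) if any dependency is unapproved."""
    unapproved = [n for n in parse_requirements(lines) if n not in APPROVED]
    if unapproved:
        raise SystemExit(f"Unapproved packages: {unapproved}")

check(["requests==2.32.0", "numpy>=1.26  # pinned"])
print("all dependencies approved")
```

The same gate pattern extends naturally to model whitelists: swap the requirements parser for a list of model identifiers pulled in by your pipelines.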
In closing
The 2025 software supply chain is larger, faster, more complex, and more fragile than ever.
Security is no longer about “whether there is risk,” but about “where the risk hides.” A single careless PyPI package or an AI model downloaded by an engineer can become the entry point.
Are your “dependencies” trustworthy? Does your “scan” really catch problems? Is your “policy” keeping pace with change?
I hope this summary sparks discussion and helps you improve your own DevOps, AI platform, or everyday build environment.
🧡 Follow for better technical practices.
DevOps Engineer
DevOps engineer, Pythonista and FOSS contributor. Created cpp-linter, commit-check, etc.; contributed to PyPA.