🎉 HiddenLayer has been named a Gartner® Cool Vendor in AI Security! To us, this recognition underscores the growing importance of securing AI systems in an era where they are increasingly woven into the fabric of industries worldwide. We are proud of this recognition, as Gartner evaluates vendors based on their innovation, transformative potential, and real-world impact. We believe being named a Gartner Cool Vendor reaffirms that our approach and solutions are on the right path. A huge thank you to our team, partners, and clients for supporting this journey. We’re committed to continuing our mission of making AI security a priority. Download the report here: https://lnkd.in/gMujJU5b Read our press release here: https://lnkd.in/grkFGaFZ #AIsafety #AIsecurity #GartnerCoolVendor #Cybersecurity #AIresilience #Innovation #HiddenLayer #Gartner #AITRiSM
HiddenLayer
Computer and Network Security
Austin, TX 10,594 followers
The Ultimate Security for AI Platform
About us
HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises' AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12 (Microsoft's Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
- Website: https://hiddenlayer.com/
- Industry: Computer and Network Security
- Company size: 51-200 employees
- Headquarters: Austin, TX
- Type: Privately Held
- Founded: 2022
- Specialties: Security for AI, Cyber Security, Gen AI Security, Adversarial ML Training, AI Detection & Response, Prompt Injection Security, PII Leakage Protection, Model Tampering Protection, Data Poisoning Security, AI Model Scanning, AI Threat Research, and AI Red Teaming
Locations
- Primary: Austin, TX, US
Employees at HiddenLayer
- Tom Whiteaker: Co-Founder and Partner, IBM Ventures Investments
- Charlie Kawasaki, CISSP: Innovator in AI, Cybersecurity and Networking
- Ozzie Mendoza: Securing AI/GenAI Models | Protecting Revenue & Profit Streams from Emerging Cyber Threats
- Hiep Dang: Vice President of Strategic Technical Alliances at HiddenLayer
Updates
- 🎄 Happy Holidays from HiddenLayer 🎁 As the year comes to an end, we want to take a moment to thank our incredible team, customers, and partners who made 2024 a year of growth, learning, and impact. Your support and collaboration drive our mission to secure the future of AI, and we couldn’t do it without you. From all of us at HiddenLayer, we hope you have a wonderful holiday season and a happy New Year! We can't wait to see what 2025 has in store.
- 📖 Are You Securing AI or Using AI for Security? The Difference Matters When it comes to AI and cybersecurity, two terms are often confused but address very different challenges: Security for AI and AI Security. Security for AI focuses on protecting AI systems (models, training data, and deployment environments) from threats like adversarial attacks, data poisoning, and model theft. AI Security, on the other hand, leverages AI technologies to enhance traditional cybersecurity: detecting threats, monitoring anomalies, and strengthening defenses across IT environments. While both are critical, many organizations focus solely on AI Security, overlooking the unique vulnerabilities AI systems face. This gap can lead to serious consequences, like intellectual property theft or compromised AI outcomes. Understanding this distinction is the first step toward a safer, more resilient future. 👉 Read our blog to learn more: https://lnkd.in/gB_k2EkK #Cybersecurity #AI #AIThreats #SecurityForAI #AISecurity #AIRisks
- 💡 Navigating OWASP's 2025 Top Risks for LLMs As large language models (LLMs) continue to reshape industries and drive innovation, understanding their risks is critical to deploying them responsibly. The OWASP LLM Top 10 outlines the most significant security and privacy risks these models face, providing a roadmap for both developers and security teams to mitigate potential vulnerabilities. From data poisoning to prompt injection and model theft, this framework highlights the unique challenges LLMs introduce into the cybersecurity landscape. These risks underscore the importance of embedding Security for AI measures into every stage of development and deployment. If you work with or rely on LLMs, understanding these risks isn’t optional—it’s essential. You can check out the OWASP LLM Top 10 here 👇 https://lnkd.in/eFT4DunN This post is part of our Between the Layer series. Tune in weekly as we share industry insight and thought leadership topics on #Security4AI. #AI #MachineLearning #AISecurity #LLM #GenAI #GenAIApps #AIRisks
- 📩 Stay Ahead in Security for AI with HiddenLayer We recently launched our Innovation Hub, the go-to destination for cutting-edge research, expert guides, and education on all things Security for AI. Want to stay updated? Signing up for our newsletter is simple and ensures you’ll never miss an update from the forefront of innovation. Subscribe and join the conversation shaping the future of secure AI. 🔗 https://lnkd.in/gBgaCxJy #AI #AIEducation #SecurityForAI #AISecurity #GenAI #SecureAI #AIThreats #AIResearch
- 🚨 AI Supply Chain Attack Alert HiddenLayer researchers investigated the recent AI supply chain attack targeting Ultralytics, the developers of YOLO, a widely used vision model for tasks like object recognition, image segmentation, and classification. The attack compromised the Ultralytics Python package on PyPI, allowing malicious actors to deploy a coin miner on victims' machines. Learn how the Ultralytics Python package supply chain attack unfolded, its impact, and critical remediation steps to secure your systems against similar threats 👇 https://lnkd.in/gS6R3Y6x #Cybersecurity #AI #SupplyChainSecurity #MachineLearning #SupplyChainAttacks #AIVulnerability
- 💡 In Case You Missed It: Automated Red Teaming for AI Explained Webinar Last week we hosted our webinar “Automated Red Teaming for AI Explained.” In this discussion, we explored the value of automated and manual red teaming, answered key questions about implementation, and introduced HiddenLayer’s Automated Red Teaming for AI, an extension to our AISec Platform geared toward safeguarding GenAI systems and staying ahead of emerging threats. Missed the webinar? Or just want to relive it? You can watch the webinar recording below 👇 https://lnkd.in/g7jGxW3J #RedTeaming #AI #LLM #ML #AIRedTeaming #PenTesting #GenAI
- 🎉 The HiddenLayer family is growing 🎉 We're excited to have these talented individuals join us as we work diligently to #protectyouradvantage. Please give a warm welcome to Earvin Polaco, Elizabeth Vary, Emma Schwarz-Dovey, Laura Wilkins-Henrique, Ozzie Mendoza, and Simon Carter!
- 🚨 New Blog: Insights from AI System Reconnaissance 🚨 Honeypots are not just decoys—they’re valuable tools for uncovering attacker behavior. At HiddenLayer, our SAI team recently deployed honeypots mimicking exposed MLOps platforms to observe real-world threat activity. Our latest blog dives into findings from a honeypot designed to mimic an exposed ClearML server. What we discovered shows the growing interest of threat actors in machine learning infrastructure. It’s crucial to remember that these findings emphasize the dangers of misconfigured platforms, not the ClearML platform itself. We commend ClearML for providing detailed security guidance and encourage its proper implementation to minimize vulnerabilities. 💡 AI systems are critical assets—let’s secure them like they are. Read the full blog here: https://lnkd.in/gdu3e4_p #AICybersecurity #MLOps #AIThreatIntel #Cybersecurity #AI #AISecurity #SecurityForAI #ClearML
- ⏰ TODAY: Automated Red Teaming for AI Explained Join us at 1 pm CST for a holistic webinar on red teaming for AI. Our expert panelists will cover everything you need to know about red teaming your AI systems. This is a webinar you don’t want to miss. There's still time. Register now 👇 https://lnkd.in/gXzAzptg #AI #RedTeaming #AIRedTeaming #AIThreat #PenTesting