🛡 Protecting Privacy in the Age of AI: Introducing LLM-Guard Are you working with large language models? Concerned about accidentally exposing sensitive data? Check out LLM-Guard, an open-source library that helps catch and redact personally identifiable information (PII) in LLM inputs and outputs! Here's a quick 4-step guide to using LLM-Guard: 1. Easy Installation: Get started with a simple pip install llm-guard 2. Import Key Components: Bring in the Vault, Anonymize scanner, and model of your choice to detect PII 3. Set Up Your Prompt: Define the text you want to analyze, potentially containing sensitive info 4. Scan and Protect: Use the Anonymize scanner to detect and redact PII, then retrieve the sanitized data from the Vault. You can remove the identified PII from the prompt by simply passing in the prompt to the 𝘴𝘤𝘢𝘯() 𝘮𝘦𝘵𝘩𝘰𝘥. 👇 𝘴𝘢𝘯𝘪𝘵𝘪𝘻𝘦𝘥_𝘱𝘳𝘰𝘮𝘱𝘵 , 𝘪𝘴_𝘷𝘢𝘭𝘪𝘥 , 𝘳𝘪𝘴𝘬_𝘴𝘤𝘰𝘳𝘦 = 𝘴𝘤𝘢𝘯𝘯𝘦𝘳.𝘴𝘤𝘢𝘯(𝘱𝘳𝘰𝘮𝘱𝘵) The result? Peace of mind knowing that sensitive details like SSNs, phone numbers, and credit card info are automatically caught and protected! 🔒 #LLM #AI #Security #LLMOps #AIEthics
Shamal De Silva’s Post
More Relevant Posts
-
The war on drugs is a great example of how #AI can be used for good. Instead of chasing down individual leads, which can take a lot of time and resources — AI tools can help investigators see the bigger picture hiding inside datasets. Check out my article in Forensic Magazine to see what other vulnerabilities the technology can uncover. ➡ https://lnkd.in/dmMkqefy
To view or add a comment, sign in
-
In OWASP's 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps, Prompt Injection remains at the top of the list. Suggested Prompt Injection mitigations indicate addressing Prompt Injections deterministically in AI systems will remain challenging. Not a surprise. AI Security Engineers need to continue to Think Backwards from the system Threat Model. Understand Key Failure Indicators (#KFI). Protect Confidentiality, Integrity, Availability and Intent (#CIAI). Understand pros and cons of guardrail options . Use well-architected AI guardrails that provide effectiveness, adaptability, and agility. Continue to thoroughly evaluate system behavior with adversarial testing. - Enforce Trust Boundaries (https://lnkd.in/giq3NDfd) - Protect AI System Intent (https://lnkd.in/g3DYsD4Z) - Use Well-architected Guardrails (https://lnkd.in/gDRis8R7) #OWASP, #AISecurity, #security
To view or add a comment, sign in
-
🛤 What are guardrails in AI, and why do they matter? Guardrails are the protections that ensure AI systems operate safely and ethically: 🔐 Prevent unauthorized access to sensitive data 🛡 Safeguard against malicious intent 🎯 Ensure AI delivers accurate insights based only on authorized data Xinying Yu, Evisort’s Head of Data Science and AI, shares how her team’s implementation of strict guardrails played a pivotal role in helping Evisort earn the ISO/IEC 42001 Responsible AI certification. Learn more: 🔗 https://lnkd.in/ga29V6Nw Schellman | Evisort #AI #Guardrails #DataSecurity #ISO42001
To view or add a comment, sign in
-
Great list from Rob. I would add the new MITRE ATLAS framework. ATLAS stands for adversarial threat landscape for artificial intelligence systems. it has the same rigger and thought leadership as ATT&CK! https://atlas.mitre.org/ #GenAI #AILM #CyberSecurity #Resiliency
Leader in AI security | Pioneer and veteran in AI | SIG Senior principal expert | AI Act security standard co-editor | ISO/IEC, OWASP, ENISA | Results: ISO/IEC 5338, owaspai.org, opencre.org and the AI readiness guide
Explore the forefront of AI security with OWASP® Foundation's latest open-source contributions and community-driven insights: ✏ New webinars and podcasts, such as the recent re:invent security discussion at https://lnkd.in/e4GTuGaZ For a list see owaspai.org/media/ ✏ The brilliant contribution to the OWASP AI Exchange by Matthew Adams to help visualise the complex relations between AI threats and controls: https://lnkd.in/eRR7wKp4 with video https://lnkd.in/eSa5PzwU - based on the AI Exchange Navigator at https://lnkd.in/eXdXWfmd ✏ The contribution by Disesdi Susanna Cox to the Federated learning section at the AI Exchange: https://lnkd.in/emidWvhp ✏ Dennis Charolle contributing an overview of legal considerations for different geos: https://lnkd.in/etjjcgTN ✏ The recent report by The Alan Turing Institute highlights the important role that OWASP is playing, including creating liaison relations, like John Sotiropoulos is doing with the AI Safety Institute consortium, and yours truly with CEN and CENELEC for the AI Act. Report: https://lnkd.in/eW3YFVdj ✏ Yours truly and the CEN/CENELEC JTC21/WG5 put together 58 pages as copyright-free input from the OWASP AI Exchange to the ISO/IEC 27090 standard on AI security. ✏ The work of Sandy Dunn on the LLM Governance Checklist at https://lnkd.in/e-849PNx ✏ The LLM top 10 is continuously doing great awareness work - see their page at OWASP Top 10 For Large Language Model Applications ✏ We anticipate new books coming out from John Sotiropoulos and Steve Wilson ✏ During the OWASP Global Application conference in Lisbon in June, Software Improvement Group organizes a full-day training about the AI Exchange: https://lnkd.in/eaFtG_Mu #ai #aisecurity
AI Security Controls Graph
https://www.youtube.com/
To view or add a comment, sign in
-
The OWASP AI Exchange is doing essential work in bringing together experts from various fields to address the complex challenges surrounding AI security. I'm very happy to have made a significant contribution to the project by helping visualise the intricate relationships between AI threats and controls, as mentioned in Rob van der Veer's post. I look forward to continuing to collaborate with the OWASP AI Exchange team, and the broader community to advance AI security. If you're interested in learning more about the project or getting involved, I recommend checking out the resources mentioned in Rob's post. #AISecurity #OWASP #OpenSource
Leader in AI security | Pioneer and veteran in AI | SIG Senior principal expert | AI Act security standard co-editor | ISO/IEC, OWASP, ENISA | Results: ISO/IEC 5338, owaspai.org, opencre.org and the AI readiness guide
Explore the forefront of AI security with OWASP® Foundation's latest open-source contributions and community-driven insights: ✏ New webinars and podcasts, such as the recent re:invent security discussion at https://lnkd.in/e4GTuGaZ For a list see owaspai.org/media/ ✏ The brilliant contribution to the OWASP AI Exchange by Matthew Adams to help visualise the complex relations between AI threats and controls: https://lnkd.in/eRR7wKp4 with video https://lnkd.in/eSa5PzwU - based on the AI Exchange Navigator at https://lnkd.in/eXdXWfmd ✏ The contribution by Disesdi Susanna Cox to the Federated learning section at the AI Exchange: https://lnkd.in/emidWvhp ✏ Dennis Charolle contributing an overview of legal considerations for different geos: https://lnkd.in/etjjcgTN ✏ The recent report by The Alan Turing Institute highlights the important role that OWASP is playing, including creating liaison relations, like John Sotiropoulos is doing with the AI Safety Institute consortium, and yours truly with CEN and CENELEC for the AI Act. Report: https://lnkd.in/eW3YFVdj ✏ Yours truly and the CEN/CENELEC JTC21/WG5 put together 58 pages as copyright-free input from the OWASP AI Exchange to the ISO/IEC 27090 standard on AI security. ✏ The work of Sandy Dunn on the LLM Governance Checklist at https://lnkd.in/e-849PNx ✏ The LLM top 10 is continuously doing great awareness work - see their page at OWASP Top 10 For Large Language Model Applications ✏ We anticipate new books coming out from John Sotiropoulos and Steve Wilson ✏ During the OWASP Global Application conference in Lisbon in June, Software Improvement Group organizes a full-day training about the AI Exchange: https://lnkd.in/eaFtG_Mu #ai #aisecurity
AI Security Controls Graph
https://www.youtube.com/
To view or add a comment, sign in
-
Protect AI Acquires SydeLabs to Red Team Large Language Models https://lnkd.in/dfaiHRst #AISecurity #AITech365 #automatedattacksimulation #GenAI #LLMsecurity #news #ProtectAI #Security #SydeLabs
To view or add a comment, sign in
-
As new technologies are adopted large-scale new #threats try to exploit them. In the era of AI, a deep dive into #AI #Safety and #Security is required to avoid any critical dangers related to the use of such technologies, either for enterprises or final users.
Leader in AI security | Pioneer and veteran in AI | SIG Senior principal expert | AI Act security standard co-editor | ISO/IEC, OWASP, ENISA | Results: ISO/IEC 5338, owaspai.org, opencre.org and the AI readiness guide
Explore the forefront of AI security with OWASP® Foundation's latest open-source contributions and community-driven insights: ✏ New webinars and podcasts, such as the recent re:invent security discussion at https://lnkd.in/e4GTuGaZ For a list see owaspai.org/media/ ✏ The brilliant contribution to the OWASP AI Exchange by Matthew Adams to help visualise the complex relations between AI threats and controls: https://lnkd.in/eRR7wKp4 with video https://lnkd.in/eSa5PzwU - based on the AI Exchange Navigator at https://lnkd.in/eXdXWfmd ✏ The contribution by Disesdi Susanna Cox to the Federated learning section at the AI Exchange: https://lnkd.in/emidWvhp ✏ Dennis Charolle contributing an overview of legal considerations for different geos: https://lnkd.in/etjjcgTN ✏ The recent report by The Alan Turing Institute highlights the important role that OWASP is playing, including creating liaison relations, like John Sotiropoulos is doing with the AI Safety Institute consortium, and yours truly with CEN and CENELEC for the AI Act. Report: https://lnkd.in/eW3YFVdj ✏ Yours truly and the CEN/CENELEC JTC21/WG5 put together 58 pages as copyright-free input from the OWASP AI Exchange to the ISO/IEC 27090 standard on AI security. ✏ The work of Sandy Dunn on the LLM Governance Checklist at https://lnkd.in/e-849PNx ✏ The LLM top 10 is continuously doing great awareness work - see their page at OWASP Top 10 For Large Language Model Applications ✏ We anticipate new books coming out from John Sotiropoulos and Steve Wilson ✏ During the OWASP Global Application conference in Lisbon in June, Software Improvement Group organizes a full-day training about the AI Exchange: https://lnkd.in/eaFtG_Mu #ai #aisecurity
AI Security Controls Graph
https://www.youtube.com/
To view or add a comment, sign in
-
This is your last chance to register for our #new course TOMORROW, September 17th from 1-3 pm ET exploring the #BidenAdministration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “AI EO”). In this course you will: ➡️ Receive updates and insights on implementation efforts across the government. ➡️ Discuss emerging regulation and policy for the government’s acquisition and use of AI systems, and for protecting against new risks from AI development and AI-related security threats and threats to privacy and civil rights. & the Learning Objectives Include: 1️⃣ The government’s implementation of the AI EO’s mandates. 2️⃣ Insights into emerging and upcoming AI policy and regulatory actions. 3️⃣ Understand the AI EO’s approaches to protecting privacy, civil rights, and consumer protections in the age of #AI. With this session, you can also receive CLE credits in select states! Sign up now to join us for this robust and exciting session led by Michelle Coleman and Eric Ransom from Crowell & Moring. https://lnkd.in/eVbQ7HaF
Unpack the Executive Order on AI: Federal Actions and Future Directions Event
https://fpf.org
To view or add a comment, sign in
-
Is AI impacting security reconnaissance and bug bounties? Will AI be used by malicious actors in security research? from Ben Sadeghipour in Snyk https://lnkd.in/gZZiYDxi #ai #recon #bugbountry #security
How AI Impacts Reconnaissance and Bug Bounties
https://www.youtube.com/
To view or add a comment, sign in
-
You probably have a company that produces loads of data everyday. You've probly heard about this thing called AI but you have no clue how to leverage the power of AI with your data in a safe an secure way. Join our workshop on Nov 16 2pm EST where we will show you how you can start to leverage AI to take your business to a whole other level. RSVP on Meetup: https://lnkd.in/dMfcYpYX . #dataprivacy #artificialintelligence
Are you concerned about data privacy when using AI models? Join the Jamaica Artificial Intelligence Association (JAIA) for an informative virtual workshop on how businesses can leverage powerful AI tools, like GPT4-All and Retrieval-Augmented Generation (RAG) while maintaining data privacy and security. 📅 Date: November 16, 2024 ⏰ Time: 2-4 pm 🤝RSVP on Meetup: https://lnkd.in/eP7-Mwip
To view or add a comment, sign in
ML Engineer at SpatialChat | Researcher
5moAwesome 👏