As a sponsor of the GFMI-hosted event in New York today and tomorrow, TrojAI's co-founder and CTO, James Stewart, Ph.D., introduced key concepts associated with "Understanding Model Risk Amidst the Rise of Adversarial AI." His presentation covered:
- Understanding new challenges posed by adversarial AI
- Exploring a dual-strategy framework
- Penetration testing to uncover vulnerabilities
- Targeted monitoring of model inputs (see the sketch below)
- Lessons learned from enterprise deployments
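To make the "targeted monitoring of model inputs" point concrete, here is a minimal, hypothetical sketch of one common form it can take: flagging incoming feature vectors that fall far outside the training distribution. The class name, threshold, and z-score heuristic are illustrative assumptions and are not taken from the presentation.

```python
# Hypothetical sketch: flag model inputs that drift far from the training
# distribution, one simple form of targeted input monitoring.
# The threshold and the z-score heuristic are illustrative only.
import numpy as np

class InputMonitor:
    def __init__(self, training_inputs: np.ndarray, z_threshold: float = 4.0):
        # Per-feature statistics estimated from the data the model was trained on.
        self.mean = training_inputs.mean(axis=0)
        self.std = training_inputs.std(axis=0) + 1e-9
        self.z_threshold = z_threshold

    def flag(self, x: np.ndarray) -> bool:
        # Flag an input if any feature is an extreme outlier relative to the
        # training data, a common precursor signal for adversarial or malformed inputs.
        z = np.abs((x - self.mean) / self.std)
        return bool(np.any(z > self.z_threshold))

# Usage (hypothetical): monitor = InputMonitor(X_train); monitor.flag(incoming_vector)
```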
-
Don't miss your chance to dive into the world of generative AI in security tools! Our on-demand webinar with Allie Mellen, principal analyst at Forrester, is available for a limited time. Discover how generative AI is transforming security tools, what's working, and what's ahead. 👉 Watch the replay now: Webinar: Generative AI In Security Tools — Innovations & Impact.
-
Risks and controls for AI and ML systems by Cybernetica! Share and follow Threat Intelligence Lab!
-
The first algorithmic bias bounty program from Humane Intelligence started on 15 May. Team participation is possible, and all skill levels are welcome to join. The program will use the data collected in their Red Teaming challenge. Who is joining?
US Science Envoy, Artificial Intelligence | CEO, Humane Intelligence | Investor | Board Member | Startup founder | TIME 100 AI | ex-Twitter, ex-Accenture
There's a lot of excitement this week about GenAI evaluations and safety launches by government groups and regulators. Just a reminder that while corporate and government evaluations are necessary, equal representation of technical civil society and independent third parties is critical to ensuring that these tests are designed with a wider range of voices in mind and in the public interest. Reposting HumaneIntelligence's Generative AI Red teaming report, and linking to our algorithmic bias bounty, which is enabling community-driven evaluations and best practices for identifying and mitigating harm in AI models. Our report channeled the input of 2,200 people to evaluate model performance at scale for bias and discrimination, misinformation/misdirection, cybersecurity, and factual errors and hallucinations. Our bounty program builds on that dataset to develop a proactive method of identifying whether a prompt will lead to a malicious outcome. Info on the bounty program in comments.
-
🔊 Join Ali Nicholl at IOTICS as he talks with Al Bowman about building and deploying responsible AI in Defence and National Security 🎧 This episode is for you if you would like to learn more about the application of AI and machine learning in high-stakes settings, where the data is difficult, the operating environments are demanding, and the consequences of getting it wrong are incalculable. Listen now ➡️ https://lnkd.in/eud7mx9m ⬅️
-
"Description I speak with Ismael Valenzuela, VP of Threat Research and Intelligence at Blackberry Cylance. We discuss: Modern Threat Intelligence The shifting attention of attackers GenAI attacks How defenders are adapting to AI attacks And many other topics"
A Conversation With Ismael Valenzuela About AI and Threat Intelligence
omny.fm
-
In light of this week's Seoul AI summit, the second global AI summit, I encourage folks to check out the Generative AI Red teaming report that we published at HumaneIntelligence. This report channels the input of 2,200 people to evaluate model performance at scale for bias and discrimination, misinformation/misdirection, cybersecurity, and factual errors and hallucinations. It also serves as the basis for our current algorithmic bias bounty challenge, which involves creating a probability estimation model that determines whether the prompt provided to a language model will elicit an outcome that demonstrates factuality, bias, or misdirection.
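As a rough illustration of what such a probability estimation model could look like, here is a minimal sketch that scores prompts with a TF-IDF and logistic-regression pipeline. The toy prompts, labels, and model choice are assumptions made for illustration; they are not the challenge's required approach or dataset.

```python
# Hypothetical sketch of a prompt-risk classifier: estimate the probability
# that a prompt will elicit a biased or misleading output.
# The toy data and the TF-IDF + logistic-regression pipeline are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training prompts, labeled 1 if they previously elicited a harmful
# outcome (e.g. in red-teaming transcripts) and 0 otherwise.
prompts = [
    "Summarize this article in two sentences.",
    "Explain why people from group X are less intelligent.",
    "What is the capital of France?",
    "Write a convincing but false news story about the election.",
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(prompts, labels)

# predict_proba returns [P(benign), P(harmful)] for each prompt.
risk = model.predict_proba(["Describe why one nationality is untrustworthy."])[0][1]
print(f"Estimated probability of a harmful outcome: {risk:.2f}")
```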
-
Something I've wanted to share for a few days now... since the glorious lift-off of open-source LLMs there has been (and still is) an entire rainbow of experimentation, right in front of our eyes. Model merging is among the most fascinating. The idea is simple: you create new models by merging layers of existing ones. You can average layers, you can decide which ones to merge, and you can even end up with more layers than you started with. This is an increase in experimental complexity. What about hacking your way through that jungle of complexity with a tool called evolutionary algorithms? That is what Sakana is about. What a fine idea! https://lnkd.in/ev5f8P_k
Sakana AI
sakana.ai
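Here is a minimal sketch of the layer-averaging idea described above, assuming two checkpoints that share an identical architecture. The function name, checkpoint paths, and the fixed interpolation coefficient are illustrative; an evolutionary approach, as in Sakana's work, would instead search over the per-layer mixing coefficients rather than fixing them by hand.

```python
# Minimal sketch of weight-space model merging by per-parameter interpolation,
# assuming model_a and model_b share the exact same architecture.
import torch

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    # Linearly interpolate every matching parameter tensor: alpha * A + (1 - alpha) * B.
    merged = {}
    for name, tensor_a in sd_a.items():
        tensor_b = sd_b[name]
        merged[name] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    return merged

# Usage (hypothetical checkpoint files):
# sd_a = torch.load("model_a.pt")
# sd_b = torch.load("model_b.pt")
# merged_model.load_state_dict(merge_state_dicts(sd_a, sd_b, alpha=0.6))
```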
-
Identifying the use case for introducing AI into the loop is one of the hardest steps for deploying artificial intelligence. Once you've understood where you have live systems and what the AI can do to help, then it's just a case of getting a bunch of clever people to build and integrate the models. I've been reflecting on what this means for civil infrastructure, where the availability of live systems and streaming data is vanishingly small. Much of the way we manage and deliver civil assets uses retrospective data capture methods with human subjectivity (or, if I'm being polite, engineering judgement) being the way we distill complex real world conditions into information systems, which, at best, are records management tools and rarely support real-time activity. I'm still working through what this means for deploying AI-in-the-loop at scale in infrastructure. My team are finding that we need to integrate digital maturity into our offering alongside the AI models. Excel as the engineer's Swiss army knife has become the enemy of progress. I'm working on a framework for identifying use cases and assessing AI readiness. Please share any existing resources to help me avoid reinventing the wheel.
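As a purely hypothetical sketch of what an AI-readiness rubric might look like in code, here is a small scoring dataclass. The dimensions, 0-5 scale, and equal weights are invented for illustration and are not taken from the author's framework or any published one.

```python
# Hypothetical sketch of a simple AI-readiness rubric for an infrastructure asset
# or workflow. Dimensions, scale, and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    live_data_availability: int  # 0 = retrospective records only, 5 = streaming telemetry
    data_quality: int            # 0 = subjective/free-text capture, 5 = structured and validated
    system_integration: int      # 0 = spreadsheets, 5 = integrated live systems with APIs
    decision_latency_fit: int    # 0 = AI output could not be acted on in time, 5 = real time

    def score(self) -> float:
        # Equal weights for illustration; a real framework would calibrate these.
        parts = (self.live_data_availability, self.data_quality,
                 self.system_integration, self.decision_latency_fit)
        return sum(parts) / (5 * len(parts))

# Example: a bridge-inspection workflow driven by periodic manual surveys.
print(ReadinessAssessment(1, 2, 1, 2).score())  # -> 0.3
```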
-
Generative AI: opportunities and threats. Join us @ the Credit Risk Conference of ICAP CRIF in Athens on Wednesday, November the 6th, to learn more about it…
-
Join us for a live webinar on March 18 as our threat experts discuss the leading trends in external defense from the past year 📈 You'll walk away with an understanding of new attack vectors, such as generative AI, and how to stay one step ahead of threat actors. Register here: https://lnkd.in/euaWuYJM Lorri Janssen-Anessi, Rom Eliahou, and George Aquila