SplxAI has raised $2m in a pre-seed round led by Inovo.vc, with contributions from Runtime Ventures and South Central Ventures. The company, led by CEO Kristian Kamber, aims to accelerate product development for their platform that secures generative AI systems by identifying potential vulnerabilities, including automated pentesting for complex AI attack scenarios. Learn more in this article from FinTech Global. https://lnkd.in/g-TJgURe #AI #ArtificalIntelligence
AI VCLink’s Post
More Relevant Posts
-
Investing in AI means investing in security and trust 🔐 With all the outstanding potential that #ArtificialIntelligence offers for companies and society, as a #SecurityTech leader, we are always looking to proactively mitigate potential risks. This is why G+D Ventures recently invested in the promising AI startup Blockbrain. Blockbrain helps companies make use of their internal knowledge while safeguarding it to the fullest extent. The platform optimizes AI-supported processes by intelligently capturing, structuring, and harnessing existing and new knowledge. The no-code solution enables companies to integrate tailor-made automation solutions into their day-to-day work without any programming knowledge. By partnering with early-stage TrustTech startups that use generative AI technologies to tackle the most pressing challenges of cybersecurity and data protection, we are strengthening our commitment to promoting and protecting trust in digital ecosystems. We are excited to work with Blockbrain on their mission to revolutionize knowledge management through the use of cutting-edge technology and AI. 🤝 #ArtificialIntelligence #SecurityTechnology #KnowledgeManagement #TrustTech Michael Tagscherer
To view or add a comment, sign in
-
We proudly led a $2M pre-seed round into SplxAI 🇭🇷, a game-changer in GenAI security. Kristian and Ante, aim to tackle critical security issues in Conversational AI. I've never seen such strong support from AI founders, data scientists, and cybersecurity pros. The problem is clear, the need is massive, and the timing is now. SplxAI's platform is truly the "shovel" in the enterprise AI gold rush. It blew me away when they showed how they're able to crack the chatbot of a well-funded US company. They were able to extract sensitive data within minutes. That demonstration was a wake-up call: the vulnerabilities in AI systems are not just theoretical; they're real and urgent. Why does this matter? The application of GenAI apps is booming, with over 1.5 billion people interacting with chatbots globally. Yet, security is often overlooked. SplxAI's platform helps companies find vulnerabilities before they cause reputational damage or data breaches. If you're a CISO, CTO, or working on scaling Conversational AI apps in your company, you should pay attention to that. The round is joined by our friends at Runtime Ventures (David), South Central Ventures (Vedran) and prominent founders, and prominent angels David, Elad More info about the round: https://lnkd.in/ekpwFVVS Congrats to the entire Splx team! #AIsecurity #ConversationalAI #chatbots #startups #venturecapital
To view or add a comment, sign in
-
The future is rapidly approaching. This series will explore three key themes: 𝐓𝐡𝐞 𝐅𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐓𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲, Humanity, and Resilience. Drawing from past learnings and current innovations, leading startup experts share their most compelling insights. Let's grow! 𝗡𝗲𝘄 𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆 𝗠𝘂𝘀𝘁 𝗦𝗼𝗹𝘃𝗲 𝗮 𝗥𝗲𝗮𝗹 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 Never assume that because your new technology solves a problem in 𝘺𝘰𝘶𝘳 mind, it will be embraced by your target audience. It’s easy to operate off assumptions about how you think things work. You might even validate them based on industry trends, but that’s not enough. To create valuable technology, you must take time to deeply understand your customer’s challenges and complexities to develop solutions that truly resonate. Christopher Lind is the host of Future-Focused, navigating the intersection of business, technology, and human experience. 𝗠𝗮𝗸𝗶𝗻𝗴 𝗔𝗜 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗮𝗻𝗱 𝗩𝗮𝗹𝘂𝗮𝗯𝗹𝗲 AI is not a universal solution but a strategic tool. Rather than chasing AI trends, startups should focus on targeted applications that solve specific problems. The key lies in thoughtful integration: purposefully selecting where AI can truly boost efficiency and provide measurable value. Success derives from a smart, selective implementation, not blanket technological adoption. Christina Ross is the Founder & CEO of Cube, a cloud-based FP&A platform that helps companies hit their numbers without sacrificing their spreadsheets. 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗶𝗻𝗴 𝗦𝘁𝗮𝗿𝘁𝘂𝗽 𝗩𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗮 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗪𝗼𝗿𝗹𝗱 Generative AI increasingly creates, consumes, and interprets digital content, taking more online actions on our behalf each day. While this technology enriches our lives, it also threatens our livelihoods and mental well-being as we struggle to distinguish reality from deepfakes and authorized agents from unauthorized bots. To secure our digital worlds, we need robust, post-quantum verification systems designed for both humans and agents built on next-generation biometric and blockchain technology. Brett Wilson is the Managing Partner at IOmergent, helping growing companies handle their cyber security capabilities. 𝗔𝗜 𝗥𝗲𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻: 𝗛𝗼𝘄 𝗠𝗶𝗰𝗿𝗼-𝗧𝗲𝗮𝗺𝘀 𝗪𝗶𝗹𝗹 𝗕𝘂𝗶𝗹𝗱 𝗕𝗶𝗹𝗹𝗶𝗼𝗻-𝗗𝗼𝗹𝗹𝗮𝗿 𝗘𝗺𝗽𝗶𝗿𝗲𝘀 AI will enable a new class of startups run by very few people yet capable of reaching a billion-dollar scale. Developers and software engineers will likely be among the first professionals who are displaced. Go-to-market savvy will become the most critical skill, though aspects of this will also be affected by AI. Ammon Brown is the Founder & CEO of Dossy, which puts professional and personal relationship management on autopilot. What insights are missing? Drop your thoughts into the comments! 👇 #leaders #founder #adapt #startups
To view or add a comment, sign in
-
AI in Action: September 23, 2024 This Week's Focus: Hot AI Startups In today’s edition: - Cavela: Make Anything - OffDeal: AI-native Investment Bank for Small Businesses - Glean: Work AI for All - World Labs: Pioneering the Future of Spatial Intelligence AI - Osavul: AI-Powered Security Against Information Threats https://lnkd.in/dcGYMym5
AI in Action: September 23, 2024
aiinaction.substack.com
To view or add a comment, sign in
-
Here are some recent updates from the world of artificial intelligence: 1-Google’s DeepMind CEO on AI Funding and Hype: DeepMind’s chief, Demis Hassabis, has commented on the surge in funding for AI, noting that it brings both hype and potential grifting along with it1. As the field continues to grow, it’s essential to maintain a balanced perspective. 2-AI Startup Cognition Labs Seeks High Valuation: Cognition Labs, a startup founded in November, is in talks with investors to achieve a valuation of up to $2 billion. Earlier this year, they raised $21 million at a valuation of $350 million and turned down offers at $1 billion2. The AI investment landscape remains dynamic. 3-Navigating Cyber Risks in the Age of AI: Organizations are increasingly dealing with cyber risks related to AI. Strategies include understanding regulatory changes, optimizing insurance coverage, and ensuring data transparency3. Remember, the AI field is constantly evolving, and these updates provide a glimpse into the exciting developments happening worldwide! 🌟🤖
To view or add a comment, sign in
-
OpsBerry AI (YC S23) has raised a seed round and $3.8M in total funding to enable companies of all shapes and sizes to protect, secure, and govern their human and non-human identity sprawl across their Cloud, SaaS, and IT Stack. ~$4,810,000!!!!!!!!! - That's the average cost of an identity breach for a company, which doesn't include lost customers, brand reputation, customer trust, and loss of productivity. OpsBerry AI is purpose-built to proactively protect companies from identity breaches. It seamlessly combines AIOps best practices with generative AI so that companies can secure and govern their identity sprawl while staying compliant with little effort. From startups to large enterprises, customers use OpsBerry to save hundreds of hours with AI-driven investigations, real-time behavioral detection, powerful automation, and one-click access reviews. Founded by Casey Wilcox and Carlos Feliciano, OpsBerry AI aims to democratize the "shared responsibility" security model, reduce friction, and empower everyone to action through best practice observability and generative AI. We appreciate all the support throughout our journey!
To view or add a comment, sign in
-
As Chairman and the earliest backer of Kin AI, I am excited to share that Kin has secured 3M USD in seed funding from cyber•Fund and Seier Capital! 🎉 When we started Kin, we wanted to be the solution to a big question: What if AI could truly support professional growth while letting users own their data? The response has been incredible - with thousands of new users embracing our vision of privacy-first, emotionally intelligent AI in just two months... Along with other signals confirming we're on the right track 🙂 This investment will help us: Expand into the US market Develop more advanced support features for modern workers Grow our exceptional team We've seen firsthand how preventive support can transform outcomes. That's why we're building Kin - to provide personalized, ethical AI that enhances emotional intelligence while keeping user data private and secure. We're immensely grateful to our investors and community. The future of personal AI will be self-sovereign, empathetic, and truly personal. Let's keep going! 🚀 ❤️ https://lnkd.in/dZEUK--U #PersonalAI #PrivacyFirst #EmotionalIntelligence #KIN
Personlig AI har fået 15.000 brugere på to måneder. Nu har startuppet bag også fået en millioninvestering fra tunge investorer - TechSavvy
techsavvy.media
To view or add a comment, sign in
-
Many thanks (and much concern) over Illya Sutskever and Daniel Levy (both formerly of Open AI) and their will and drive to launch their own company, SSI—Safe Superintelligence. "Sutskever will continue to focus on safety at his new startup. "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," an account for SSI posted on X. "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures." Sutskever is starting the company with Daniel Gross, who oversaw Apple's AI and search efforts, and Daniel Levy, formerly of OpenAI. The company has offices in Palo Alto, California, as well as Tel Aviv." Questions remain about Illya's optimistic views on AI and his desire to chase AGI or ASI—as though it is a foregone conclusion that we can achieve either. He has also said in his TED Talk that AI is simply a brain inside a computer. But I think that's too close to anthropomorphism (using a lens of optimism) or rather perplexing since we cannot know if AI thinks like us and it lacks our capacity for consequences (due to the absence of lived experiences). There is always the concern about whether a person is high enough on the moral high ground to do groundbreaking work. Everyone has flaws and can make mistakes. Still, it's hard to come out against a company focusing on safety that was started by those who know what insiders are up to (things that the public doesn't know). It's hard to know whether this is a case of someone who needs to be stopped or someone trying to navigate around others who could harm us. But, I'd like someone to be on the right side of this, the human side. I really don't know if we've found them. But sometimes, even I need a bit of optimistic hope that good will find a way. #Ethics #Innovation #Technology #ArtificialIntelligence #Safety #SuperIntelligence #Security Gary Marcus Luciano Floridi Murat Durmus
OpenAI co-founder Ilya Sutskever announces his new AI startup, Safe Superintelligence
cnbc.com
To view or add a comment, sign in
-
No matter how noble their aims, the approach is fundamentally flawed. We shouldn’t be building super-AI for people. That’s unsustainable and irresponsible. We must build it with people from a foundation of data dignity and embedded mechanisms for representative governance. This includes consent, credit, and compensation to creators. This AI isn’t a consumer product; it’s our digital public square that was first ceded to for-profit corporations and their web platforms and now to their supercomputers and AI. Artificial General Intelligence is the wrong way forward. We should empower people to create AI agents that they control, representing their interests. Then, we can build human+digital learning ecosystems with these AI agents. Community knowledge graphs should be built by data collectives that operate like nonprofit credit unions. This approach promotes regenerative augmented dialogical intelligence and provides a societal framework for supercharged cognitive systems.
Practical Ethics Instructor | Advocate for Ethical AI | 2024’s 19 Women to Watch in AI | Author, "AI Ethics in Action" (2025) | Humanitarian Award Recipient
Many thanks (and much concern) over Illya Sutskever and Daniel Levy (both formerly of Open AI) and their will and drive to launch their own company, SSI—Safe Superintelligence. "Sutskever will continue to focus on safety at his new startup. "SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," an account for SSI posted on X. "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures." Sutskever is starting the company with Daniel Gross, who oversaw Apple's AI and search efforts, and Daniel Levy, formerly of OpenAI. The company has offices in Palo Alto, California, as well as Tel Aviv." Questions remain about Illya's optimistic views on AI and his desire to chase AGI or ASI—as though it is a foregone conclusion that we can achieve either. He has also said in his TED Talk that AI is simply a brain inside a computer. But I think that's too close to anthropomorphism (using a lens of optimism) or rather perplexing since we cannot know if AI thinks like us and it lacks our capacity for consequences (due to the absence of lived experiences). There is always the concern about whether a person is high enough on the moral high ground to do groundbreaking work. Everyone has flaws and can make mistakes. Still, it's hard to come out against a company focusing on safety that was started by those who know what insiders are up to (things that the public doesn't know). It's hard to know whether this is a case of someone who needs to be stopped or someone trying to navigate around others who could harm us. But, I'd like someone to be on the right side of this, the human side. I really don't know if we've found them. But sometimes, even I need a bit of optimistic hope that good will find a way. #Ethics #Innovation #Technology #ArtificialIntelligence #Safety #SuperIntelligence #Security Gary Marcus Luciano Floridi Murat Durmus
OpenAI co-founder Ilya Sutskever announces his new AI startup, Safe Superintelligence
cnbc.com
To view or add a comment, sign in
-
🔮 CalypsoAI CEO and Founder Neil Serebryany sees ground-breaking changes ahead in the AI security space. 📅 In the year since they appeared en masse in the business landscape, large language models (LLMs) have moved far beyond being technological curiosities or "tech flexes" 💪 ; they are revolutionary tools with diverse applications and unlimited utility across industry sectors. As their adoption becomes more widespread, LLMs stand to eclipse currently held notions of innovation and efficiency. In this guest post on VMblog, Neil highlights a few of the more important transformations he expects to see in the new year, including: 👉 Data science will become increasingly democratized thanks to foundation models (LLM usage). 👉 The typical enterprise will deal with more than 50 models on a routine basis (and some companies will have hundreds of models in use across their enterprise). 👉 We'll see more and more enterprise use cases for LLMs. 👉 We'll see a model with a one million token context window developed, which will enable the next generation of extremely high context tasks to be executed and operationalized via LLM. Follow the link below to read the full post. #aisecurity #llmsecurity #aigovernance
The One Where The Tech Flex Was Followed By A Quantum Leap
vmblog.com
To view or add a comment, sign in
686 followers