Resaro

IT Services and IT Consulting

Full Assurance with Approved Intelligence

About us

Resaro was founded on the belief in AI's profound potential to enhance our world beyond imagination. But with every leap forward in innovation comes the need for safeguards. Resaro offers independent, third-party assurance of mission-critical AI systems with unparalleled breadth and depth. We accelerate responsible, safe, and robust AI adoption for enterprises through technical testing and evaluation of AI systems against emerging regulatory requirements. For enterprises, this means informed trust as they purchase third-party AI solutions; for AI vendors, a stamp of approval. Our mission is to enable an AI market worthy of trust. We are a premier member of IMDA’s AI Verify Foundation and are committed to enhancing open-source tools that are globally interoperable as an enabler of trust in AI.

Website
https://resaro.ai
Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
Singapore
Type
Privately Held
Founded
2023
Specialties
Artificial Intelligence, Machine Learning, Data Science, AI Safety, AI Red Teaming, AI Stress Testing, and Algorithmic Testing

Updates

  • Resaro reposted this

    Defense and security are vital areas for AI, ensuring national safety and resilience. AI supports defense by detecting risks, optimizing resources, and enabling critical operations. In security, it safeguards against cyberattacks, misinformation, and system vulnerabilities. As challenges become more complex and the mass of information and data grows, AI plays an essential role in keeping people safe and prepared. 🛡️

    Given the importance of defense and security in AI, ⭕️KI Park has launched an expert group in this field, with a kick-off event on 3 December. Opened and led by Dr. Florian Schütz (KI Park) and Lars Ruth (Deloitte), the event brought together experts from Aleph Alpha, Deloitte, Resaro, the German Federal Ministry of Defence (Bundesministerium der Verteidigung) and many more, who shared their insights into leveraging AI for defense and security. Here are some key takeaways from the day:

    🛡️ Sovereignty is crucial for developing AI systems that maintain secure and independent operations.
    🔍 Transparency and reliability ensure AI solutions can be trusted and effectively governed.
    🌍 AI is key to addressing challenges like cybersecurity, cyber information warfare, and integrating operations across land, air, sea, and cyber domains.
    🖼️ Detecting deepfakes requires strategies that leverage both real-world and synthetic datasets to combat misinformation and digital threats.
    🚁 Practical applications of AI include digital situational awareness tools and integration into defense systems such as drones.
    🤝 Partnerships and trustful collaboration in startup ecosystems are essential for driving innovation and deploying technologies fast.

    If you’re interested in joining the discussion, these expert group meetings will be recurring. Reach out to Dr. Florian Schütz at groups@kipark.de to inquire about participation! 📩

    Olly Salzmann, Alexander G., Rupprecht Rittweger, Valentin Dewenter, April Chin, Gary Ang

  • Resaro reposted this

    April Chin

    AI assurance leader | Pioneering AI for good with approved intelligence

    Lewis and Bono: One Last Lap. https://lnkd.in/gWaEFm-X

    I've watched this video so many times this week 🥲 There's something precious about going through thick and thin together, achieving the impossible and persevering through the hard times. I also chuckled when Bono said to Lewis: "I don't know why you hate testing. That is so much fun." The stereotype of testing is that it's boring, mundane, behind-the-scenes, forgotten yet important. As Lewis said, "You can’t tell my story without including Bono’s hard work and all the hours he’s put in." And I believe the Brackley team played a big part in his journey to the top of Formula 1.

    We are here to redefine AI testing and evaluation and bring it to new heights. Driving AI to peak performance, safety and security will require top-notch testing tools and evaluation methods. We are a team of pathfinders, passionate about our mission to ensure AI is worthy of trust. There is no Resaro without our team, and I'm really proud of what we have achieved as we close the year. We are moving forward as one team in Singapore and Europe. Come join us if you want to make a mark in AI testing and evaluation 💪


  • Resaro · 758 followers

    AI in medical imaging has the potential to act as a “second set of eyes,” complementing human expertise and revolutionising diagnostics. The promises of AI’s technical capabilities are impressive and hard to ignore. For instance, AI can detect subtle abnormalities that might otherwise go unnoticed by the human eye. 🩻👀 But the real challenge lies in ensuring that it delivers effectively, integrates seamlessly, and proves valuable in real-world clinical settings. In the recent PRIME-CXR initiative, National Healthcare Group (Singapore) engaged us to evaluate the feasibility of deploying an AI-powered chest X-ray (CXR) analysis solution within a triage framework for CXR assessments. Read more. #AI #healthcare #radiology #AItesting

    Evaluating the use of AI in the deployment setting of a primary healthcare triage use case

  • A big thank you to everyone who attended Responsible AI Day, which we organised in collaboration with HTX (Home Team Science & Technology Agency)! It was a fruitful morning to drive much-needed conversations on value-driven AI governance and development testing. We shared global practices on AI governance across jurisdictions and law enforcement agencies, and had a robust discussion around why testing AI early is crucial — whether it is for scaling, building stakeholder trust, or setting operational thresholds from day one. AI must be trusted to work for all of us, not against us. That is why we need to lay the groundwork now, rather than waiting to react when things go wrong. Let’s keep these conversations rolling. 🤖💬 Huge thanks to the HTX team for partnering with us on this initiative!  Kian Boon Lim, Cheryl Tan, Chun Liang, Ron Ong, Foo Rong Rong, April Chin, Miguel Fernandes

    GLUE, GOAT, NGL, ROUGE, and SMH—can you tell which of these acronyms are evaluation metrics for Large Language Models (LLMs) and which are Gen Z slang?

    From training HTX’s LLMs to understanding global practices on AI governance, our recent Responsible AI Day, organised by HTxAI Enterprise Group, highlighted the importance of integrating privacy, security, and governance to establish and operationalise robust responsible AI policies and practices. We were joined by Resaro, an independent third-party AI test lab, who shared global practices on AI governance across jurisdictions and law enforcement agencies. Their presentation also touched on how AI systems can be tested to enhance performance and build user confidence.

    The event drew an engaged audience of about 200 participants attending both in-person and virtually, and concluded with a lively Q&A session. Find out more about what we do at HTX in the fields of data science and AI: https://lnkd.in/g3ZX7MUW #HTXsg #HTxAI #LargeLanguageModel #LLM #ArtificialIntelligence #AI

    P.S. GLUE and ROUGE are evaluation metrics for LLMs.

  • Is generative AI a super productivity engine, an adversarial agent, or an information leaker and safety hazard? 😲 As April Chin, our Managing Partner and CEO (Singapore) puts it, frustratingly, it can be all of the above. In our interactions with enterprises, we see 3 competing worldviews on GenAI: Innovators pushing boundaries, Hunters focusing on security, and Custodians emphasising responsibility. As enterprises seek to progress beyond the “first inning” of using #genAI to boost productivity, we need to maximise #AI's potential while effectively managing its risks. This leap demands a sharp focus on maximising innovation and performance at the intersection of AI safety and security. 🔗 Read our full article on DigiconAsia: https://lnkd.in/gcb4YPSQ #resaro #aisafety #aisecurity

  • Resaro reposted this

    Timothy Lin

    Data Scientist | Open-Source Developer ✨

    Some personal highlights of #SFF2024 from the Trusted AI Ecosystem booth:

    - Shameek Kundu back on his home turf, this time representing #AIVerify and simplifying all the jargon of LLM testing and evaluation!
    - Dr Vishal and Jordan Seow from H2O.ai talking about approaching LLM evaluation from a model risk management lens and spreading the goodness of conformal prediction methods
    - Sonya's sharing about the joint discussion whitepaper between the Cyber Security Agency of Singapore (CSA) and Resaro
    - Tommy Yong and Alexandre's example of deploying, monitoring and safeguarding chatbot applications on the DataRobot platform
    - Creating deepfakes of myself and getting them evaluated in a test lab setting
    - Chatting with past, present and maybe future collaborators from the Monetary Authority of Singapore (MAS), Accenture, and IMDA, and sharing about our work with the broader public

    Xuchun Li, Qiang Zhang, Sang Hao Chung, Noel Chau, Cyrus Chng, JX Wee (黃佳賢) ☁, Kevin Lee

  • 🛖 It takes a village to make real progress, because the whole is greater than the sum of its parts. This is why we’ve always seen the need to promote greater connectivity and collaboration in the responsible AI ecosystem. Partnering with AI Verify Foundation — and as a third-party tester — we’re contributing to open-source testing tools to make responsible AI accessible for enterprises, big or small. 🤝 Learn more in the full article.  “As a premier member of the AI Verify Foundation, Resaro has been a fantastic source of insights around AI assurance and testing. Resaro has also played an important role in advancing open-source AI testing frameworks, ensuring developers and enterprises have access to robust tools for responsible AI governance.” — Shameek Kundu, Executive Director, AI Verify Foundation

    Resaro partners with IMDA’s AI Verify Foundation to advance the development of open-source testing frameworks and toolkits for responsible AI

  • The manufacturing industry is on the verge of a major shift thanks to AI-driven applications. Machines can now predict their own failures, ensuring uninterrupted production and pushing defects to the sidelines. Factories are running smoother and faster than ever before. At this year's Industrial Transformation ASIA-PACIFIC (ITAP), our Managing Partner and CEO (Singapore), April Chin, joined a panel to discuss AI standards and regulations relevant to manufacturing. The panel unpacked ISO 42001, TR 99, the EU AI Act, and more—relevant frameworks for manufacturers as AI becomes increasingly integrated into their workflows. April also touched on ‘AI Assurance,’ and how AI systems should be effective, trustworthy, reliable, and meet these standards and regulations. 🔍 Thanks to Laurence Liew (AI Singapore) for moderating, and to panellists Martin Saerbeck (AIQURIS - A TUV SUD Venture), Ng Joey Yuen (Advanced Remanufacturing and Technology Centre (ARTC)), and Kong Soon Chak (Stream Global) for the meaningful dialogue on how manufacturers can stay aligned with evolving guidelines while pushing innovation forward. Until next time! Enterprise Singapore Singapore Standards Council #ITAP2024 #manufacturing #AIstandards #AIregulations #AI

    • ITAP 2024
    • ITAP 2024
  • The adoption of both traditional and generative #AI will exacerbate existing security risks and introduce new challenges. As with existing digital systems, malicious attacks on AI can undermine user confidence in the technology and ultimately limit the value users derive from it.

    1️⃣ In collaboration with the Cyber Security Agency of Singapore (CSA), we are pleased to announce the release of a joint discussion paper, ‘Securing AI: A Collective Responsibility’. The paper explores the complex and evolving threats associated with AI, emphasising the necessity of a multi-stakeholder approach to effectively safeguard AI systems. Get the paper now: https://lnkd.in/gUqqeab4

    2️⃣ We are also pleased to support the release of CSA’s first edition of the ‘Guidelines and Companion Guide on Securing Artificial Intelligence Systems’. Developed with AI and cybersecurity practitioners, these resources set out best practices for securing AI systems and provide practical measures for system owners to mitigate AI security risks. Link to the guide: https://lnkd.in/eJwpiaX4

    #SICW2024 #resaro #cybersecurityweek #digitaltrust #globalinnovation #digitalfuture

  • Last week’s AI Verify community meetup was packed with great discussions! 🤝✨

    Shameek Kundu opened the session by sharing AI Verify Foundation's vision, highlighting collaborative knowledge sharing within the trusted AI ecosystem. Chen Hui Ong shared updates on Project Moonshot — an open-source toolkit by the Foundation aimed at tackling generative AI risks, which includes red teaming and benchmarking LLMs. Afterward, Janet Chiew conducted a live demo of Project Moonshot. Jordan Seow from H2O.ai offered insights into lessons we can draw from banking model risk management, while Han Yang Quek and Marcus from Ensign InfoSecurity examined the topic of AI model testing via adversarial AI. Li Ming Tsai from Red Hat highlighted open source's crucial role in enhancing security, trust, transparency, and accessibility while maintaining neutral governance. April Chin from Resaro touched on what AI assurance is, and the various approaches that enable enterprises to shift from blind trust to informed trust.

    If you are curious to know more about AI assurance, feel free to drop us a message at contact@resaro.ai. Incredible to see everyone come together to advance the future of responsible AI. Thank you for joining us, and see you next time! #weareResaro #buildingtrustworthyAI #responsibleAI
