In November 2024, the Department of Homeland Security (DHS) unveiled a new framework outlining the roles and responsibilities of key stakeholders in the secure and responsible use of Artificial Intelligence (AI) across the critical infrastructure sectors. In this blog post, Unissant’s Damodara (DP) B. shares five key takeaways from the new framework. #AI #ArtificialIntelligence #CriticalInfrastructure #DHS #MachineLearning #DigitalTransformation #Cybersecurity
Agree that the inclusion of "civil society" as a key participant in the process stands out in this new framework. Thanks for breaking down these key points, DP!
Great advice!! Thanks for the key points, DP. Agree on all fronts!!
Great insights, DP! The new roles and responsibilities framework from DHS is definitely a game-changer. It's impressive to see how it emphasizes agility and collaboration. Looking forward to seeing how this will drive innovation in our industry. Thanks for sharing!
Great article! However, I'm curious in particular about "3. Human-centric, Responsible AI." Where and how do we draw the "line in the sand" for unbiased, ethical, and moral decision-making in AI, particularly in health and life sciences? Balancing human-centric values while leveraging secure-by-design principles seems crucial, but in high-risk contexts like critical infrastructure within the health industry, how can we ensure AI models consistently prioritize fairness, transparency, and helpfulness without unintended consequences? I'd like to hear perspectives on integrating AI with human oversight. Perhaps we're not far from the "Three Laws" as in I, Robot!? 🤖 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law. 3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.