Friday, May 17, 2024

US Issues Stark Warning on AI Risks to Critical Infrastructure


In a bid to safeguard the nation’s vital infrastructure, the federal government is offering a playbook to help companies navigate cybersecurity threats, including the emerging dangers posed by artificial intelligence.

The recommendations, issued by the Cybersecurity and Infrastructure Security Agency (CISA), highlight the need for stronger safeguards as AI is increasingly integrated into essential sectors such as energy, transportation and healthcare. Experts in the field are closely examining the guidelines, offering insights and additional recommendations to bolster the nation’s defenses against potential AI-related disruptions and attacks.

“AI systems are vulnerable to hackers primarily because they are software applications built by engineers,” Chase Cunningham, vice president of security market research at G2, told PYMNTS. “They can harbor flaws in their source code, often incorporate open-source components with their own vulnerabilities, and typically operate on cloud infrastructures, which, despite their advances, remain susceptible to security threats.”

AI is not only a threat but is also revolutionizing how security teams combat cyber threats, streamlining their processes for greater speed and efficiency. By analyzing extensive data and detecting intricate patterns, AI can automate the initial phases of incident investigation, letting security professionals begin their work with a comprehensive grasp of the situation and accelerating response times.
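To make that concrete, here is a minimal sketch of the kind of automated anomaly triage described above, using scikit-learn’s IsolationForest. The event features and thresholds are illustrative assumptions for this example, not drawn from the CISA guidelines.

```python
# Minimal sketch: flagging anomalous events for incident triage.
# Assumes scikit-learn is installed; features and contamination
# rate are illustrative, not tuned for any real environment.
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-event features: bytes transferred, request rate, failed logins
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500, 10, 0.1], scale=[50, 2, 0.05], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# New events: one ordinary, one suspicious (huge transfer, burst of failures)
new_events = np.array([[510.0, 11.0, 0.0], [50000.0, 200.0, 30.0]])
labels = model.predict(new_events)  # -1 = anomaly, 1 = normal

for event, label in zip(new_events, labels):
    status = "ANOMALY - escalate to analyst" if label == -1 else "normal"
    print(f"event={event.tolist()} -> {status}")
```

In a real deployment, the flagged events would be enriched with context (asset owner, recent changes, related alerts) before an analyst ever sees them, which is the head start the paragraph above describes.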

A Growing Threat

The guidelines stress a comprehensive approach, urging operators to understand their dependencies on AI vendors and to catalog their AI use cases. They also advocate for critical infrastructure owners to establish protocols for reporting AI security threats and to assess AI systems for vulnerabilities regularly.

The guidelines outline opportunities for AI in operational awareness, customer service automation, physical security and forecasting. However, the document also cautions about potential AI risks to critical infrastructure, including attacks enabled by AI, attacks targeting AI systems themselves, and flaws in AI design and implementation that could result in malfunctions or unintended consequences.

“Based on CISA’s expertise as national coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” CISA Director Jen Easterly said in a statement.

The rise of AI has brought about both new attack methods and chances for more deceptive hacking tactics, Schellman CEO Avani Desai told PYMNTS. For instance, there’s been a surge in highly automated and effective phishing campaigns and other black hat applications. Additionally, AI has raised concerns regarding the rightful ownership and appropriate use of intellectual property.

“Because AI must be trained on large datasets for it to be effective, and many of the sources can include [personally identifiable information (PII)], medical, and other sensitive and potentially private information, users of generative AI can also input sensitive information into these tools, which raises privacy concerns,” Desai said.
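One common, if partial, mitigation for the concern Desai raises is to scrub obvious PII from prompts before they reach a generative AI tool. The sketch below is a minimal regex-based illustration; the patterns are assumptions and far from exhaustive, and production systems typically rely on dedicated PII-detection tooling.

```python
# Minimal sketch: scrubbing obvious PII from text before it is sent to a
# generative AI tool. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Patient reachable at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(redact(prompt))
# Patient reachable at [EMAIL REDACTED] or [PHONE REDACTED], SSN [SSN REDACTED].
```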

Some experts say the new federal guidance doesn’t go far enough. Cyber defenses must be much more collaborative, Asaf Kochan, president and co-founder of Sentra, a cybersecurity company, told PYMNTS.

“This means that everyone must do their part,” he said. “Businesses working in critical infrastructure must take steps to protect themselves and their customers from AI cybercrime, meaning they should adopt comprehensive security solutions that can keep pace with AI-generated threats and that run on modern equipment.”

Keeping Infrastructure Safe

To improve AI security, businesses should focus on critical defenses such as rigorous testing of open-source components, implementing code signing, and employing a software bill of materials (SBOM) with provenance verification, Kodem Security CEO Aviv Mussinger told PYMNTS. Continuous monitoring for vulnerabilities is essential to head off potential security threats and ensure robust protection for AI systems.
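As one illustration of the SBOM checks Mussinger describes, the following sketch parses a CycloneDX-format SBOM and flags components that appear on a deny list. The file name and deny list entries are hypothetical; a real pipeline would query a vulnerability database such as OSV or the NVD rather than a hardcoded set.

```python
# Minimal sketch: checking components from a CycloneDX JSON SBOM against a
# deny list of known-vulnerable versions. "sbom.json" and the deny list are
# hypothetical placeholders for this example.
import json

KNOWN_VULNERABLE = {  # (name, version) pairs flagged by some advisory feed
    ("log4j-core", "2.14.1"),
    ("openssl", "1.1.1k"),
}

with open("sbom.json") as f:  # CycloneDX JSON document
    sbom = json.load(f)

# CycloneDX stores dependencies in a top-level "components" array
findings = [
    (c.get("name"), c.get("version"))
    for c in sbom.get("components", [])
    if (c.get("name"), c.get("version")) in KNOWN_VULNERABLE
]

for name, version in findings:
    print(f"FLAG: {name} {version} appears on the known-vulnerable list")
if not findings:
    print("No flagged components found")
```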

“The surge in AI-generated code transforms software development, necessitating more agile and integrated security measures,” he said.

To stay safe in today’s fast-paced digital world, organizations can keep their SBOMs and Vulnerability Exploitability eXchange (VEX) documents up to date through DevSecOps practices, Mussinger said. That way, they can maintain security and compliance while keeping pace with rapid development. The approach also addresses the challenges posed by AI, providing thorough protection in a constantly changing threat environment.
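A simplified sketch of how VEX data focuses that work: supplier statements marking a finding as not affected let teams suppress scanner noise and triage only what is exploitable. The structures below loosely mirror the OpenVEX idea, but the field names are simplified assumptions rather than a faithful rendering of any one VEX format.

```python
# Minimal sketch: using VEX statements to suppress non-exploitable findings.
# Field names are simplified assumptions for illustration.
scanner_findings = [
    {"component": "libfoo", "cve": "CVE-2024-0001"},
    {"component": "libbar", "cve": "CVE-2024-0002"},
]

vex_statements = {
    # (component, cve) -> exploitability status asserted by the supplier
    ("libfoo", "CVE-2024-0001"): "not_affected",  # vulnerable code not reachable
    ("libbar", "CVE-2024-0002"): "affected",
}

actionable = [
    f for f in scanner_findings
    if vex_statements.get((f["component"], f["cve"])) != "not_affected"
]

for finding in actionable:
    print(f"Triage: {finding['component']} / {finding['cve']}")
```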

“AI systems must be designed with security in mind,” he said. “An AI system that is secure by design reduces the risk of downstream threats once the system is built and running. Secure by design principles must be incorporated into each development lifecycle phase. This is a best practice for the development of any mission-critical system and not just AI systems.”

