The Catch-22 of Cybersecurity in the Age of AI

April 18, 2024 | Legal Alerts

Although artificial intelligence (“AI”) improves how businesses interact with customers, process sales, manage inventory and more, it also heralds new and unique cybersecurity risks. These risks can lead to unprecedented legal liabilities. Companies must understand how AI works, consider how AI is deployed throughout their business structure and take proactive steps to minimize security risks.

AI is generally any technology that utilizes an algorithm to process data, form new inferences and make predictions beyond the original data set given to it by humans. The use of AI to make hiring decisions, offer 24/7 customer relations chatbots, ensure regulatory compliance, vet supply chains and manage data is widespread.[1] The AI arms race is underway as innovators across different sectors look for the next significant advantage over their competitors.

However, the market demand to integrate AI also introduces new public-facing service endpoints, creating more opportunities for cyberattack. Faster data processing also means errors are more likely to slip through. Worse, fraudsters can use AI to find weaknesses, penetrate security and extract data in less time. AI-powered cyberattacks are easier to execute and more precise because AI can spot patterns in a business’s security and extract data faster than humans can.[2] To name just a few examples:

  • AI accelerates brute force password cracking, bypasses CAPTCHA verification and can adjust tactics in real time.[3]
  • Third-party plugins for LLMs such as ChatGPT can be used to launch injection attacks, may contain potentially dangerous natural language ambiguities and often enforce weak security policies, leading to improper harvesting of user data, redirection of the authentication flow, misinformation and more.[4]
  • Through deepfakes and other AI that replicates a person’s face, likeness, body, speech or writing patterns, fraudsters can mount highly targeted, convincing spear phishing attacks: impersonating a trusted source to penetrate a company’s cybersecurity defenses and launch a mass hacking campaign.[5] AI-powered spear phishing messages often lack the telltale grammatical mistakes, odd brevity and other signs that employees rely on to spot and report phishing emails.[6] Existing deepfake detectors exhibit high error rates, as well as gender and racial bias.[7]

In sum, the rapid pace of AI development increases the risk that flaws go undetected, leaving businesses vulnerable. Unprepared companies will soon face a plethora of legal risks:

  • Cyberattacks that compromise personally identifiable information may trigger expensive disclosure requirements under federal or state law, depending on who is affected, how many individuals are affected and what information is compromised. Data breaches are likely to become more expensive because AI can access more of a victim’s data in less time.
  • The Federal Trade Commission (FTC), the federal agency that enforces consumer protection and data security laws, recently signaled its intent to reduce consumer data breach notice fatigue and increase follow-through on post-breach remedies such as credit monitoring.[8]
  • Failure to comply with standards for cybersecurity systems may lead regulators to impose fines.
  • The failure to take appropriate precautions against cyberattacks may constitute a breach of the fiduciary duties of supervision, care and loyalty.
  • Misuses of AI may violate social norms, alienate customers and other stakeholders and lead to reputational damage. For example, Amazon’s facial recognition technology has been accused of gender and racial bias,[9] and Google has faced backlash for helping the federal government analyze drone footage through AI.[10]

These vulnerabilities are all the more dangerous to systems that lack AI-based countermeasures. Businesses should integrate AI into their cybersecurity systems to help prevent, detect and respond to data breaches. AI’s ability to quickly process data allows for comprehensive vulnerability scans, early detection of fraud through analysis of patterns in emails as they enter a user’s inbox, reviews of historical data to locate the root cause of an incident, streamlined communications to the appropriate team members, and categorization and triage of incidents based on their potential harm to the organization.[11] Additionally, companies should:

  1. Maintain communication with their IT/cybersecurity team and conduct at least annual reviews of cybersecurity policies and weaknesses, including a review of what data is being used to train any AI the company utilizes.
  2. Understand the technology underlying AI, its potential for failure, regulators’ expectations, what public statements about AI use are credible and how AI fits into broader governance and corporate culture concerns.
  3. Provide regular training for executives and employees on the company’s use of AI, spear phishing and other cybersecurity risks in the age of AI.
  4. Review cyber insurance policies for gaps in coverage that AI may create. The rise in cyberattacks has driven up cyber insurance premiums, and most companies remain insufficiently covered.[12] Even “[f]or businesses with cyber insurance coverage, AI risks may not fit neatly into the language of those policies, and insurers may argue that exclusions or gaps in policy wordings apply.”[13] Thus, companies should also consider obtaining AI-specific insurance coverage.
  5. Retain a data incident response vendor and legal counsel with expertise in AI before an incident occurs.

For questions regarding this evolving landscape, or if you require assistance updating your cybersecurity policies to account for AI risks, contact Richik Sarkar, Evan Yahng or your Dinsmore attorney.


[1] See Alfred R. Cowger, Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 J. of Law, Tech. & The Internet 138, 158-59 (2023); Deciphering AI Risk Insurance: Beyond Cyber Coverage, ProgramBusiness (Aug. 31, 2023).

[2] Rachel Curry, Companies want to spend more on AI to defeat hackers, but there’s a catch, CNBC (Oct. 3, 2023) (https://www.cnbc.com/2023/10/03/companies-spending-more-on-ai-to-defeat-hackers-but-theres-a-catch.html).

[3] Steve Alder, Editorial: Why AI Will Increase Healthcare Data Breaches, The HIPAA Journal (Oct. 12, 2023) (https://www.hipaajournal.com/editorial-why-ai-will-increase-healthcare-data-breaches/).

[4] Umar Iqbal et al., LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins (Fed. Trade Comm’n PrivacyCon, 2024) (https://arxiv.org/pdf/2309.10254.pdf).

[5] Josianne El Antoury, How Insurance Policies Can Cover Generative AI Risks, Law360 (Oct. 4, 2023).

[6] Alder, Editorial: Why AI Will Increase Healthcare Data Breaches.

[7] Mehrdad Saberi et al., Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks (ICLR, 2024) (https://arxiv.org/pdf/2310.00076.pdf); Yan Ju et al., Improving Fairness in Deepfake Detection (2024) (https://openaccess.thecvf.com/content/WACV2024/papers/Ju_Improving_Fairness_in_Deepfake_Detection_WACV_2024_paper.pdf).

[8] Chair Lina Khan, Federal Trade Commission, Remarks at the Federal Trade Commission PrivacyCon (Mar. 6, 2024).

[9] James Vincent, Gender and racial bias found in Amazon’s facial recognition technology (again), The Verge (Jan. 25, 2019) (https://www.theverge.com/2019/1/25/18197137/amazon-rekognition-facial-recognition-bias-race-gender).

[10] Matthias Holweg et al., The Reputational Risks of AI, California Management Review (Jan. 24, 2022).

[11] Matthew White & Justin Daniels, Using AI in Cyber Incident Response Demands a Total Safety Check, Bloomberg Law (Nov. 14, 2023) (https://news.bloomberglaw.com/us-law-week/using-ai-in-cyber-incident-response-demands-a-total-safety-check).

[12] Nirmal Kumar J, How AI will boost cyber insurance industry growth, Tata Consultancy Servs. (https://www.tcs.com/what-we-do/industries/insurance/white-paper/ai-overcoming-cyber-insurance-industry-challenges).

[13] El Antoury, How Insurance Policies Can Cover Generative AI Risks.