The cybersecurity and privacy field is currently in the midst of an “AI moment,” where every vendor, expert, and article about the future of cybersecurity describes how artificial intelligence (AI) is transforming operations and will continue to change the way cybersecurity and privacy professionals work. However, AI has taken on the life of a mysterious fix-all for any problem we may face, from cyber threats to institutional confusion to workforce shortages. What gets lost in this noise are the two AI discussions we should be having: when AI can genuinely help, and when AI can do more harm than good.
When AI isn’t the cure in privacy and security
Professionals face a myriad of challenges in healthcare security and compliance, and the unfortunate reality is that most of these challenges are poorly suited to AI. AI is not a best-practice solution in three critical areas: culture change, creative problem-solving in investigations, and human ethical judgement.
First, culture change is all about human behavior, which is incredibly complex and requires creativity and judgement. While AI can support accountability and help prioritize work in ways that save time, culture change requires human connection and an intuitive knowledge of what an organization needs. AI can inform these efforts, but it cannot set long-term vision or shift culture on its own.
AI also has limitations when it comes to insight and creativity in investigations. AI can readily produce natural-language descriptions of the facts, explain why those facts add up to a suspicious case, and surface the information needed to decide whether something constitutes a breach. But the final call on what that information means should rest with an actual person.
Lastly, we must remember that privacy and security jobs are filled with hard choices that impact everyone involved. The uniquely human ethical judgement of each privacy and security officer allows them to weigh all considerations to reach the fairest conclusion, something AI can't do on its own.
When AI is just what the doctor ordered
However, there are three important ways that AI can help with cybersecurity and privacy management: challenges involving confusion, scale, and repetition.
Healthcare privacy is complicated because health system information is broadly accessible and tens of thousands of users can view patient data, which makes it nearly impossible to differentiate between appropriate and inappropriate access by hand. AI is well suited to these cases, as it can automatically combine and analyze all of the relevant factors, complete with the context in which each access occurs.
It's impossible to manually look through the tens of millions of daily accesses to patient data. That's where AI can help: these technologies can review every single access and place it in its appropriate context. By feeding the outcomes of those reviews back in, privacy and security professionals can train AI programs to get smarter over time and send only the alerts that truly need the team's attention.
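As an illustration of what "placing an access in context" can look like, here is a minimal sketch in Python. It is not any specific product's method: the features (known patient, known department, typical hour) and the threshold are invented for the example, and real systems use far richer signals.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Access:
    user: str
    patient: str
    department: str
    hour: int  # hour of day, 0-23

def score_access(access, history):
    """Score one access against the user's own history; higher = more unusual.

    Hypothetical features: has this user touched this patient or this
    department before, and is the hour of access typical for them?
    """
    past = [a for a in history if a.user == access.user]
    if not past:
        return 1.0  # no baseline at all: treat as maximally unusual
    score = 0.0
    if access.patient not in {a.patient for a in past}:
        score += 0.3  # never accessed this patient before
    if access.department not in {a.department for a in past}:
        score += 0.4  # working outside their usual department
    hours = Counter(a.hour for a in past)
    if hours[access.hour] == 0:
        score += 0.3  # accessing at an hour they never work
    return score

def triage(accesses, history, threshold=0.5):
    """Keep only the accesses unusual enough to merit human review."""
    return [a for a in accesses if score_access(a, history) >= threshold]
```

With this kind of filter, a new patient seen by a user inside their usual department at a usual hour scores low and generates no alert, while an out-of-department, after-hours access rises above the threshold and is routed to a reviewer.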
A major pain point for health systems is not only the scale but the repetitive nature of these accesses to patient data. A huge amount of repetitive, time-consuming work goes into investigating cybersecurity or privacy incidents. AI and automation are well suited to these tasks: characterizing user behavior, summarizing the facts of a case, and assembling the details are all about gathering data, processing it, and presenting it for review.
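The "gather, process, present" pattern above can be sketched in a few lines of Python. The record fields and summary categories here are illustrative, not drawn from any real system: the point is that assembling a case file from raw access records is mechanical work a program can do before a human ever looks at it.

```python
from collections import Counter

def summarize_case(user, accesses):
    """Condense raw access records into the fact sheet an investigator
    would otherwise assemble by hand. Field names are hypothetical."""
    mine = [a for a in accesses if a["user"] == user]
    departments = Counter(a["department"] for a in mine)
    patients = {a["patient"] for a in mine}
    # "After hours" defined here, arbitrarily, as before 6 a.m. or after 8 p.m.
    after_hours = sum(1 for a in mine if a["hour"] < 6 or a["hour"] > 20)
    return {
        "user": user,
        "total_accesses": len(mine),
        "unique_patients": len(patients),
        "departments": dict(departments),
        "after_hours_accesses": after_hours,
    }
```

The investigator still makes the judgement call; the automation simply hands them the facts already counted, grouped, and ready for review.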
Doing an AI privacy/security checkup
Privacy and security professionals should reflect on the pain points of their team and organization, and see how those match up to the abilities and limitations of AI. If the main problems are high volumes of false positives and too little time to get the work done, an AI-based solution could be invaluable.