"They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
AI, or artificial intelligence, is at the forefront of most organizations' agendas, and government entities are no exception, including the Department of Homeland Security (DHS). One does not have to look far to understand how concepts like AI are shaping the direction of our national security. For example, a report titled "Artificial Intelligence – Using Standards to Mitigate Risks" was published last year by the Public-Private Analytic Exchange Program. This group, launched by the DHS Office of Intelligence & Analysis and the Office of the Director of National Intelligence, brought together government and private sector experts to discuss AI usage and how to mitigate the risks associated with this technology.
What is less clear is where data privacy fits in the context of national security. As citizens of the United States and of the Internet, we regularly see stories about personal data being stolen or used without our awareness or consent, which in turn prompts inquiry and investigation. The European Union (EU) has gone so far as to implement the General Data Protection Regulation (GDPR), which was designed to protect data and privacy for all people within the EU and the European Economic Area. With the current emphasis on personal privacy, where do personal rights intersect with national security?
Per the DHS website, there are five homeland security missions:
1. Prevent terrorism and enhance security
2. Secure and manage our borders
3. Enforce and administer our immigration laws
4. Safeguard and secure cyberspace
5. Ensure resilience to disasters
In support of these, the federal government has passed laws, such as the 2001 Patriot Act, which grants law enforcement greater surveillance powers over those suspected of terrorism-related crimes, facilitates information sharing among government agencies, and supports other homeland security activities. As one might expect, any gains in national security must be tied to an increased ability to gather and derive intelligence from data. However, these laws and reports, such as "Artificial Intelligence – Using Standards to Mitigate Risks," do not focus on data privacy at an individual, enterprise, or governmental level. So where is the connection between AI and data privacy?
Data privacy issues are present throughout the lifecycle of AI. For example, the process by which data is initially collected and aggregated with other sources may lack controls to determine whether a privacy line has been crossed; bots might be employed to 'scrape' the Internet for personal information, some of which should not be accessible to the public. Second, once this data is consumed by machine learning or AI solutions, the intelligence gathered may be greater than the sum of its parts. Put another way: if one of the characteristics of AI is simulating natural intelligence to solve complex problems, then one way this is accomplished is by finding correlations between data that help inform and predict. Individually, these pieces of data may not generate much value, but once the relationships between data sources are identified, much richer insight can be gleaned. Given the reach and scope of the federal government, as well as the ubiquitous nature of the Internet, it is safe to say that its ability to access multiple sources of unique and shared data is tremendous.
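To make the "greater than the sum of its parts" point concrete, here is a minimal sketch of how correlating two individually innocuous datasets can reveal something neither exposes on its own. All data, field names, and the `correlate` helper below are hypothetical, purely for illustration:

```python
# Source 1: hypothetical publicly scraped profiles (username -> city).
profiles = {
    "jdoe": {"city": "Arlington"},
    "asmith": {"city": "Denver"},
}

# Source 2: a separate hypothetical scraped dataset (username -> employer).
employment = {
    "jdoe": {"employer": "Acme Defense Corp"},
}

def correlate(source_a, source_b):
    """Join two datasets on their shared keys, merging each record."""
    merged = {}
    for key in source_a.keys() & source_b.keys():  # usernames present in both
        merged[key] = {**source_a[key], **source_b[key]}
    return merged

# Neither source alone ties a person's city to their employer; the join does.
linked = correlate(profiles, employment)
print(linked)  # {'jdoe': {'city': 'Arlington', 'employer': 'Acme Defense Corp'}}
```

Scaled up to many sources and machine-learned correlations rather than exact key matches, this is the aggregation dynamic described above, and the point at which privacy controls on collection alone become insufficient.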
So, where does the line between the greater good of homeland security and data privacy lie? Is there one? While the federal government has enacted numerous sector-based laws, some of which touch upon privacy (e.g., the Health Insurance Portability and Accountability Act and the Fair Credit Reporting Act), there is no single piece of federal legislation that principally focuses on data protection and privacy, like the EU's GDPR. Most states have adopted laws protecting their residents' personally identifiable information, but this does not address how data privacy and homeland security coexist at a national level.
This is not to suggest that DHS and other intelligence agencies shouldn't continue leveraging AI to help secure our country. If for no other reason, threat actors and nation-states are adopting this technology for their own purposes, often at odds with the United States. For example, AI is thought to be making polymorphic attacks even more effective, increasing the speed and efficacy with which identifiable attack attributes change to avoid detection by advanced cybersecurity tools. Another example is a spear phishing experiment conducted by the security firm ZeroFox. Their AI tool sent spear phishing tweets to over 800 people at a rate of 6.75 tweets a minute, ensnaring 275 of them. Their human counterpart was only able to send malicious tweets to 129 users at a rate of 1.075 tweets a minute, ensnaring only 49 individuals. Examples like these make it imperative that we use tools like AI to protect our national security.
Until the United States defines its position on data privacy and protection, other legitimate and pressing national needs, such as homeland security, will continue to move forward in the advanced use and application of data-driven technologies such as AI and machine learning. As we continue to push the private sector to be privacy-centric with its services and technology, so too must the US government, starting with a clear message about data privacy and protection.