Security

The importance of security in AI systems

AI systems should be designed and used in a way that ensures their security, including protection against cyber-attacks and unauthorized access. Data security has become an increasingly important element of any AI system, since these systems usually depend on large quantities of data to make decisions. That data may include sensitive information such as financial records, health records, and personal details. Protecting it helps prevent unauthorized access and misuse. Embedding security in AI systems also strengthens the security of the computer networks and data they rely on.

Description of how lack of security can lead to harmful outcomes

Data breaches have serious consequences: they compromise the data of individuals and harm the organization whose data is breached. Beyond the reputational and financial damage, data breaches can also erode trust in AI systems and in their capacity to safeguard sensitive information. Because machine learning (ML) and AI systems are designed to yield outputs after consuming and analyzing large quantities of data, they face unique security challenges. MIT Sloan summarized these challenges by organizing vulnerabilities into five categories: system, human factor, data, software, and communication risks. Failing to address these challenges leaves AI systems open to attacks that manipulate machine learning models. Such attacks can serve various malicious objectives, including causing damage, hiding something, and degrading trust in a system. Attacks on the model include:

  • Manipulation and data poisoning attacks: Data poisoning involves tampering with the raw data consumed by the AI/ML model. A critical issue with data manipulation is that it is difficult to correct AI/ML models after erroneous inputs have been identified (a minimal sketch of this attack follows the list).
  • Model poisoning attacks: By tampering with the algorithm itself, attackers can influence the decisions it makes.
  • Model disclosure attacks: Occur when an attacker feeds carefully crafted inputs to the algorithm and examines the resulting outputs, gradually inferring how the model behaves.
  • Stealing models: Model theft can allow attackers to recover sensitive data used to train the model, exploit the model for financial gain, or influence its decisions. For example, a bad actor who learns which factors cause something to be flagged as malicious can find a way to avoid those markers.
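To make the data-poisoning risk concrete, here is a minimal illustrative sketch, not taken from any real incident: it flips the labels of a fraction of training samples for a toy scikit-learn classifier and shows how test accuracy degrades as the poisoned fraction grows. The dataset and model are synthetic placeholders.

```python
# Minimal sketch of label-flipping data poisoning (assumes numpy and
# scikit-learn are installed; all data here is synthetic).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a random fraction of training samples."""
    y = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y[idx] = 1 - y[idx]  # binary labels: flip 0 <-> 1
    return y

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poisoned {fraction:.0%}: test accuracy = "
          f"{model.score(X_test, y_test):.3f}")
```

The same idea scales up: an attacker who can inject even a modest fraction of mislabeled or crafted records into a training pipeline can measurably shift the model's behavior.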

Examples of harmful outcomes resulting from lack of security in AI systems

Knight Capital: One example of a harmful outcome is the story of Knight Capital, which lost $460 million within 45 minutes because of a bug in the firm’s trading algorithm. The loss pushed the company to the verge of bankruptcy, and it was acquired by a rival shortly afterwards. While the failure was unrelated to adversarial behavior in this specific case, it vividly illustrates the potential impact of an algorithmic error and of failing to embed security into an AI system.

Tesla Autonomous Cars: Attackers may seek to cause damage by disrupting an AI system’s operation. For example, an attack can make an autonomous vehicle disregard stop signs by manipulating its AI systems so that it incorrectly recognizes a stop sign as a different symbol or sign. Failing to embed adequate safety and security measures in AI can therefore wreak havoc. Tesla, for instance, recalled more than 50,000 cars in the US because the AI behind its self-driving feature was too aggressive and allowed vehicles to roll past stop signs. As a result, the US National Highway Traffic Safety Administration (NHTSA) issued a safety recall for vehicles running the 2020.40.10 firmware version.
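The stop-sign scenario above is an instance of an adversarial example: a small, deliberately crafted perturbation that flips a model’s prediction. The sketch below uses the well-known Fast Gradient Sign Method (FGSM) on a toy PyTorch model; the model, image, and label are hypothetical stand-ins, not any vendor’s actual system.

```python
# Minimal FGSM sketch (assumes PyTorch; the model and inputs are toy
# placeholders, not a real perception system).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so as to increase the model's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # stand-in for a camera frame
label = torch.tensor([3])          # stand-in for the "stop sign" class
perturbed = fgsm_attack(model, image, label)
print(float((perturbed - image).abs().max()))  # perturbation bounded by epsilon
```

The perturbation is bounded by `epsilon`, so to a human the input looks essentially unchanged, yet the model’s prediction can shift; defenses such as adversarial training aim to close exactly this gap.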

Recommendations for ensuring security in AI systems

Encryption: Encryption entails converting data into a coded form to deter unauthorized access. By encrypting data, AI systems can ensure that even in the event of a breach, the data remains unreadable to anyone without the appropriate decryption key. Both data at rest and data in transit can be encrypted, providing multiple layers of protection.
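As a concrete illustration, the sketch below encrypts a record at rest using the Fernet recipe from Python’s `cryptography` package. The record contents are hypothetical, and a real deployment would load keys from a key-management service rather than generating them inline.

```python
# Minimal sketch of encrypting data at rest with Fernet
# (pip install cryptography). Key storage and rotation are out of
# scope here; in production, load the key from a KMS or vault.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustration only: never store keys with the data
fernet = Fernet(key)

record = b'{"patient_id": 1234, "diagnosis": "..."}'  # hypothetical sensitive record
token = fernet.encrypt(record)   # ciphertext is safe to store at rest

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(token) == record
```

Encrypting data in transit follows the same principle but is typically handled at the transport layer (e.g., TLS) rather than per record.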

AI Security Compliance Policy Solution: AI security compliance should be established as a major public policy mechanism to safeguard against AI attacks. Every country should create compliance programs that reduce the vulnerability of AI systems to attack and mitigate the adverse effects of successful attacks. These programs can encourage stakeholders to adopt best practices that secure their systems and harden them against AI attacks.

Implementation-stage compliance requirements: These requirements focus on ensuring that stakeholders take appropriate precautionary steps as they develop and deploy their AI systems. They include securing soft assets and improving detection systems that can provide warnings while attacks are being formulated. They are as follows:

  • Securing soft assets: AI system operators need to secure the assets that can be used to craft AI attacks, including models and datasets, and improve the cybersecurity of the systems where those assets are stored. Critical applications that deploy AI should adopt a set of best practices to harden the security of these assets (a minimal integrity-check sketch follows this list).
  • Improve detection of intrusions and attack formulation: Intrusion detection systems should be improved to better detect when assets have been compromised and to recognize behavioral patterns that indicate an adversary formulating an attack. Policymakers should encourage better intrusion detection for the systems that hold these critical assets. Once a system operator detects an intrusion that could jeopardize the system, or an attack in development, they must switch immediately into mitigation mode; operators should therefore have a predetermined plan specifying exactly which actions to take if the system is compromised, and put that plan into action at once.
  • Develop response plans: Stakeholders should determine how AI attacks are most likely to be used against their systems, evaluate the existing threats, and develop response plans to mitigate the impact of each scenario.
  • Vulnerability mapping: It is vital to create maps showing how the compromise of one system or asset affects all other AI systems, through rapid, shared vulnerability mapping.
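To make the “securing soft assets” recommendation above concrete, here is a minimal sketch using only the Python standard library. It pins SHA-256 digests for model and dataset files at release time and re-checks them at deploy time, so tampering is detected before the predetermined response plan needs to be triggered. The manifest format and file paths are hypothetical.

```python
# Minimal sketch: detecting tampering with "soft assets" (model files,
# datasets) by pinning and re-checking SHA-256 digests.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(asset_paths, manifest_path: Path) -> None:
    """Record trusted digests at release time."""
    manifest = {str(p): sha256_of(Path(p)) for p in asset_paths}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_assets(manifest_path: Path) -> list[str]:
    """Return the assets whose current digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [p for p, expected in manifest.items()
            if sha256_of(Path(p)) != expected]

# Hypothetical deploy-time check:
# tampered = verify_assets(Path("assets.manifest.json"))
# if tampered: trigger the predetermined incident-response plan
```

A check like this does not prevent an intrusion, but it gives operators the early, concrete signal the detection and response recommendations above depend on.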