Ethical Artificial Intelligence Frameworks — Security and safety

Law and Ethics in Tech
6 min read · Aug 19, 2023


This ethical framework addresses the critical aspects of safety and security in artificial intelligence systems. Such systems are meant to enhance human well-being, as emphasised in the Montreal Declaration for Responsible AI. Ensuring that AI systems do not cause harm is of utmost importance, particularly when they are used in safety-critical domains. The Beijing AI Principles, for their part, propose the following:

Control Risks: Continuous efforts should be made to improve the maturity, robustness, reliability, and controllability of AI systems, so as to ensure the security for the data, the safety and security for the AI system itself, and the safety for the external environment where the AI system deploys.

The Asilomar AI Principles assert that AI systems should prioritise safety and security throughout their entire lifespan, with verifiability wherever necessary and feasible. This requires comprehensive evaluation and scrutiny of systems incorporating emerging technologies, from initial concept and design to testing, deployment, and ongoing maintenance.

As for model robustness, according to the Vector Institute, it refers to the consistency of a model’s performance when exposed to new data compared with the data it was trained on. A model should exhibit minimal performance deviation. Robustness is crucial for several reasons. Firstly, trust in a tool relies on reliable performance, and unpredictable behaviour can erode that trust. Secondly, deviations from expected performance can signal important issues such as security attacks, unaccounted-for phenomena, biases, or significant changes in the data.
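To make this concrete, here is a minimal sketch of such a robustness check, assuming a toy scikit-learn classifier and synthetic data: it compares the model’s accuracy on the distribution it was trained on with its accuracy on newer, shifted data and flags the deviation. The model, datasets, and tolerance threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal robustness check: compare performance on training-distribution data
# with performance on new, shifted data. All data and thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Data the model was trained on: the label depends on the first two features.
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# "New" data in which an unaccounted-for factor also drives the outcome,
# simulating the kind of drift a robustness check should surface.
X_new = rng.normal(size=(500, 5))
y_new = (X_new[:, 0] + X_new[:, 1] + rng.normal(scale=1.5, size=500) > 0).astype(int)

acc_train = accuracy_score(y_train, model.predict(X_train))
acc_new = accuracy_score(y_new, model.predict(X_new))
deviation = acc_train - acc_new

MAX_DEVIATION = 0.05  # illustrative tolerance for acceptable performance drift
print(f"accuracy on training-like data: {acc_train:.3f}")
print(f"accuracy on new data:           {acc_new:.3f}")
if deviation > MAX_DEVIATION:
    print("WARNING: performance deviates from the baseline; investigate before trusting outputs.")
```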

Credit: DLR (CC BY-NC-ND 3.0) https://www.dlr.de/en/ki

Consider the evolving landscape of technologies like self-driving vehicles, smartphones, and IoT devices, which bring great benefits but also pose safety and security challenges. For instance, the potential consequences of a hacked self-driving vehicle or compromised smart speakers raise concerns about the protection of individuals and their privacy. With each new technological advancement, similar risks must be addressed and mitigated to safeguard users and society as a whole. As an example, researchers recently found that an AI system can identify which keys are being pressed on a laptop from their sound alone, and use this to guess passwords with an accuracy above 90%. For more details, see https://amp.theguardian.com/technology/2023/aug/08/ai-could-identify-passwords-by-sound-of-keys-being-pressed-study-suggests

Sources of safety and security issues:

  • Abnormal system behaviour: deviations from a system’s expected operations are a source of risk. Indicators of compromise (IOCs) are traces that may indicate a system has been compromised, such as the presence of unauthorised software, unknown network ports or protocols, and unauthorised account usage. Such behaviours can point to security threats or negligence, but abnormal behaviour should not be treated as definitive proof of an attack; critical thinking is required to assess the risk it poses to the organisation.
  • Bad actors: bad actors are intentionally malicious, as opposed to accidental or unintentional threats. Understanding their nature, characteristics, motives, and targets helps in assessing the risk to security and safety. They include script kiddies, professional hackers, cyber criminals, hacktivists, cyber terrorists, and state-sponsored hackers. By profiling bad actors, we can better understand the specific threats they pose and the potential impact of their actions, and strengthen security measures accordingly.
  • Adversarial machine learning: malicious inputs are crafted to manipulate the decisions made by AI models, either by corrupting the training data (poisoning) or by fooling an already trained model at inference time (evasion). Adversarial attacks have various goals, such as degrading performance, evading detection, or compromising safety, and their anatomy can be described by the attacker’s influence on the model, the security violation sought, and the attack specifics. Typical examples include evasion attacks, where malicious content is disguised to slip past an anti-spam filter, and poisoning attacks, where training data is manipulated to compromise the model’s decision-making. Adversarial attacks pose a significant risk to safety and security, and detecting and mitigating them can be challenging (a minimal sketch of an evasion-style attack follows this list).
  • Cyber attacks: there are several types of attack, such as denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, which disrupt or overload servers and render services inaccessible. Malware, including viruses, worms, Trojan horses, spyware, and ransomware, is another prevalent form of attack that can compromise data-driven systems. Snooping, or passive wiretapping, lets attackers monitor networks and gather sensitive information. War driving involves searching for and exploiting Wi-Fi networks to gain unauthorised access, while zero-day exploits target vulnerabilities that are unknown to, or not yet addressed by, developers. These attacks present significant risks to security and require proactive measures to mitigate their impact.
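As a rough illustration of the evasion idea above, the sketch below applies a fast-gradient-sign-style perturbation to an input of a toy logistic-regression detector; the weights, the input, the `predict_proba` helper, and the perturbation budget are all illustrative assumptions, not a real model or a real attack.

```python
# Minimal sketch of an evasion-style adversarial example (fast gradient sign method)
# against a toy logistic-regression classifier. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" detector: weight vector w and bias b of a logistic regression.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x: np.ndarray) -> float:
    """Probability that input x is flagged as malicious (class 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A malicious input the detector currently flags (true label y = 1).
# For the demo it is built to be correlated with w, so it scores well above 0.5.
x = 0.1 * w
y = 1

p = predict_proba(x)

# The gradient of the cross-entropy loss with respect to the input is (p - y) * w.
# Stepping along the sign of that gradient increases the loss, nudging the input
# toward being misclassified as benign while bounding each per-feature change.
epsilon = 0.25                        # illustrative per-feature perturbation budget
x_adv = x + epsilon * np.sign((p - y) * w)

print(f"detector score before attack: {p:.3f}")
print(f"detector score after attack:  {predict_proba(x_adv):.3f}")
```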

The impact of AI systems on society is evident in cases like Facebook’s handling of fake news, where the platform’s failure to curb the dissemination of harmful content contributed to the genocide in Myanmar. Social network platforms aim to captivate users by catering to their interests, which generates advertising revenue through AI-driven algorithms. However, if these platforms prioritise users’ personal beliefs over accuracy, they risk exacerbating the spread of misinformation. To avoid such pitfalls, robust controls must be established to ensure the responsible and ethical use of artificial intelligence.

Mitigating measures (industry best practice)

  • Critical AI systems standards: the focus here is on mitigating safety and security risks in critical AI systems, i.e. those with a significant impact on health and safety. Security by design is emphasised, so that protections are built in from the earliest stages of the project life cycle. Rigorous standards for critical AI systems provide guidance, best practices, and tools for securing the system, maintaining compliance, and keeping users safe. Because critical AI systems involve decisions of great consequence, standards may also address when to involve human intervention and how control is handed over from the AI to a human. NIST’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) can serve as a good guide for implementing such a framework.
  • Baseline AI system behaviour: establishing baselines for AI system behaviour is an effective way to mitigate safety and security risks. A baseline represents the normal operating state of a system and serves as a reference point: it allows the system to be rolled back to its ideal configuration after a compromise, and it helps identify deviations from expected behaviour so that potential security issues are detected promptly. Common baseline indicators include system versions, network bandwidth, computation time, user activity, and performance metrics. Systems should also be equipped to respond to baseline disruptions by alerting the relevant personnel and isolating the affected component from further harm (see the baseline-monitoring sketch after this list).
  • Response team: security incidents are common in the business world, and setting up an incident response team is a proactive way to deal with them effectively. These teams, often known as CSIRTs (Computer Security Incident Response Teams), consist of cybersecurity experts trained to identify and respond to incidents such as network intrusions and data breaches. A CSIRT may include managers, investigators, security specialists, help desk staff, and crisis communicators, among others. Best practices for establishing a CSIRT include regular retraining of team members, tabletop exercises that simulate emergency scenarios, effective communication within the team, and adherence to established incident response processes. Organizations that cannot afford an in-house CSIRT may outsource this function to external contractors, but it is crucial to designate a team early so that adverse events can be handled promptly.
  • Protection of data: the security of data is a critical concern for organizations, particularly those involved in data-driven technologies. Protecting data requires strategies that address confidentiality, integrity, and availability (the CIA triad). Encryption is a key method for ensuring the confidentiality of data at rest, alongside access control and properly secured storage environments, while baselines of system behaviour help identify deviations and preserve data integrity. For data in transit, protocols such as SSL/TLS and SSH provide confidentiality and message integrity, and maintaining availability requires load balancing, redundancy, and protection against DDoS attacks. A comprehensive security architecture considers both data at rest and data in transit (a small encryption sketch also follows this list).
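As a rough illustration of the baselining idea above, the following sketch records expected values for a few operational indicators and flags readings that drift outside a tolerance. The metric names, expected values, and tolerances are illustrative assumptions, not a standard; in practice the readings would come from a monitoring system and the alert would notify the response team.

```python
# Minimal sketch of baselining an AI system's behaviour: record a baseline for a
# few operational indicators, then flag readings that exceed a tolerance.
from dataclasses import dataclass

@dataclass
class Baseline:
    name: str
    expected: float
    tolerance: float  # allowed relative deviation, e.g. 0.25 = 25%

    def check(self, observed: float) -> bool:
        """Return True if the observed value stays within tolerance of the baseline."""
        return abs(observed - self.expected) <= self.tolerance * self.expected

baselines = [
    Baseline("requests_per_minute", expected=1200, tolerance=0.30),
    Baseline("median_latency_ms", expected=85, tolerance=0.25),
    Baseline("model_accuracy", expected=0.94, tolerance=0.05),
]

# Hypothetical current readings, e.g. pulled from a monitoring system.
observed = {"requests_per_minute": 4100, "median_latency_ms": 90, "model_accuracy": 0.93}

for b in baselines:
    if not b.check(observed[b.name]):
        # In a real system this would alert the relevant personnel and possibly
        # isolate the component; here we simply print the deviation.
        print(f"ALERT: {b.name}={observed[b.name]} is outside baseline {b.expected} (tolerance {b.tolerance:.0%})")
```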
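And as a small illustration of protecting data confidentiality and integrity at rest, the sketch below encrypts a record with the third-party `cryptography` package (Fernet, an authenticated symmetric-encryption recipe). Key management is assumed to happen elsewhere, for example in a secrets manager; the record itself is a made-up example.

```python
# Minimal sketch of confidentiality for data at rest with symmetric encryption,
# using the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, load the key from a secrets manager
fernet = Fernet(key)

record = b'{"user_id": 42, "diagnosis": "sensitive"}'
token = fernet.encrypt(record)     # ciphertext that is safe to write to storage
print(token[:16], b"...")

restored = fernet.decrypt(token)   # integrity is verified on decryption
assert restored == record
```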

Once again, safety and security are paramount when it comes to artificial intelligence, focusing on the principle of non-maleficence. Developers and designers of AI systems must go beyond assessing foreseeable risks and employ their creativity and imagination, as the principle of capability caution acknowledges the unpredictability of the future. It is crucial to anticipate potential risks and challenges that may arise, ensuring that AI systems are designed to mitigate harm and promote the well-being of individuals and society. By embracing a proactive and forward-thinking approach, developers can help build a safer and more trustworthy AI ecosystem.

To my readers: how do you handle hallucinations in AI systems? Do you see any other mitigating measures that you would like to share with me?


Written by Law and Ethics in Tech

Private lab specialising in emerging tech (AI & Blockchain). Ensuring ethical practices and promoting responsible innovation.
