
Explainable and Responsible AI

This literature review was written by Eden Kollçinaku and has been submitted to the University of New York Tirana.

Abstract

This review examines two closely related branches of artificial intelligence, Explainable Artificial Intelligence (XAI) and Responsible AI (RAI): what they entail, why they are significant, and what difficulties arise when applying them. The growing intricacy of AI systems makes transparency and ethics essential. The review surveys methods in XAI and their evaluation, the ethical frameworks governing RAI, and the relationship between the two fields. Among the greatest challenges acknowledged are the lack of common evaluation metrics and the difficulty of providing readable explanations while maintaining adequate security. The paper also describes how companies are practicing responsible AI and explores examples across different fields. By highlighting existing and future research directions, this review suggests paths toward more powerful, transparent, and ethical AI systems.

1. Introduction

1.1 Overview of AI

AI has become both a phenomenon and a revolution in several fields; it constitutes a complex spectrum of technologies that allow computers to solve problems once requiring human skills (Barredo Arrieta et al., 2020; Goebel et al., 2018; Gohel et al., 2021). Such competencies include the capacity to learn from experience, to discern patterns and relationships, to reason and make decisions, and to analyze challenging problems (Gohel et al., 2021). The field of AI has experienced near-exponential growth in recent years, owing especially to innovations in the subdiscipline of machine learning (ML) and, more specifically, in the branch of deep learning (DL).

As developers of AI systems work with ever more automated pipelines and increasingly rely on complex algorithms and petabytes of data, questions about the interpretability and regulation of these systems have become paramount in academic and public discussion (Barredo Arrieta et al., 2020; Peters, 2023). The recent introduction of complex models whose inferential choices are difficult to understand and interpret, even by the designers who develop them, has raised many questions about the effects of employing such systems in important sectors.

1.2 Explainable Artificial Intelligence: Unveiling the “Black Box”

Against this backdrop, Explainable AI (XAI) has emerged as an important subject of study (Gohel et al., 2021). XAI addresses the challenge of making artificial intelligence systems more understandable to end users and of helping them grasp the rationale behind AI-driven decisions (Barredo Arrieta et al., 2020). The focus on explainability is not merely an academic concern but a necessity, as AI algorithms increasingly affect decisions made about individuals and society.

1.3 Responsible AI: A Framework for Ethical AI

In parallel, Responsible AI has emerged as a broader concept that takes into account the additional ethical dimensions and societal consequences of artificial intelligence (AI) solutions (Benjamins et al., 2019; Radanliev & Santos, 2023). Its objective is to ensure that the design and use of AI systems promote value, respect rights, and benefit society (Barredo Arrieta et al., 2020; Cheng et al., 2021; Radanliev & Santos, 2023). Explainability is therefore a key enabler of Responsible AI, because it allows an AI system to be evaluated for bias, consequences, fairness, accountability, and alignment with stated principles (Barredo Arrieta et al., 2020; Benjamins et al., 2019; Peters, 2023).

1.4 Purpose of the Review

This paper reviews Explainable AI and Responsible AI in order to identify the nature of, and demand for, explainability in artificial intelligence systems. Specifically, it will address:

1. The reasons behind the rising significance of XAI and Responsible AI.

2. The difficulty of defining 'explainability' in context and of identifying or measuring it.

3. The different types of stakeholders involved in this process and how their needs for explanation from AI may differ.

4. A survey of the methods and techniques deployed to realize XAI.

5. The opportunities and risks associated with extending XAI and the concept of Responsible AI.

In providing this review, the aim is to advance the discussion on XAI and Responsible AI through an in-depth analysis of their current status. Ultimately, the objective is a future in which AI systems are not only powerful and relevant but also explainable, well governed, and aligned with human and societal objectives.

2. Theoretical Foundations, Importance, and Challenges in Explainable AI (XAI)

2.1 Theoretical Foundations of XAI

Explainable AI (XAI) has become a major research topic owing to the complexity of modern deep learning systems, which have been described as 'black boxes' (Gohel et al., 2021). Although there is a long history of developing AI systems with explainability in mind (for example, the reasoning architectures for explanation built into early expert systems such as MYCIN; Barredo Arrieta et al., 2020), the recent advance of AI solutions has brought the problem back into the spotlight.

One main concern of XAI research is the absence of an axiomatic formalization of 'explainability'. The same holds for what constitutes a 'good' or 'sufficient' explanation of an AI system's decision making, underlining the complexity of the problem (Benjamins et al., 2019).

Several influential works have shaped the foundations of XAI:

The DARPA XAI program, which began in 2016, made a major contribution to promoting the agenda, on the premise that explainability is essential where AI operates in important areas of activity (Goebel et al., 2018).

In the now-famous paper "Why Should I Trust You?", Ribeiro et al. (2016) first proposed LIME (Local Interpretable Model-Agnostic Explanations), a method for interpreting the output of any classifier through local interpretability (Barredo Arrieta et al., 2020).

There is growing recognition that creating XAI requires integrating knowledge from the social sciences, including psychology and cognitive science (Barredo Arrieta et al., 2020). This translates into the interdisciplinary goal of developing XAI techniques that align with how human users explain, understand, and trust.

2.2 Importance of Explainability in AI

The importance of XAI extends beyond technical considerations, encompassing ethical, legal, and societal implications:

Trust and Transparency: XAI enhances the credibility of AI systems by explaining how a decision was reached, particularly in areas where decisions affect individuals (Peters, 2023).

Ethical and Responsible AI: Explainability is one of the fundamental principles for building and deploying Responsible AI systems. It makes it possible to detect potential bias early, helps guarantee fairness, and supports accountability for AI systems (Cheng et al., 2021).

Regulatory Compliance: New rules, including the GDPR, have introduced a right to explanation for decisions made by AI systems, making XAI essential for avoiding violations (Radanliev & Santos, 2023).

Improved Human-AI Collaboration: XAI can improve decision making by offering users insights that help them better handle a problem. It can also feed knowledge back into the learning process by revealing abstractions in the data, and it enables system optimization by pointing out weaknesses in models and enhancing their precision (Barredo Arrieta et al., 2020).

2.3 Challenges in XAI

Despite its importance, XAI faces several significant challenges:

Lack of Standardized Metrics: Because there is currently no common consensus on what 'explainability' means or how to measure it, evaluating and comparing XAI methods is difficult (Benjamins et al., 2019).

The "Transparency Paradox": Transparency must be balanced against other requirements such as privacy, security, and robustness. Some authors note that high levels of transparency may increase an AI system's exposure to attacks and to the misuse of its explanations (Cheng et al., 2021).

Complexity of Explanations: Finding the right level of complexity for an explanation is essential: it must be simple enough for the recipient to grasp yet detailed enough to be informative. Explanations should be tailored to each stakeholder group's requirements and knowledge level while remaining consistent with what the AI system is actually doing (Goebel et al., 2018).

Dynamic Nature of AI Systems: XAI is further complicated by the fact that most ML systems are continuously updated and improved, requiring new explanations as they change (Benjamins et al., 2019).

Bridging the "Responsible AI Gap" in Industry: A common issue is how organizations and practitioners can translate ethical standards and XAI best practices into practical steps for the accountable use of AI applications (Cheng et al., 2021).

Despite XAI's great promise for improving the transparency, auditability, and credibility of AI systems, effective responses to these complex challenges are equally important for its successful implementation and for the sound further evolution of AI infrastructure.

3. Socially Responsible AI Algorithms

3.1 Defining Social Responsibility in AI

AI has long been present in our lives. One of the challenges accompanying its growth into ever more advanced technologies and applications is to define a purpose and an appropriate direction for developing and using new forms of AI so that they contribute to the development of both the individual and society. Socially responsible AI is thus a concept tied to context rather than to the purely technical level of system design: it is an approach that considers the impact of AI on individuals, communities, and society (Cheng et al., 2021; Radanliev & Santos, 2023).

Key considerations for socially responsible AI systems include:

Human Values at the Forefront: Embedding ethical principles and human values throughout the entire process of developing AI.

Addressing Societal Expectations: The task involves both advocating for the positive impact of AI on society and culture and addressing its negative aspects.

Focus on Marginalized Groups: Special emphasis is placed on the effects of AI deployment on the most vulnerable populations, including minorities and other marginalized groups, who are disproportionately affected by models' negative effects and unnoticed biases.

3.2 Ethical Principles

Three core ethical requirements are used to establish socially responsible artificial intelligence: fairness, privacy, and accountability.

Fairness: Preventing AI systems from both generating and perpetuating unfair social risks (Cheng et al., 2021). This involves addressing:

• Data Bias: Minimizing bias that may exist in the training data.

• Algorithmic Bias: Minimizing the bias that AI algorithms themselves introduce into the system.

• Fairness Metrics: Adopting evaluation metrics that give insight into fairness (a minimal sketch of one such metric follows this list).
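
To make the metrics point concrete, the snippet below computes the demographic parity difference, one widely used fairness measure: the gap in positive-prediction rates between two groups. This is a minimal illustrative sketch in Python; the function name and toy data are our own and are not drawn from the cited works.

```python
# Minimal sketch of one common fairness metric: demographic parity
# difference, the gap in positive-prediction rates between groups.
# Function name and toy data are illustrative, not from the cited works.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in P(prediction = 1) between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive rate, group 1
    return abs(rate_a - rate_b)

# Toy example: the model approves group 1 far more often than group 0.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.75
```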

Privacy: Protecting individual privacy in the context of data-driven AI (Radanliev & Santos, 2023). Key considerations include:

• Data Minimization: Gathering only relevant information.

• Data Security: Strengthening protective measures.

• Transparency and Control: Enabling individual users to know how their data is being used and to exercise control over it.

• Privacy-Preserving Techniques: Applying concepts such as differential privacy (see the sketch after this list).
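
As a concrete illustration of a privacy-preserving technique, the sketch below implements the Laplace mechanism, the textbook differential-privacy primitive: calibrated noise, scaled to the query's sensitivity divided by the privacy budget epsilon, is added to a statistic before release. The function and values are illustrative assumptions, not taken from the cited literature.

```python
# Minimal sketch of the Laplace mechanism, a textbook differential-privacy
# primitive: noise scaled to sensitivity/epsilon is added to a statistic
# before release. Names and values here are illustrative only.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1. Smaller epsilon = stronger privacy.
true_count = 42
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```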

Accountability: Allocating responsibility for AI systems so that accountability gaps are avoided (Barredo Arrieta et al., 2020). This involves:

• Transparency: Explaining how an AI-based decision-making system works.

• Auditability: Allowing third parties to assess AI systems.

• Responsibility Attribution: Establishing who is responsible for decisions made by AI.

3.3 Challenges and Trade-offs

Implementing socially responsible AI algorithms presents several challenges:

Balancing Performance with Responsibility: Balancing the effectiveness of AI solutions against the demands that responsibility requirements impose can be challenging (Radanliev & Santos, 2023).

The Transparency Paradox: Greater openness often produces desirable effects but can undermine security or privacy (Radanliev & Santos, 2023).

Complexity and Explainability: A persistent issue is achieving the right level of model complexity while maintaining interpretability, especially for state-of-the-art architectures such as deep neural networks (Barredo Arrieta et al., 2020).

Bridging the Gap Between Principles and Practice: The practical implementation of ethical norms remains an intricate challenge for most organizations (Cheng et al., 2021).

Evolving Nature of AI: The field's rapid progress requires ethical benchmarks and regulations to be refreshed periodically (Cheng et al., 2021).

Making AI socially responsible requires extending ethical principles to every stage of the AI life cycle. It demands collaboration among AI scientists, policymakers, industry practitioners, and the public so that AI can become a force for good in society. The challenges identified here must be resolved alongside learning how to work together toward the goal of governing AI and advancing the field properly.

4. Technical Methods for Explainable AI

Explainable AI (XAI) research has produced numerous technical approaches to increasing the understandability of complex AI systems. Broadly, these methods divide into post-hoc and intrinsic explanation approaches.

4.1 Post-hoc Explanation Methods

Post-hoc methods are applied after model training to give insight into the model's decision-making process without requiring the model itself to be transparent end to end (Barredo Arrieta et al., 2020). These techniques are especially useful for deep learning models, or any model whose decisions cannot easily be explained. Key post-hoc approaches include:

• Example-Based Explanations: This approach selects training examples most similar to the input being explained and displays them to users to enhance their understanding of how the model works (Barredo Arrieta et al., 2020).

• Local Explanations: These methods explore model behavior in the neighborhood of a given prediction. LIME (Local Interpretable Model-agnostic Explanations) is a commonly used method that fits a simple, easily interpretable surrogate model around a specific input instance to mimic the black-box model's behavior locally (Gohel et al., 2021); a minimal sketch appears after this list.

• Feature Relevance Explanations: These methods quantify the importance of individual features, or the impact they have on a model's decisions, in numerical terms. Strategies such as SHAP (SHapley Additive exPlanations) provide a full breakdown of each feature's contribution to particular outputs (Cheng et al., 2021).

• Visualization Techniques: Saliency maps, partial dependence plots, and decision-boundary plots, for instance, help convey the relationships between inputs and model outputs to humans (Barredo Arrieta et al., 2020).
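
To ground the local-explanation idea, here is a minimal sketch of LIME applied to a scikit-learn classifier. It assumes the open-source `lime` and `scikit-learn` packages; the dataset and model are illustrative choices, not a setup from the cited papers.

```python
# Minimal sketch: LIME on a scikit-learn classifier. Assumes the
# open-source `lime` and `scikit-learn` packages; the dataset and
# model are illustrative choices.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# Train an opaque ("black box") model.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a simple surrogate model in the neighborhood of one instance.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# Each pair is (human-readable feature condition, local weight).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```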

4.2 Intrinsic Explainability

Because explainability is built into the model itself rather than supplied separately, the intrinsic explainability approach is widely used; it aims at creating models for which interpretability is an inherent feature, emphasizing transparency in the model's structure and functionality (Cheng et al., 2021). Examples include:

• Linear/Logistic Regression: These models assume a linear association between the input features and the output predictions, and their parameters are easily understood as the importance of each input feature (Barredo Arrieta et al., 2020).

• Decision Trees: Decision trees split the data according to a set of rules, producing a comprehensible tree whose branches run from the root to terminal nodes known as leaves, with each path signifying a decision rule (Gohel et al., 2021); a minimal sketch follows this list.

• Rule-Based Learners: These models represent knowledge in if-then format, rendering their decision-making mechanism completely transparent (Cheng et al., 2021).
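
As a brief illustration of intrinsic explainability, the sketch below trains a shallow decision tree and prints its decision rules directly, with no post-hoc machinery needed. It assumes scikit-learn; the dataset is an illustrative choice.

```python
# Minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose if-then rules can be printed directly
# (scikit-learn assumed; dataset choice is illustrative).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Every root-to-leaf path is a readable decision rule.
print(export_text(tree, feature_names=list(data.feature_names)))
```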

4.3 Model Transparency vs. Interpretability

While related, transparency and interpretability are distinct concepts in XAI:

Transparency, the property of being able to see why the model made a particular decision, is commonly achieved through model architectures with fewer layers and fewer parameters.

Interpretability, by contrast, refers to the degree to which a human can understand the relationship between the inputs given to the model and the outputs it produces.

Both properties become even more important when the stakes are high, that is, when prediction errors can have grave consequences, as in healthcare, finance, and criminal justice (Benjamins et al., 2019).

4.4 Current Tools and Technologies

Several tools and technologies have been developed to support XAI efforts:

• LIME (Local Interpretable Model-agnostic Explanations): A post-hoc technique for black-box models that approximates an individual prediction with an easily understandable model fitted in the vicinity of that instance.

• SHAP (SHapley Additive exPlanations): An approach for interpreting individual predictions that uses game-theoretic ideas to calculate each feature's contribution (Cheng et al., 2021).

• Counterfactual Explanations: These identify the changes in input features that would alter the prediction, which can be useful guidance for users (Gohel et al., 2021).

• Visualization Tools: Libraries offering saliency maps, partial dependence plots, and decision-boundary visualizations, among others.

• Explainable AI Platforms: Platforms that bundle programs for developing, building, and deploying AI models, together with 'single-click' explanation of those models using a range of techniques and tools.

The field of XAI thus offers a wide variety of technical approaches for improving the understanding and explainability of AI solutions. As AI develops and penetrates ever more spheres, these approaches will evolve alongside it, serving as enablers of trust in the creations of AI architects and designers as their products are launched into society. A brief sketch of one such tool, SHAP, follows.
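
Below is a minimal sketch of SHAP in use, assuming the open-source `shap` and `scikit-learn` packages. TreeExplainer is used because it computes Shapley values efficiently for tree ensembles; the dataset and model choice are illustrative only.

```python
# Minimal sketch: SHAP feature attributions for a tree ensemble.
# Assumes the open-source `shap` and `scikit-learn` packages; the
# dataset and model are illustrative choices.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])

# One additive contribution per feature; together with the base value
# they sum to the model's raw output for this instance.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.4f}")
```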

5. Responsible AI by Design

5.1 Conceptual Frameworks for Responsible AI

The concept of 'Ethics by Design' means not merely discussing ethical ideas during a product's development but incorporating them into each phase of generating an AI tool (Radanliev & Santos, 2023). Key components of this approach include:

• Transparency and Explainability: Developing AI models that address explainability needs on behalf of AI consumers and investors.

• Fairness and Non-discrimination: Ensuring AI systems avoid enshrining new biases and treat individuals and groups in society fairly.

• Privacy and Security: Protecting individual privacy and preventing the disclosure of sensitive information during the development of artificial intelligence systems.

• Accountability and Responsibility: Establishing clear responsibility for what AI systems do.

Underlying these components is a basic commitment to keeping people at the center. This involves:

• Engaging Stakeholders: Involving developers, users, and affected communities so that multiple viewpoints are considered during the design and implementation of AI systems.

• Promoting AI Literacy: Educating the public about the capabilities, limitations, and potential impacts of AI to support informed decision-making.

5.2 Practical Application: How Companies Adopt Responsible AI

Companies are increasingly implementing responsible AI principles (Benjamins et al., 2019) through different practical measures:

  • Developing methods for responsible AI, such as Telefónica’s “Responsible AI by Design” initiative.
  • Establishing ethical review boards to assess AI systems’ impacts.
  • Running training programs for employees on responsible AI principles and practices.
  • Using tools to mitigate bias in AI systems.
  • Providing transparency about AI systems’ decision-making processes to users.
  • Creating mechanisms for users to provide feedback on AI systems.

5.3 Case Studies

While comprehensive case studies with specific outcomes are limited, several domains are highlighted as areas where responsible AI principles are being applied (Radanliev & Santos, 2023):

Healthcare: Advantages of using AI in patient care have been identified, alongside concerns about patient privacy, data consent, and the possibility of AI bias in diagnosis.

Finance: Applications of AI in financial markets, such as automated trading or fraud detection, raise the bar for explainability while also demanding the elimination of algorithmic bias.

Criminal Justice: The use of AI here, especially in applications such as predictive policing and risk assessment, has been shown to incorporate bias and to risk violating people's rights.

Defense: The ethics of AI in defense is a prominent concern, especially regarding lethal autonomous weapon systems, whose use is widely held to fall under the principles of international law.

Human Resources: The use of AI in recruitment and performance management raises privacy and bias concerns, given the weight of employment-related decisions.

All these examples point to the need for a proactive, iterative approach to responsible AI deployment, with ethical considerations integrated throughout every stage of the AI life cycle.

Responsible AI by Design is an important new paradigm for AI system engineering. When ethical issues are addressed at the start of the design process and methods of responsibility are applied throughout development, businesses can strive to deliver intelligent systems that are both technologically sound and faithful to the ideals of individuals and cultures. The further improvement and evolution of these approaches and their uses is likely to remain a core direction in the evolution of AI.

6. Combining Explainability and Responsibility

6.1 Intersection of XAI and Responsible AI

Explainable AI (XAI) and Responsible AI (RAI) are closely interconnected concepts in the field (Benjamins et al., 2019). While RAI looks into a broader set of principles and practices aimed at ensuring AI’s ethical and responsible design, development, and deployment, XAI focuses specifically on making AI systems more transparent and understandable (Gohel et al., 2021).

The transparency provided by XAI is essential to achieve several key objectives of RAI:

  • Building trust in AI systems
  • Identifying and removing biases
  • Ensuring fairness
  • Complying with regulations

By providing insights into AI’s decision-making processes, XAI allows everyone involved to understand why and how an AI system reaches an outcome, making the identification and correction of potential issues easier.

6.2 Implications for Responsible AI Development

The integration of XAI and RAI has several important implications for the development of AI systems:

Ethical Imperative: XAI is not just a technical challenge but an ethical necessity for ensuring that AI systems are developed and used in a responsible way.

Holistic Approach: Combining XAI and RAI shows the need for a holistic approach to AI development that considers technical capabilities and ethical implications.

Stakeholder Engagement: XAI enables better engagement with different stakeholders by making AI systems more understandable and accessible.

Continuous Improvement: The insights provided by XAI can feed back into the development process, enabling continuous improvement to better align with responsible AI principles.

XAI is crucial for achieving the objectives of Responsible AI, ensuring that AI systems are developed and used in a way that benefits individuals and society as a whole (Barredo Arrieta et al., 2020).

7. Gaps and Opportunities

Explainable AI and Responsible AI are concepts that have gained significant attention recently. However, despite their importance, several gaps remain. One fundamental challenge is the lack of consensus on the definition of explainability in AI. While some definitions have been proposed, a universally accepted definition remains elusive, holding back the development of standardized criteria for evaluating XAI (Barredo Arrieta et al., 2020).

Even when a definition is agreed upon, quantifying explainability remains challenging. The absence of widely accepted metrics for measuring the level of interpretability achieved by different XAI approaches makes it hard to compare methods, slowing progress in the field. Additionally, researchers face the challenge of balancing transparency and performance, as transparency can come at the cost of accuracy or other performance measures.

Another critical issue identified in the literature is the "transparency paradox." While transparency is generally desirable for RAI, disclosure of information about an AI system could make it vulnerable to attacks or facilitate misuse. Finding the right balance between transparency and security concerns poses a significant challenge for XAI researchers.

A notable limitation in the current body of research is the dearth of concrete, practical examples and detailed case studies. While potential use cases such as medical diagnosis and financial decisions are mentioned, there is a lack of detailed examples showing specific XAI techniques in use, the challenges encountered, and the outcomes achieved.

There is a need for more holistic approaches to XAI, going beyond isolated techniques toward integrated frameworks. Such frameworks should address the different facets of explainability: interpretability, transparency, and accountability.

Additionally, as AI becomes more integrated into large-scale applications, developing XAI and RAI methods that can handle vast amounts of information without becoming computationally expensive is essential. Research efforts should focus on developing efficient XAI algorithms that enable practical deployment.

Future research should also explore methods for addressing the question of “why” an AI system makes certain decisions, rather than just explaining “how” it arrived at a particular outcome.

Furthermore, there is a need to embed social and ethical considerations more deeply into XAI and RAI. Future studies should focus on developing XAI methods that align with ethical principles and promote fairness, accountability, and transparency (Radanliev & Santos, 2023).

Finally, it is important to develop robust and objective metrics for quantifying the different facets of explainability. This will require a collaborative effort from the research community to establish shared protocols and standards. Expanding real-world implementations and publishing detailed case studies of successful XAI deployments will provide valuable insights for practitioners and help close the gap between theory and practice.

In conclusion, there remains much work to be done. Addressing these gaps and opportunities will be important for realizing the full power of AI while ensuring its responsible and ethical use.

8. Conclusions

This review of the state of the art identifies a significant shift in the trajectory of AI development, with particular focus on XAI and RAI. The two fields clearly intersect and are essential for trust, accountability, and the ethics of artificial intelligence. XAI is recognized as particularly important for attaining the overarching goals of RAI, as it increases the interpretability of AI-driven decisions.

The literature makes clear that stakeholders want transparency in AI development in order to enhance trust, identify and correct bias where necessary, and comply with the law. Nonetheless, explainability itself remains poorly defined, let alone reliably measured. Many of these problems stem from the lack of a clear methodological framework for XAI, which prevents proper assessment of the most effective XAI techniques and hinders the approach's further development.

Consistent with its review methodology, this paper synthesizes the aspects highlighted, presents gaps in knowledge, and indicates directions for future studies. It reveals the lack of a unified definition of explainability, general uncertainty about which metrics to use when comparing models, and open questions about how best to balance explanation with performance and security. In addition, it outlines specific directions for future work: generalizing XAI approaches, improving the scalability of RAI systems, and including social and ethical factors in AI system design.

The conclusions of this review thus carry weight for AI's advancement, affecting not only future scientific directions but also industry and policy. Researchers should concentrate on developing scalable and robust XAI techniques to address the gaps identified. Industry practitioners should embrace a human-centric approach, ensuring that XAI and RAI principles are embedded in their AI development life cycles. Policymakers, in turn, need to focus attention on developing appropriate standards and rules for AI's application and on creating favorable conditions for its further development.

Research toward Explainable and Responsible AI requires cooperation among academics, industry, and policymakers. To achieve a positive future with AI-powered technologies, the AI community needs to tackle the challenges identified in this review. Furthermore, more real-world XAI applications should be developed, and papers describing successful XAI experiences should be published, to close the divide between practice and theory.

Bibliography

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Benjamins, R., Barbado, A., & Sierra, D. (2019). Responsible AI by Design in Practice (No. arXiv:1909.12838). arXiv. https://doi.org/10.48550/arXiv.1909.12838

Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially Responsible AI Algorithms: Issues, Purposes, and Challenges. Journal of Artificial Intelligence Research, 71, 1137–1181. https://doi.org/10.1613/jair.1.12814

Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P., & Holzinger, A. (2018). Explainable AI: The New 42? (A. Holzinger, P. Kieseberg, A. M. Tjoa, & E. Weippl, Eds.; Vol. 11015, pp. 295–303). Springer International Publishing. https://doi.org/10.1007/978-3-319-99740-7_21

Gohel, P., Singh, P., & Mohanty, M. (2021). Explainable AI: Current status and future directions (No. arXiv:2107.07045). arXiv. https://doi.org/10.48550/arXiv.2107.07045

Peters, U. (2023). Explainable AI lacks regulative reasons: Why AI and human decision-making are not equally opaque. AI and Ethics. https://doi.org/10.1007/s43681-022-00217-w

Radanliev, P., & Santos, O. (2023). Ethics and Responsible AI Deployment (No. arXiv:2311.14705). arXiv. https://doi.org/10.48550/arXiv.2311.14705