
WINTER IS (NOT) COMING: Is Explainable AI (XAI) the Answer?

Last updated: June 12, 2020

Written by Juliana Sibille


One month ago, I heard the term “AI winter” for the first time. Because of its link with Christmas, winter to me has always been associated with happy and cheerful times. However, to my disappointment, the scenario depicted behind those two words was far from happy. An “AI winter” describes a period in which research, investment, and funding for AI go into decline, leading companies to focus their efforts elsewhere.


I then wondered: had I, in my 23 years of life, experienced an AI winter? Will I experience one in the near future? Born at the end of the ’90s, all I seem to have ever known is the record-breaking AI “summer” that has dominated the beginning of the 21st century. Artificial intelligence has become such a part of our everyday life that we almost forget that it is there. It is obvious that AI is hot again, but for how long?

As Sebastian Schuchmann correctly pointed out, because of the complex interplay between AI researchers, companies, technology and the perception of AI on multiple levels, “making any prediction is hard” (Schuchmann, 2019). This piece will not try to make any bold predictions, but will aim to provide the reader with a broad reflection on the current state of AI and the role played by Explainable AI in preventing a third AI winter.


This article presents the author’s reflections based on the referenced material.

I. A potential AI winter ahead?

Is machine learning’s prime time coming to an end? Unless you are clairvoyant, this question cannot be answered with a simple “yes” or “no”. There has been speculation over the last decade about a possible third AI winter, as there have already been two major droughts of AI research funding, in 1974-1980 and 1987-1993 (Milton, 2018). While some experts perceive AI as “transitioning to a new phase”, others believe that the technology industry still has bright days ahead (Shead, 2020). The following chapter will try to provide the reader with insight into both sides of the debate.

a. The issue of transparency: the right to an explanation, the black box and the reproducibility crisis

The EU has recently embarked on the “AI Race” and hopes to soon become the next AI hub. While the EU has taken the forefront in the field of data protection by passing the controversial General Data Protection Regulation (hereinafter ‘GDPR’), it is unclear whether the latter is an obstacle to Europe’s ambitions for AI. Indeed, the GDPR imposes strict restrictions on the use of personal data, which is essential for the training of AI systems (European Commission, 2020). To put it simply, without data, there is no AI. At the center of the legislative text is the demand for more transparency, which materialized in a controversial “right to explanation”. Such a right, although highly desirable given the growing number of AI systems, seems to overlook a major AI limitation: the black box problem.

a.1 The right to an explanation

The controversy surrounding the GDPR does not lie solely in the obvious lack of legal certainty concerning certain essential concepts, or in the constraints imposed on big tech companies, but also in the existence of one specific right: the so-called “right to explanation”.

The debate surrounding the existence of such a right has been ongoing. Some argue that, while something resembling a right to explanation might be mentioned in the GDPR preamble, it is nowhere to be found in the actual legislative text (Wachter, Mittelstadt, & Floridi, 2017). Others, on the other hand, are adamant that the GDPR does provide for such a right (Burt, 2017; Kaminski, 2019). The debate on the existence of the right to explanation, although very interesting, is beyond the scope of this article.

The right to explanation is directly linked to the essential principle of transparency. Transparency is a key element in future technology innovation and research efforts and is achieved by providing data subjects with process details (Datatilsynet, 2018).


As seen in the recent Loomis v. Wisconsin case, the lack of transparency becomes critical when AI replaces human decision-making for important decisions such as mortgage approval or jail sentencing.

Eric Loomis, a prison inmate, argued, after receiving a jail sentence based on the use of a proprietary risk assessment tool (COMPAS), that his due process rights had been violated because the algorithm’s developer refused to disclose its methodology. This controversial case about automated decision-making is probably the first of many to come. Data subjects need the power to disagree with or reject an automated decision (Heaven, 2020). Without this, the general public will push back against the technology. However, the more advanced the technology (which is usually the case in high-stakes situations), the more difficult it is to satisfy the transparency requirement.

a.2 The magic black-box and the “reproducibility” crisis

Machine-learning systems have often been compared to “black boxes” because their decision-making processes can only be explained with great difficulty. This is mainly due to the “deep” architecture of the hundreds or thousands of artificial neurons working together to arrive at a decision (Bathaee, 2018). The black box problem stems from the complexity of these multi-layered networks of neurons, which may function in a manner well outside what their programmers could foresee (Bathaee, 2018). This issue puts notions such as intent and non-discrimination in jeopardy and is extremely dangerous when a decision significantly affects a person’s life (Rai, 2019).

The black-box issue is also directly linked with what has been called a ‘reproducibility crisis’. This term refers to the alarming number of research results that cannot be repeated when another group of scientists tries the same experiment (Ghosh, 2019). Networks are growing larger and more complex, with substantial data compilations and massive computing arrays that make replicating and studying these models expensive, if not impossible, for all but the best-funded labs (Barber, 2019). In 2016, a survey of 1,500 scientists revealed that more than 70% of them had tried and failed to reproduce experiments published by other scientists in scientific journals (Baker, 2016). In AI research, this troubling result is partly due to the fact that key information about how the AI is trained is held back by the authors (Gershgorn, 2017). Proprietary information is probably not the sole reason for the crisis, however: the fact that even developers hardly understand how machine learning systems function is problematic in itself and might contribute to this phenomenon.

b. Food for thought

One argument against an approaching AI winter which has been put forward by academics and experts is that there is one major difference from the previous winters: machine learning has become profitable (Kaiser, 2020). The most valuable companies in the world rely heavily on AI. The technology has been beneficial not only to the Silicon Valley giants but also to many startups. The two previous AI winters were caused by expectations of commercial applications not being met, causing the market to collapse. This time might be different, as “machine learning is no longer a speculative proposition, it is a widely-applied, commercially viable technology powering some of the most popular (and profitable) companies in the world” (Kaiser, 2020).

On a more pessimistic note, I have noticed that whenever the rising importance of the technological sector is discussed, another subject usually follows: What about the human factor? Will robots soon replace us? Are traditional professions doomed to disappear?


Already in 1967, John M. Culkin stated: “we shape our tools and thereafter the tools shape us” (Culkin, 1967). The achievements of AI are not only admired, but also feared by many. Applications of AI have often been viewed with apprehension by the public at large and by policymakers. The introduction of the GDPR in 2016 reflects the threat AI is perceived to pose to the right to privacy, a core value of the European Union. The growing reliance on AI also creates certain loopholes which are far from being resolved. One major issue is the responsibility gap in case of incidents involving a deep learning AI (think, for example, of a car accident caused by a self-driving car, or illegal use of force by an autonomous weapon system or killer drone).

c. From black box to glass box?

Since the adoption of the GDPR, it has been repeated over and over again that the GDPR marks the end of AI in Europe (Chivot & Castro, 2019). I thought the same. But let’s look at it differently. Rather than stifling innovation, the GDPR might actually help avoid a third AI winter. As said above, people are afraid of AI, not only for the repercussions it might have on the job market, but also for the impact that algorithmic decisions can have on their own lives. Cracking open the black box and achieving AI explainability could solve this issue and allow people to trust these systems.


Although we are still at the early stages of cracking the black box open, there are promising advances. Explainable AI, or XAI, is a burgeoning field of research. DARPA is allegedly spending millions of dollars to fund a dozen academic research teams working on this topic (Microsoft Corporation, 2019). In November 2019, Andrew Moore, the head of Google Cloud’s AI division, declared: “The era of black box machine learning is behind us” (Kelion, 2019). He is not the only expert to share this opinion (Gholipour, 2018). Several researchers have developed approaches to make AI models explainable. As I am far from being an AI expert, the technical language used to describe these techniques closely resembles a foreign language to my ears, but I will try to explain two of them in simple terms. The first, and probably the most prominent, method in explainable machine learning is Layer-wise Relevance Propagation (LRP). By propagating the prediction backward through the neural network, it can highlight the input features that support the prediction (Montavon, Binder, Lapuschkin, Samek, & Müller, 2019). Another approach, the LIME method, can be applied to any machine learning model and is based on local interpretability (Slack, Friedler, Scheidegger, & Dutta Roy, 2019; Hulstaert, 2018). The method perturbs the input data samples (by tweaking the feature values) and observes how the predictions change.
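To make the “perturb and observe” idea behind LIME a little more concrete, here is a minimal sketch of the principle rather than the official LIME library: an arbitrary scikit-learn classifier plays the role of the black box, and the function name explain_locally, the noise scale and the kernel width are illustrative choices of my own, not part of any published method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# An arbitrary "black box" model we want to explain (stand-in for any opaque model).
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, x, target_class=0, n_samples=1000, kernel_width=0.75):
    """Approximate the model around one instance x with a weighted linear surrogate
    (a simplified illustration of the LIME idea, not the official implementation)."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance by tweaking each feature with Gaussian noise.
    perturbed = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    # 2. Observe how the black box's predicted probability changes.
    preds = model.predict_proba(perturbed)[:, target_class]
    # 3. Weight the perturbed samples by their proximity to the original instance.
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable surrogate; its coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

print(explain_locally(black_box, X[0], target_class=int(y[0])))
```

The design choice that matters here is that the surrogate model is simple enough to read directly: each coefficient indicates how strongly a feature pushes the prediction up or down in the neighbourhood of that single instance, which is what “local interpretability” means in this context.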

d. COVID-19, a boost for AI?

At the beginning of 2019, Thomas Nield wrote: “[…] I think another AI Winter is coming. In 2018, a growing number of experts, articles, forum posts, and bloggers came forward calling out these [AI] limitations. I think this skepticism trend is going to intensify in 2019 and will go mainstream as soon as 2020” (Nield, 2019). Had he known a pandemic would mobilize most of the world in 2020, Nield would probably not have made such a bold statement. Although the discussion about Covid-19 tracking apps has generated fear, the use of AI and Big Data has already proven vital in the fight against the coronavirus. Facing this extreme public-health situation has made us even more reliant on the technology. We are using AI not only to find a vaccine but also to monitor populations and to observe and predict the evolution of the pandemic (Council of Europe, 2020; Petropoulos, 2020). The concerns about contact tracing should not be overlooked, but there seems to be a renewed hype around AI and its benefits, which might help avoid a third AI winter.

II. Conclusion

When I first sat down at my desk and turned on my computer to write this article, I thought I would be depicting a dark fate for AI. I believed the requirement to explain automated decisions would mark the end of the ongoing “AI summer”. After reflecting more thoroughly, I now hold a more nuanced belief. Not being an expert in the field and having only a superficial insight into the subject, I cannot answer the question of whether an AI winter is coming. However, I believe some indications tilt the balance towards a continued positive future for AI. With the promising developments observed in the quest for explainable AI, the right to explanation no longer seems to represent a threat. We are now closer than ever to addressing one of the biggest challenges of AI: opening the black box. Are the challenges of black box machine learning forever behind us? Probably not. Although the current explainable AI methods are still subject to potential pitfalls, they represent hope for AI’s future. The prospect of putting trustworthy and modular AI on the market is highly desirable, and I feel it is slowly, but surely, becoming a realistic one. The attention the current pandemic has drawn to AI innovation further reinforces this feeling.

Although we can leave our winter sweaters in the wardrobe for now, we might ask ourselves whether we should be worried. Raymond Kurzweil has predicted that the AI boom might reach the “Technological Singularity”, i.e. the creation of an artificial superintelligence surpassing the sum of human intelligence, by 2045 (Kurzweil, 2005). This scenario, depicted in countless sci-fi movies, raises some deep philosophical and ethical questions. What is certain is that we are living in innovative times and that what lies ahead of us promises to be interesting.


Sources:

- Baker, M. (2016), 1,500 scientists lift the lid on reproducibility. Retrieved from: https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

- Barber, G. (2019), Artificial Intelligence Confronts a ‘Reproducibility’ Crisis. Retrieved from: https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/

- Bathaee, Y. (2018), The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 890-938.

- Burt, A. (2017), Is there a ‘right to explanation’ for machine learning in the GDPR? Retrieved from: https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

- Chivot, E. & Castro, D. (2019), The EU Needs to Reform the GDPR to Remain Competitive in the Algorithmic Economy. Retrieved from: https://www.datainnovation.org/2019/05/the-eu-needs-to-reform-the-gdpr-to-remain-competitive-in-the-algorithmic-economy/

- Council of Europe (2020), AI and control of Covid-19 coronavirus. Retrieved from: https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus

- Culkin, J. M. (1967), A Schoolman’s Guide to Marshall McLuhan. The Saturday Review, 51-53. Retrieved from: https://webspace.royalroads.ca/llefevre/wp-content/uploads/sites/258/2017/08/A-Schoolmans-Guide-to-Marshall-McLuhan-1.pdf

- Datatilsynet (2018), Artificial intelligence and privacy (Report). Retrieved from: https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf

- European Commission (2020), Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – A European strategy for data, COM(2020) 66 final. Retrieved from: https://ec.europa.eu/info/sites/info/files/communication-european-strategy-data-19feb2020_en.pdf

- Gershgorn, D. (2017), The titans of AI are getting their work double-checked by students. Retrieved from: https://qz.com/1118671/the-titans-of-ai-are-getting-their-work-double-checked-by-students/

- Gholipour, B. (2018), We Need to Open the AI Black Box Before It’s Too Late. Retrieved from: https://futurism.com/ai-bias-black-box

- Ghosh, P. (2019), AAAS: Machine learning ‘causing science crisis’. Retrieved from: https://www.bbc.com/news/science-environment-47267081

- Heaven, W. D. (2020), Why asking an AI to explain itself can make things worse. Retrieved from: https://www.technologyreview.com/2020/01/29/304857/why-asking-an-ai-to-explain-itself-can-make-things-worse/

- Hulstaert, L. (2018), Understanding model predictions with LIME. Retrieved from: https://towardsdatascience.com/understanding-model-predictions-with-lime-a582fdff3a3b

- Kaiser, C. (2020), There won’t be an AI winter this time – Machine learning isn’t a “Skynet or burst” proposition. Retrieved from: https://towardsdatascience.com/there-wont-be-an-ai-winter-this-time-332a4b6d6f07

- Kaminski, M. E. (2019), The Right to Explanation, Explained (U of Colorado Law Legal Studies Research Paper No. 18-24). Berkeley Technology Law Journal, 34(1). Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3196985

- Kelion, L. (2019), Google tackles the black box problem with Explainable AI. Retrieved from: https://www.bbc.com/news/technology-50506431

- Kurzweil, R. (2005), The Singularity Is Near: When Humans Transcend Biology (Penguin).

- Loomis v. Wisconsin, 137 S. Ct. 2290 (2017).

- Microsoft Corporation (2019), Can AI decisions be explained? Retrieved from: https://digitaltransformation.foleon.com/pub/ai-and-gdpr/ai-decisions-explained/

- Milton, L. (2018), History of AI Winters. Retrieved from: https://www.actuaries.digital/2018/09/05/history-of-ai-winters/#_edn7

- Montavon, G., Binder, A., Lapuschkin, S., Samek, W., & Müller, K. R. (2019), Layer-Wise Relevance Propagation: An Overview. In W. Samek et al. (Eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Springer), 193-209.

- Nield, T. (2019), Is Deep Learning Already Hitting its Limitations? And Is Another AI Winter Coming? Retrieved from: https://towardsdatascience.com/is-deep-learning-already-hitting-its-limitations-c81826082ac3

- Petropoulos, G. (2020), Artificial intelligence in the fight against COVID-19. Retrieved from: https://www.bruegel.org/2020/03/artificial-intelligence-in-the-fight-against-covid-19/

- Rai, A. (2020), Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48, 137-141.

- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119/1.

- Schuchmann, S. (2019), Probability of an Approaching AI Winter. Retrieved from: https://towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a

- Shead, S. (2020), Researchers: Are we on the cusp of an ‘AI winter’? Retrieved from: https://www.bbc.com/news/technology-51064369

- Slack, D., Friedler, S. A., Scheidegger, C., & Dutta Roy, C. (2019), Assessing the Local Interpretability of Machine Learning Models. Retrieved from: https://arxiv.org/abs/1902.03501

- Wachter, S., Mittelstadt, B., & Floridi, L. (2017), Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law. Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469
