Practitioners in eXplainable Artificial Intelligence (XAI) view themselves as addressing a range of problems they call ‘black box explanation problems’ (Guidotti et al., 2018): problems related either to rendering a Machine Learning (ML) model transparent or to rendering its outputs transparent. Many (Páez, 2019; Langer et al., 2021; Zednik, 2021) have argued that standards of explanation in XAI vary with the stakeholder. Buchholz (2023) extends this idea into a means-ends approach: different stakeholders use different instruments of XAI to render different aspects of ML transparent, and with different goals in mind. In my talk, I shall argue for a more unified view within the context of scientific application. In particular, I suggest that we need to antecedently distinguish between two sets of aims in deploying XAI methods: proximate and ultimate aims. While the proximate aim of deploying XAI methods within the context of a scientific application may be to render either the model or its outputs understandable, the ultimate aim here is to increase one’s understanding of a given subject matter. Furthermore, building on the literature on objectual understanding (Elgin, 2017; Dellsén, 2019), and following a number of suggestions from other philosophers of science (Sullivan, 2019; Knüsel & Baumberger, 2020; Meskhidze, 2021; Räz & Beisbart, 2022), I ask whether this ultimate aim cannot also be pursued by means of ML, but without any explanations.
Talk by Florian Boge: Understanding (and) Machine Learning's Black Box Explanation Problems in Science. Posted by The Philosophy of Contemporary and Future Science, 1 January 2024.