The ethics of AI as both material and medium for interaction remains in murky waters within the context of musical and artistic practice. The interdisciplinarity of the field is revealing matters of concern and care, which necessitate interdisciplinary methodologies for evaluation to trouble and critique the inheritance of ‘residue-laden’ AI-tools in musical applications. Seeking to unsettle these murky waters, this paper critically examines the example of Holly+, a deep neural network that generates raw audio in the likeness of its creator, Holly Herndon. Drawing on theoretical concerns and considerations from speculative feminism and care ethics, we care-fully trouble the structures, frameworks and assumptions that oscillate within and around Holly+. We contribute several considerations derived from our critical feminist examination, and contemplate future directions for integrating speculative feminism and care into musical-AI agent and system design.
Growing concerns about algorithmic bias and oppression [1] [2] [3] [4] [5]; dataset ownership and data access [6]; and general lore [7] around what AI troubles in our capitalist society are increasingly prominent within popular discourse [8] [9]. We see this as an urgent area of concern within musical applications and contexts, which are seeing ‘residue-laden’ AI systems integrated and assimilated into musical praxis and musical artworks, and even becoming synonymous with prominent practitioners.
Within the field of musical-AI, current discourse has examined the development of novel tools and architectures for creation and performance [10], artistic potentials in friction and fallacy [11], and human-AI musical interaction [12][13]. Our motivation in this paper is to address longitudinal concerns around the embedding of values, and the implications for musical futures. This has previously been alluded to in the literature [14], and we bring attention to a broadening shift towards how AI technologies are changing the landscape of musical creativity [15]. Existing work [16] on evaluating and critiquing AI technologies deployed in performance and artwork contexts has argued for context-specific approaches to system evaluation, but with little exploration of inter-contextual framings of the technology. Emerging concerns regarding AI’s influence in curating and shifting musical culture have been outlined in [15] [17], with propositions for policy interventions and the need for future research to examine alternative economic models, longitudinal studies and greater diversity.
In this paper, our approach to addressing this myriad of concerns is to examine novel approaches to analysis, which contribute to the field by revealing pathways for workable and ethical practices in the design of musical-AI systems. Such a critical examination requires the inclusion of knowledge from other disciplines [18]. We therefore draw together perspectives and methodologies from AI-ethics, Human-Computer Interaction (HCI), science and technology studies (STS), and feminism. The intention with such interdisciplinarity is to uncover and to situate the ‘matters of concern’ [19] particular to (and within) the field, as evidenced in our evaluation of our case study. We see interdisciplinarity in approaches to musical-AI as vitally necessary for the community to consider the implications of AI-artworks put out into wider society.
It’s now time to look at what’s beneath the murky surface of Musical-AI, and to unsettle the water. To assist in our exploration of how concerns are echoed between STS, HCI, and care ethics, we care-fully1 trouble dimensions of Holly Herndon’s artwork Holly+, a deep neural network that generates audio reminiscent of Herndon’s unique vocal aesthetic. We have chosen Holly+ as a precursory example of how an artist has navigated popular media discourses; collaborative approaches to identity concerns; and articulations of self-governance in the construction and presentation of the artwork. We see our care-full troubling of Holly+ as a step forward from the initial discussions we posed in [20]. In [21] we advocated and argued for conversations on data to expand beyond a generalist and larger-society-centric viewpoint, to elicit domain-specific conversations in the field of musical AI. Holly+ is both a starting point for the conversations that need to be had, and an example of an artwork that is actively excavating these issues and walking across the borders.
Our contributions here are multifold. First, we contribute an example of an interdisciplinary analysis, drawing from 3 methodologies in STS, HCI and AI ethics: speculative feminism [22][23]; matters of concern [19] and care [24]; and feminist data ethics [25]. Second, our interdisciplinary analysis reveals ‘matters of concern’ [19] in the case study, which we argue have implications for the musical-AI community around issues of data, management and legacy. This contribution is augmented by our inclusion of a Knowledge Map [26], to help visibilise the ‘matters of concern’ and the potential connections between them. Third, we outline considerations and future directions for embedding speculative feminism and care into musical-AI design. The stance that we occupy is a first step towards provoking change within our community, guided by our concerns as designers, artists, developers and users of the selfsame systems and technologies we critique.
To provide a structural outline of this paper: in the forthcoming Background section, we provide a brief summary of theoretical perspectives and core concepts critical to this paper’s inquiry. Chief amongst these are: compounding stances of fact, concern and care; a glimpse into speculative feminist perspectives in STS and HCI; and an overview of current practices in feminist AI-ethics. Drawing upon this theoretical grounding, we then progress to our critical examination of Holly+. Extrapolating from our critique, we close by outlining important considerations for inviting speculative feminism into wider discourses on musical-AI, and by speculating on future directions.
This paper draws heavily from a breadth of theoretical intersections and disciplines, which together form the interdisciplinary foundation of our analysis, and which we take a moment to address now.
As we examine various dimensions within Holly+, Bruno Latour’s notion of ‘matters of fact’ and ‘matters of concern’ provide assistive concepts [19]. Latour establishes a relation between fact and concern as an act of positioning the objective in relation to the “whole scenography” of its contextual environment. When we consider an AI-agent or AI-system structure as further constituting the objective (matters of fact) in relation to wider contexts (matters of concern), this necessitates a deliberate and care-full troubling [19]. de la Bellacasa unsettles matters of fact and matters of concern [27] with ‘matters of care’ [24]. Care is defined as engaging with the becoming of matters of fact and concern: an intentional seeking out of the histories and values present in systems. It is through seeking out the histories and contextual entanglements of technologies that care is enacted. We see the contribution of these theories as assistive in attending to the positioning of Holly+ in relation to its intersecting contextual environments, and its becoming. We attend to one of these intersecting environments in our following discussion of feminism in STS.
Looking towards the what of feminism helps to formulate the why, and to establish what feminism can do for musical AI in addressing the inequities, questionable practices and working cultures that are developing within the music-technology community. This is especially pertinent to musical-AI, which often inherits and adopts models, algorithms and approaches that may carry residues of inequity and bias. Feminist principles can be of benefit here. Speaking broadly, feminism is a series of socio-political movements [28] [29] [30] which seek to address systems of oppression within society, encompassing gender, political, economic, personal, and social inequalities [31] [32] [33] [34] [35]. Notable in STS in this regard is the work of Donna Haraway [22] [23] [36], offering speculative feminist narratives of socio-cultural co-construction and the fusion of synthetic and organic bodies. Of specific relevance to this paper is their Implosion Analytical Method [37] [38], which critically examines the various dimensions of an artefact.
Looking to broader communities, there is ongoing discourse on the formation and implementation of feminist perspectives [39] [40] [41] in data ethics. Specifically, we will now draw attention to Carroll et al. [42], and Gray and Witt [25], who examine critical concerns pertaining to data sovereignty, AI ethics, care, and feminist research ethics within AI.
Carroll et al. [42] formulate a care-centric data practice, building upon critical concerns pertaining to data sovereignty and self-management within Indigenous communities from Oceania, the United States and Canada. They offer a set of principles—CARE2—to complement an existing approach to data management3 [43]. They articulate the objectives of the CARE principles as constituting people-, purpose- and data-centric concerns. We work specifically with these as lenses in our analysis.
Gray and Witt [25] formulate a preliminary roadmap for integrating a feminist data ethics of care framework within the field of AI. They argue that the ambiguity around mainstream understandings of AI-ethics lends itself to ‘fuzzy’ definitions, enabling systematic failures of responsibility which in turn implicitly reinforce gender-power imbalances. Of particular note to this paper is their focused attention to both the actors (the ‘who’) and the practicalities (the ‘how’) of bringing feminist approaches and methods as a remedy-of-sorts to the principle-to-practice gap. They frame this as ‘making interventions’ into the economy of machine learning. They propose 5 interventionist principles for feminist data ethics and care. These encompass: 1) diversity with regard to representation and participation in the machine learning economy; 2) critique of positionality; 3) foregrounding human(s) throughout a machine learning pipeline; 4) ensuring the implementation of accountability and transparency measures; and 5) equitable distribution of responsibility. It is these 5 principles that we have identified as assistive lenses for our critique of Holly+.
Gray [44] expands upon their earlier paper with Witt, arguing that the development and advancement of AI ethics will not see significant, positive change until all stakeholders take on the responsibility of engaging with ethical work and practices throughout the entirety of the economy of machine learning. They highlight that the current landscape is “dominated by a heteropatriarchal class of men”, referring to the work by Chang [45]. Gray underlines the burning need for people working within technology fields to radically change the existing culture of these fields, which they propose as key to “build[ing] capacity for care throughout the entire machine learning economy”.
In this section, we draw together theory and methods from speculative feminism, care-ethics, and feminist data-ethics, and utilise these as critical evaluative tools in analysing the artwork Holly+ by Holly Herndon. We proceed by first providing a brief overview of what Holly+ is, and then delving into our speculative feminist and care-ethics informed critique.
We conduct our critique from a particular point-of-access, in which we occupy a position as spectators and observers of the work. We see this as coherent with the observable intent with which Herndon wishes their work to be experienced. Our occupancy of this stance is deliberate. We have utilised only publicly accessible information regarding the Holly+ artwork, encompassing information on the general model structure, its governance, affiliated parties and Herndon’s recorded statements regarding this artwork. This is so that we may critically examine what is ‘visibilised’ in and about the work, so that we may in turn be able to critically address the components of the work that appear ‘invisibilised’ [46]. We understand these terms proposed by Hampton—‘invisibilise’ and ‘visibilise’—as describing an active choice in which components of a system are seen versus unseen, acknowledging that these choices are (potentially) accompanied by harms. We see invisibilisation and visibilisation as core concerns in connection with Gray and Witt’s 4th interventionist principle: accountability and transparency.
Created by artist Holly Herndon, the work Holly+ is a voice model built in collaboration with Never Before Heard Sounds (NBHS hereafter), a music studio devoted to the construction, development and deployment of AI-powered tools for browser-based musical production. Structurally, it is a custom deep neural network trained upon recorded voice data (constituting singing and/or speech) of Herndon, deployed as a browser-based tool where prospective users upload an audio file (presumably of their own, or publicly sourced, recorded material). The model utilises pitches and rhythms from the uploaded audio file, layering components derived from the training data provided by Herndon [47]. The browser-based platform through which one can engage with Holly+ is presented in Figure 1 below, and is accessible via the following link.
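To make this structural description more tangible, we provide a minimal sketch of the general voice-transformation pattern that the public description suggests: pitch and rhythmic/dynamic contours are extracted from the uploaded audio and re-rendered through a model trained on the target voice. All names in the sketch (TargetVoiceModel, transform, render) are hypothetical placeholders; the Holly+ code is not publicly available, so this is an illustrative sketch of the general technique under our stated assumptions, not Herndon and NBHS’s implementation.

```python
# Illustrative sketch only: a generic pitch-and-rhythm-preserving voice
# transformation pipeline. The model class is a hypothetical stand-in; the
# actual Holly+ architecture and code are not publicly accessible.
import numpy as np
import librosa


class TargetVoiceModel:
    """Placeholder for a neural model trained on a single artist's voice."""

    def render(self, f0: np.ndarray, loudness: np.ndarray, sr: int) -> np.ndarray:
        # A real model would synthesise audio in the target vocal timbre here;
        # we return silence of a matching length as a stand-in.
        hop_length = 512
        return np.zeros(len(f0) * hop_length, dtype=np.float32)


def transform(input_path: str, model: TargetVoiceModel, sr: int = 22050) -> np.ndarray:
    # 1. Load the user-uploaded audio file.
    y, sr = librosa.load(input_path, sr=sr, mono=True)

    # 2. Extract the features the public description points to: a pitch
    #    contour and a rough rhythmic/dynamic envelope.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    loudness = librosa.feature.rms(y=y)[0]

    # 3. Re-render those contours through the target-voice model, so the output
    #    carries the uploaded pitches and rhythms but the model's vocal character.
    return model.render(np.nan_to_num(f0), loudness, sr)
```

The relevance of this pattern to our analysis is that, in such a pipeline, the vocal identity of the output resides almost entirely in the trained model and its dataset rather than in the user’s upload, which is precisely where our matters of concern around data and identity are located.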
As a starting point for our critical evaluation, we drew inspiration from Haraway’s Implosion [48] methodology, as delineated by Dumit [38], to formulate a Knowledge Map and preliminary index (see Figure 2 below) of the various dimensions and structures oscillating within and around Holly+. We highlight that, of the 14 dimensions described by Dumit, our Knowledge Map consists of 12, delimited due to the scope of this paper.
We then troubled the artwork by extrapolating connections extending from our central matters of concern: data and identity; management and reclamation; and the preservation and protection of legacy. We connected these matters of concern to principles from Carroll et al.’s CARE Data Principles [42], and Gray and Witt’s [25] Feminist Data Ethics praxis. With regard to the CARE principles, we utilised their categorisations of people-, purpose- and data-oriented concerns to probe how the matters of concern revealed in Holly+ may be motivated through these categorisations. Similarly, Gray and Witt’s feminist data ethics principles were engaged as critical lenses on how matters of concern in Holly+ may or may not be coherent with a feminist data ethics. This can be seen in Figure 3 (below), which depicts our three-layer methodological approach to analysis.
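As an aid to the reader, the short sketch below renders this three-layer approach as a simple data structure: each matter of concern is linked to the Knowledge Map dimensions it is entangled with, the CARE categorisations it is probed through, and the Gray and Witt principles used as lenses. The specific pairings shown are illustrative excerpts rather than an exhaustive transcription of Figure 3.

```python
# Illustrative excerpt of the three-layer mapping used in our analysis:
# matters of concern -> Knowledge Map dimensions, CARE categorisations,
# and Gray and Witt's feminist data ethics principles. Pairings are examples.
analysis_layers = {
    "data and identity": {
        "dimensions": ["technological", "labour"],
        "care_categories": ["people", "data"],
        "feminist_principles": ["transparency and accountability"],
    },
    "management and reclamation": {
        "dimensions": ["political", "economic"],
        "care_categories": ["people", "purpose"],
        "feminist_principles": ["equitable distribution of responsibility"],
    },
    "preservation and protection of legacy": {
        "dimensions": ["economic", "political"],
        "care_categories": ["purpose", "data"],
        "feminist_principles": ["diverse representation and participation"],
    },
}
```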
Our Haraway Index was highly generative in illustrating dimensions with multiple entanglements to our central matters of concern—data and identity; management and reclamation; and legacy. Of the 12 dimensions we evaluated, the most richly entangled were the technological, labour, political, and economic dimensions. We have utilised these 4 dimensions as additional Latour-informed scenography for our feminist and care-centric analysis.
From our positionality outlined above, we identify three main pillars of concern pertaining to and within Holly+: data and identity; management and reclamation; and legacy4.
One especially notable aspect of Holly+ is the novel approach Herndon adopts in the management of artistic work engaging with the voice model as a generative tool. Holly+ is a publicly accessible tool, and Herndon motivates her decision to make it openly accessible as an intention “…to decentralize access, decision making and profits made from my digital twin, Holly+…” [47]. Here, we wish to cast a critical gaze over the particularities of how the principles and modus operandi of speculative feminism and care ethics may (and may not) be embedded in the procedures, presentation and adjacent framings of this artwork.
It is clearly disclosed on Herndon’s personal webpage that a Decentralised Autonomous Organisation (DAO) [49] stewards artistic work that deploys Holly+. For contextual grounding with respect to how we proceed with our critique in light of the DAO5 stewardship, Herndon has previously engaged in discussion around decentralisation within AI-arts [50], and the reclamation of ownership of one’s (literal) voice in an age of increasing concern over the ethical implications of vocal deepfakes and voice synthesis [51] [52] [53] [54] [55].
They argue that the distribution of tools such as those offered through Holly+ is in alignment with values pertaining to the communality and commonality of voice. Further, they argue for the DAO as a means to enable ethical, officially sanctioned and informed experimentation with another’s vocal likeness, and to further enable communal financial benefit from the economic proceeds generated by the use of a voice model. We, however, argue that the deployment of the DAO does not in fact substantially decentralise decision-making. Significant decision-making, with implications for how Holly+ has been made and can be used, has clearly already been undertaken by Herndon and NBHS in their design of the system, the means of interacting with Holly+, and the terms of agreement within the stewardship itself. The potential responsibility of stewards is thus delimited to governing ‘fair-usage’ of minted artworks created with Holly+, and does not afford governance over the evolution of the Holly+ architecture over time. We therefore do not see the full scope of decision-making pertaining to Holly+ as fully decentralised.
Herndon describes one of their underlying motivations for the birth of Holly+ as an act of futuring and “maintaining the value and reputation of [their] voice [rather] than the rights being passed down to someone less familiar with the values and standards associated with [their] work”. Their justification for this is grounded in concerns that inherited rights—through a next-of-kin or other Western-centric inheritance tradition—offer less posthumous protection than a public and digital distribution of governance. We do not critique Herndon’s expression of feeling more comfort in distributed ownership of her voice model; we do, however, note interesting and “sticky” concepts entangled with this, pertaining to the public following of Holly+ and the DAO stewardship.
The first “sticky” concept we wish to highlight is the entrance procedure of the DAO [56] stewardship. Herndon outlines how membership of the DAO is contingent on the distribution of ERC-20 VOICE tokens [57] on the Ethereum blockchain [58]. These tokens represent voting shares in the Holly+ DAO, and will be “airdropped to collectors of my art, friends and family of the project, and other artists selected to participate in using the Holly+ voice to create new works.” We can therefore plainly assume that Holly+ DAO stewards either already have a vested financial interest in Herndon’s work (in the case of collectors), are already intimately familiar with Herndon and their work (friends and family of the project), or have been deemed by Herndon as possessing sufficient technical competency or musicality to create ‘suitable enough’ artwork using Holly+ (other artists selected to participate in using the Holly+ voice). We argue that the procedure for becoming a DAO steward is highly selective, curatorial and holds the potential for exclusion based on cultural capital, digital accessibility and economic status.
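To illustrate why we characterise this entrance procedure as curatorial, the sketch below models a generic token-gated stewardship in which voting weight derives solely from tokens airdropped at the project’s discretion. This is a schematic illustration of the mechanism described above, not the actual Holly+ DAO or VOICE token contract, whose on-chain details we have not inspected.

```python
# Schematic sketch of a token-gated stewardship: voting power exists only for
# those the project has chosen to airdrop tokens to. Not the actual Holly+ DAO.
from dataclasses import dataclass, field


@dataclass
class TokenGatedStewardship:
    balances: dict[str, int] = field(default_factory=dict)

    def airdrop(self, recipient: str, amount: int) -> None:
        # Membership begins with a project-side decision to distribute tokens.
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def voting_weight(self, member: str) -> int:
        # No tokens, no vote: governance power mirrors prior inclusion.
        return self.balances.get(member, 0)


dao = TokenGatedStewardship()
dao.airdrop("collector_of_prior_works", 100)
dao.airdrop("invited_artist", 50)
print(dao.voting_weight("uninvited_member"))  # 0: excluded unless selected
```

Even in this stripped-down form, the asymmetry is visible: the openness of the tool for uploading and listening does not extend to the openness of its governance.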
This leads into our second concern, the preservation of legacy. The formation of culture does not take place in a vacuum, and there are (potentially) deeper issues in anticipating that one’s values and standards may be preserved for the future production of artwork taking place in a future environment and context that we cannot yet imagine. How might the Holly+ DAO stewards in 100 years’ time be best suited or situated to make decisions that honour Herndon when living memory of Herndon as an artist may no longer exist? This assumption can be further troubled by speculating how applicable or relevant the cultural values or artistic standards of an artist may be in this selfsame future context. Stickiness and murkiness reside in the question of what constitutes an appropriate, or artistically relevant, usage of Herndon’s voice, especially when voting stewards may approve an offensive or uncharacteristic deployment of Holly+ [59]. The premise of Holly+ as an artwork is grounded in—and indeed dependent upon—public interaction. The greater the engagement levels, the greater the social value attributed to the artwork. The DAO incentivisation scheme concretises this ‘value of attention’, distributing profits of artworks made with Holly+ amongst stewards to encourage their decision-making in ‘minting’ usages of the voice model to increase the social capital (i.e. the visibility and distribution) of Holly+. We speculate on the potential stickiness of this in regard to Herndon’s intention to preserve their artistic legacy. This notion proves especially troublesome when we acknowledge that the economic profits generated by any usage [60] may subvert Herndon’s own vision of fiscally incentivising DAO stewards to preserve her artistic legacy. Money talks, controversy sells, and the ‘acceptable’ and the profitable are not necessarily kept apart [61].
Our Haraway Implosion Index further revealed imbalances pertaining to what was visibilised and invisibilised in the artwork, which we wish to draw attention to. The first is the involvement of NBHS, which we understand to have taken part in the design and development of Holly+. Adjacent to this invisibilised element is the code, which has not been made accessible anywhere that we could locate. Presumably, any open-source access to the code has been dismissed by the immediate partners in Holly+ (which we speculate encompasses Herndon and NBHS) in protection of NBHS’s commercial interests as an organisation developing online AI music tools. Second, we note ambiguity as to the sourcing of the original dataset of Herndon’s voice, and whether this dataset was compiled specifically for the training of the Holly+ model or constitutes Herndon’s historical vocal data.
We turn now to our critical examination of people-, purpose- and data-centric values in Holly+ in relation to our matters of concern. The larger structural design of Holly+ as an artwork reflects values around collective interaction with identity, through the capacity to re-realise an audio recording through the consensual usage of Herndon’s vocal likeness. Here, we understand Holly+ as prioritising both people- and purpose-centric values, by serving as a model for establishing working processes for consensual and collective engagement with identity play [62]. We understand this concentration of collective engagement and explorative play with Holly+ as likewise reflecting principles of care: in ‘demystifying’ AI-tools through open-access play, and in Herndon’s attention to how future demand for voice models must be informed by a system of governance that is in the best interests of the voice-origin.
Further, the legal and financial structures that govern the verified usage of Holly+ reflect a concern for the people-, purpose- and data-centric values proposed by Carroll et al. Herndon’s vocal likeness is established as connected with their personhood6 and image as an artist, and Herndon is therefore implicated in any future utilisation of Holly+ in an artistic work. The DAO stewardship system appears to address these values by protecting the verified usage of Herndon’s vocal likeness and artistic legacy, whilst enabling collective participation in the formation of musical subcultures that would make use of an artist’s voice in a posthumous context [63]. We can understand this as establishing a prioritisation towards people (the stakeholders in Holly+); purpose (Herndon’s view of future voice model usage); and data (Herndon’s voice and artistic legacy as data).
However, when we further position the Holly+ DAO in relation to matters of fact and concern, we observe a conflict between people- and purpose-centric values. From a matters-of-fact position, Herndon is reclaiming ownership (of data; of voice; of their likeness) with NBHS, and enabling open access to their identity play. However, a matter of concern is that NBHS (and Herndon) are making very rigid decisions about how the model interacts with user-contributed material; the selection of what vocal material is added to the dataset the model is trained on; and how the model’s verified usage may be distributed (through deploying a Holly+ DAO). Here is where the conflict lies: between the open-access intentionality of Herndon and the closed-system development of the Holly+ architecture.
When we take additional lenses from Gray and Witt’s 5 feminist data ethics principles, we can further understand that the matters of concern in Holly+ become somewhat more tremulous, or ambiguous. By this, we mean that our matters of concern pertaining to Holly+ are troubled when examined through the 5 feminist principles formulated by Gray and Witt. This therefore requires our care and attention.
With regard to the first principle—equitable distribution of responsibility—although the structure of governance of Holly+ through the DAO aims to equitably divide decision-making power, we do not see this distribution as truly equitable. As previously discussed, the distribution of stewardship tokens is contingent on either a financial and/or labour investment in Herndon’s artistic work; a familial or network (by this we also presume a cultural capital) connection to Herndon; or bestowment of a token based on Herndon’s assessment of the recipient’s artistic merit or capacity. We argue that this limits access, by requiring technical capacity and familiarity with—and a secure financial position to invest in—cryptotech. On these grounds, it can be argued that these pathways to stewardship are in fact not entirely equitable.
The second principle—critical positionality—is more clearly addressed. Herndon has continually articulated their views on the future usage of voice modeling, and the inclusion of Web3 technologies to safeguard the legal interests of artists. The third principle—the centering of human(s) throughout the pipeline—is more difficult to discern. Naturally there is a centering of Herndon’s capacity to share their likeness and encourage collective and creative usage of their identity play. However, we were unable to ascertain specifically how a human-centered approach was applied with regard to data collection or system design. This invites further speculation as to how the valuation of collective play—an apparent concern addressed thematically in Holly+—may be more transparently reflected in the design of the model’s architecture and its interactivity.
The fourth principle—transparency and accountability—proves similarly problematic for us to assess. We observed a lack of transparency with regard to the particularities of the vocal model, and specifically the availability of the code. We assume this is withheld due to the commercial interests of Herndon and NBHS, yet this withholding necessitates clarification. It must not go unacknowledged that Holly+ is an artwork made in collaboration between 2 entities, one an artist and one an organisation, both with vested interests in preserving certain aspects of the code as intellectual property holding cultural and financial value.
When we further consider transparency and accountability, it is also not clear how these are factored in with regard to the Holly+ DAO. On one hand, there is transparency with regard to how the stewardship is implemented to govern verified usages of Holly+. On the other hand, we were unable to find any information regarding who specifically had been awarded DAO stewardship. This is an area of concern, as the transparency of this system is selectively visibilised and invisibilised, contrary to Gray and Witt’s proposed principle. There is also ambiguity as to the nature of accountability with regard to actions taken by DAO stewards, present and future. We have previously ruminated on the potential implications of stewards having to make decisions regarding usage of Herndon’s vocal likeness in a speculative future context with—potentially—markedly different cultural values from those of our present reality.
The fifth and final principle—diverse representation and participation—is perhaps the most ambiguous to determine. On the development end, Holly+ utilises libraries and frameworks which have presumably been constructed by a particular demographic [45], with shared or complementary technical skillsets. With regard to user engagement, its browser-hosting enables widespread, international participation. However, participation is contingent on access to a computer or smartphone and a reliable internet connection, which can be significant factors of exclusion.
Our critique reveals that AI-systems carry many residues: unintentional and intentional imprints from the datasets they have been trained on7; the algorithms they have been made with8; and the actors who then inherit or use these systems in varying applications. We see a critical need for these residues to be addressed, and we propose interdisciplinary feminist methods as a means to do so. This constitutes work in 3 categories: de-centralisation of management; preservation and protection of legacy; and the critical prioritisation of people-, process- or data-oriented principles.
With regard to de-centralisation, we see potential in exploring alternative structures that put ‘power’ back in the hands of the people accessing and making the artworks, rather than in the hands of a board of directors at a record label [64]. We foresee this matter of concern as a crucial area for future troubling of existing power structures within the music industry.
In terms of the preservation and protection of legacy, we have seen the deployment of Web3 legal and financial technologies troubling the current modus operandi for protecting artistic legacy. Through engagement with novel systems of artwork stewardship, we foresee future ‘unsettling of the waters’ in how artists can protect or distribute ownership and management of their data9 and artistic legacy. We foresee such engagement with AI technologies eliciting profound changes in the preservation and protection of an artist’s legacy.
A final10 matter of concern is the critical prioritisation of people-, process- or data-oriented principles. We have seen these navigated in Holly+ through the concern for the future of ethical voice model utilisation (people- and process-oriented) and the subsequent implementation of DAO governance. We propose 2 central questions that researchers may utilise as a preliminary step in their implementation of feminist and care-full methods in musical-AI design: ‘who and/or what is invisibilised?’; and conversely, ‘who and/or what is visibilised?’
This paper is an initial peek into implementations of feminism and care ethics in musical-AI. The scope of the matters of concern addressed in this paper is substantial, with important future work needed on further analysis of hegemonic power structures within the field of musical-AI, and on evaluating how barriers of access—linguistic, social and digital—manifest throughout the economy of AI. We further anticipate that the presentation of practical examples of conscious engagement with matters of fact, concern and care will form the basis of our future work in this regard.
Within this paper, we have opened and stepped into a critical space within which we have troubled the waters of Musical-AI. We have outlined existing research that engages with the implementation of feminist discourse, perspectives and methodologies across disciplines such as science and technology studies (STS), care ethics, and AI and data ethics. We have taken up a critical feminist lens on a musical-AI artwork—Holly+—and care-fully troubled the various dimensions within this artwork that we see as collective matters of concern across intersecting disciplines negotiating tensions around AI. Through our interdisciplinary analytical approach, we have revealed matters of concern which invite future troubling of power structures within the music industry. Further, resulting from our preliminary critique through a speculative feminist lens, we address the burning matters of concern within Musical-AI and outline potential directions for future work in troubling inherited tools, systems, methodologies and lore around artistic and musical AI use.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.