AIMC23 Abstract Submission
There has been an explosion of generative algorithms and tools in recent years; applications such as ChatGPT for generating text, Midjourney for generating images or AIVA for generating music have become very popular. As the years pass, we can see an ever-increasing number of generative algorithms and applications being released. Part of their appeal is that such applications are highly accessible to the general public and require little, if any, computational expertise. The ease with which people can generate creative artefacts, such as music, does come with some drawbacks, however. There are practical drawbacks to not being able to determine if an artefact was created by a human or not. If we are unable to make that determination, such methods may cause disruption and difficulties for amateur musicians, artists and creators, and for sites or publications that aim to highlight, publish or promote independent artists. Furthermore, many artists within creative domains have shown concern, and outright disdain, at the notion of generative algorithms being applied to creative tasks. Some of this anger may be attributed to the notion of the humanity of creativity – that machines cannot possibly generate things of genuine beauty, art or creativity.
This paper considers a number of practical and ethical questions that are pertinent to the new reality whereby algorithms can be easily used to generate creative artefacts: How can it be determined if an artefact was created by a person or an algorithm? Can an evaluation of the creativity within an artefact help determine its authorship? Is it necessary to ascertain human versus machine authorship, or should the quality of the output be the only issue that matters? What are the repercussions of a person attempting to pass off an artefact as human-generated when it was in fact created by an algorithm? While it may not be possible to definitively answer all of these questions at this time, it is vital to consider them, along with the impact and repercussions that such autonomous methods can have on creative authorship.
In early 2023, the world seemed obsessed with ChatGPT (ChatGPT, 2022). Those of us who work in Artificial Intelligence (AI) or keep up to date with advances in digital or software applications may have been very familiar with its predecessor GPT-3 or other large language models, but it is hard to deny that the interest around ChatGPT seemed different. People with little or no technical background sat up and took interest: AI can generate text indistinguishable from that of a human. More importantly, not only were the results impressive, but ChatGPT was also easy to use – requiring absolutely no specialist technical knowledge or coding skills.
As time progresses, and as we discuss below, the results might not in reality hold up to the initial hype, but the popularity of ChatGPT is not unexpected. For many years, AI scientists and developers have been considering how to make systems more human-like. Since Alan Turing proposed his eponymous test, many AI researchers have been obsessed with creating a machine that can fool or beat us (Turing, 1950). When a machine beats a human in skilled, creative games such as Chess or Go, this always makes headlines (Campbell, Hoane and Hsu, 2002; Silver et al., 2016). Even in creative domains such as music, many validations of autonomous systems focus on evaluations according to a human-like quality (Loughran and O’Neill, 2016). ‘This could have been produced by a human’ is the gold standard for artificially generated artefacts. The new appeal, or danger depending on your point of view, of systems such as ChatGPT is in their ease of use; systems are no longer merely being proposed academically but are easily accessible out in the real world, ready to confuse and confound teachers, reviewers and curators.
So what measures can, or should, we take to determine whether the author, musician or artist of a given artefact is made of blood, sweat and tears or ones and zeros? Is there even a need to be able to determine who or what has authored a given artefact or should the quality of the output be all that matters? Could we use some measure of creativity to determine if the artefact was generated by human or machine? This paper considers some of these questions, particularly for musically creative systems.
The following section briefly discusses a number of generative systems, with a focus on music. Sections 3 and 4 consider the risks and opportunities that could arise from AI creativity. Section 5 discusses evaluation and particularly looks at ways in which creativity has been evaluated in the past. Section 6 considers the outlook for AI applied to creativity before we draw some conclusions and propose future directions in Section 7.
ChatGPT may seem new and exciting, but many generative systems, even in creative domains, have been proposed and developed over a number of years. One of the biggest names in generative AI at the moment is OpenAI, the company behind ChatGPT and its predecessor GPT-3 (OpenAI, 2015). OpenAI is a research laboratory that is developing a range of products with an aim towards Artificial General Intelligence, including the image generator DALL-E and the speech-to-text system Whisper. OpenAI’s mission statement states that its aim is ‘to ensure that artificial general intelligence benefits all of humanity’. Whether the company still has this mission at its core, or whether it can be lived up to, is beyond the scope of this study, but there is no denying that OpenAI is at the front of the current generative AI revolution. Other companies are also working on generative systems, however. For instance, Midjourney is an independent lab that created a text-to-image generator of the same name, which generates images from text prompts entered in Discord (Midjourney, 2022). Hugging Face is an open-source AI community that supports a large range of AI applications with a strong focus on ethics (Hugging Face – The AI community building the future., 2016). But for the purpose of this submission, we will focus the remainder of this section on musical generators.
Arguably[1] music is no more special than any other creative domain. It is equally difficult to autonomously generate creative music as it is to generate creative prose but, for this audience, we will focus on music generation systems and how they have developed over the years. Music and computation have developed in harmony throughout that time. As far back as 1957, the first musical score generated by a computer, the Illiac Suite, was produced (Hiller and Isaacson, 1958). Shortly thereafter, in 1962, the IBM 704 at Bell Labs performed ‘Daisy Bell’ (or ‘A Bicycle Built for Two’) as the first demonstration of computer speech synthesis (Chafe, 2013). The use of musical melody, which was superfluous to the needs of the demonstration, illustrates that those developing such machines, over half a century ago, did see aesthetics, beauty and creativity within them.
It is then no surprise that both academic systems and commercial applications have been developed to autonomously generate and compose music. It must be accepted, however, that ‘music composition’ is not a simple task, nor even is it, by itself, a well-defined task. What exactly is your proposed system aiming to do? Are you looking to create a single melody, to harmonise a part or to create chord progressions? The style of music you wish to create will also play a huge part in the design of the system – do you wish to create orchestral music, vocals, a blues trio or maybe hard-core electronica? It should be clear, quite quickly, that the task of ‘creating music’ is neither simple nor easily defined. It is possibly because of the breadth and scope of this task that so many systems and applications have been proposed, both academically and commercially. In recent years various systems for music production and composition have been proposed using computational methods such as Artificial Neural Networks (ANNs) or Evolutionary Computation (Loughran and O’Neill, 2020). A full review of such systems is beyond the scope of this paper, but a comprehensive review can be found in (Tatar and Pasquier, 2018).
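To make concrete just how wide this spectrum is, the simplest end of it can be sketched in a few lines of Python: a first-order Markov chain that learns note-to-note transitions from a melody and then random-walks them to produce a new sequence. The training melody and note names below are invented purely for illustration; the systems surveyed in (Tatar and Pasquier, 2018) are of course far more sophisticated.

```python
import random
from collections import defaultdict

def train_markov(melody):
    """Record first-order note-to-note transitions from a training melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk the transition table to produce a new note sequence."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        # If we hit a note with no recorded successor, restart from the opening note.
        options = transitions.get(melody[-1]) or [start]
        melody.append(rng.choice(options))
    return melody

# A toy training melody; the note names are illustrative, not a real tune.
corpus = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
table = train_markov(corpus)
print(generate(table, "C4", 8, seed=1))
```

Even this toy produces output that is ‘in the style of’ its training data, which is precisely why generating music and generating it *creatively* are such different claims.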
In addition to academic systems, user-friendly commercial applications for generating music have been flooding the market in recent years. AIVA defines itself as ‘The AI composing emotional soundtrack music’ (AIVA, 2016). AIVA assists a composer to either compose within one of a number of preset styles or to upload their own MIDI file, which will then influence the new original score. AIVA can be used for free if the user is happy to leave any credit with the platform, or for €33 per month to have full copyright of anything created using it. Soundful is a music generation app that can easily create background music in a variety of moods and genres (Soundful, 2023). Again, it starts with a free package but is customisable for those who wish to own all copyrights. Those who have more programming skill and prefer to see ‘under the hood’ may prefer to get into development using a platform like Magenta (Magenta, 2022). Magenta runs on TensorFlow and requires Python programming and some level of ML knowledge to get started. However, Magenta has now also released Magenta Studio, which comprises a series of plugins that work directly with Ableton Live without the need for coding.
There are many more similar products on the market (Boomy, 2023; InfiniteAlbum, 2023). In recent years, music generation applications have become increasingly popular. Similar to ChatGPT, a lot of the draw of these applications is that they require little, if any, computational knowledge. For many of them, in fact, you require little, if any, formal musical knowledge either. While this makes such systems fun to use, the lack of a priori knowledge, education, ability or ‘talent’ required to engage with them is a point of contention for many musicians and composers.
Many people are content with the idea of artificial intelligence, yet artificial creativity can cause more unease for the general public. Creative processes and artefacts, such as music, can feel personal and special – somehow inherently more ‘human’ than other more generic or computational tasks. This can lead to the idea that an AI could never actually generate something creative or exhibit true creativity. There is a comfort in thinking that an algorithm or machine could never achieve this. Furthermore, exhibiting creativity implies a sense of personality, individualism or even embodiment. We act creatively through our mind, rather than our brain, which opens up the questions of what is in our minds and how we think (Hofstadter, 1979).
However, it is important to realise that just because we cannot understand how an AI could be creative, this does not mean that it will not or cannot come to pass. Just because we think we cannot enumerate creativity doesn’t mean we should stop trying. Marvin Minsky acknowledged this lack of foresight in humans over 40 years ago in his essay, where he stated: ‘There’s a big difference between “impossible” and “hard to imagine.” The first is about it; the second is about you!’ (Minsky, 1983). Creativity is a difficult quality to define, but it is also merely an aspect of intelligence (Boden, 1998). If, as is evident from research since the 1950s, we can work towards algorithms exhibiting intelligence, then we cannot rule out the idea that they could be truly creative, just because it makes us feel better.
If AI systems really can impersonate humans through generating artefacts, what are the most imminent risks or ethical issues to be aware of?
In relation to text generators such as ChatGPT, plagiarism – particularly from students – immediately emerged as one of the main concerns. Generative systems, by their very design, remove a lot of the effort, skill and work that is involved in creating the final artefact. Therefore, if this artefact is to be valued, such as through an academic assignment, there is a real danger that students could misuse such tools to fool teachers and examiners. The risks involved are both for the students and the institution: faking their way through an educational process will leave a student undeserving of any award and unqualified for any position or role dependent on that award. Institutions that start to deliver unqualified or low-standard graduates will have their reputations tarnished. Peer-review processes and reviewers will also need to be aware of the dangers of such plagiarism. There is a role for ChatGPT within education, but the ethical pitfalls and dangers must be considered (Mhlanga, 2023).
Artists, like all of us, do not like the idea of being replaced. More than that, many artists feel passionately that their art cannot be automated, and that it is offensive to even try. A recent conversation in the traditional-music repository community The Session became quite heated when it was announced that someone had made a submission that was generated by an AI (‘Tunes’ composed by AI, 2023). The submission was removed and it was stated that AI-composed tunes were not to be submitted. But the discussion around the lack of merit within AI-composed tunes continued.
We know that AI models learn from data presented to them – but who is to ensure that the data they are learning from has been accessed with full consent and permission from the original artist? Many artists have been speaking out about this and now, as of July 2023, US comedian Sarah Silverman is suing OpenAI on the grounds that ChatGPT was trained on her work without permission in what could be a landmark case for such claims (Milmo, 2023).
There are only a few well-known communities or avenues that have the capacity to support new and upcoming artists. Often such communities are under-staffed or under-funded and do not have the capacity to deal with the extra workload of determining if a submission was made by a human or not. For instance, on February 20th 2023 the popular sci-fi magazine Clarkesworld announced via social media that it had to close submissions as it could not deal with the recent influx of submissions generated by ChatGPT (Clarke, 2006; Clarkesworld, 2023). If these avenues are closed to artists, there is a real danger to the incubation and support of up-and-coming talent that needs this early-stage support to gain the critical momentum required to achieve success. If we dampen such opportunities – how many future writers, poets or musicians could we miss?
Historically, any automation has had a negative effect on certain job markets. Industrialisation reduced jobs for labourers, and the current AI boom has created concerns around a reduction in jobs in a variety of workforces. The societal impact of the development of any new technology must be considered, and we cannot naively fail to acknowledge that autonomous creative systems will have negative effects. Artists and musicians have some of the most unstable career paths among all disciplines; many rely on commissions for individual works. There is also a huge difference in the level of pay, security and success across professional musicians. As a less conservative career path, the employment of artistic creators is fraught with uncertainty; the plight of the ‘starving artist’ is not to be underestimated. And it is likely that the negative impact on work opportunities will be felt by those who are most in need of the work. While we can be confident that Hollywood composers will not be replaced by AI counterparts, how confident can lesser-known composers and artists be? There will be an unequal survival probability across artists, based on level of establishment and connection rather than on any level of merit.
Algorithms, particularly those that are data-driven, are highly susceptible to bias; any bias within the dataset is likely to be propagated through the system. Even beyond the data, any autonomous decision or judgment made within the system may be susceptible to bias. There has been much academic research into the types of bias that such a system may exhibit (Mehrabi et al., 2021), along with methods developed to mitigate such biases (Bellamy et al., 2019). We must, therefore, be aware that the creative systems discussed here are also likely susceptible to bias. Bias, like creativity, can be a difficult concept to define or measure, but the potential dangers of undetected detrimental biases in an autonomous creative system must be considered. A recent article in MIT Technology Review displayed the vulgarity that biases in ‘fun’ creative systems can easily exhibit, as a female journalist found when using the new AI avatar app Lensa (Heikkilä, 2022). While Lensa created fun, interesting avatars for her colleagues, Melissa Heikkilä was disturbed to find that it focused not just on the fact that she is female, but also on her Asian heritage, and produced an alarming number of overtly sexualised images.
It is quite evident how text generation systems such as ChatGPT could be used in nefarious ways to imitate another person, but other creative applications could be used in a similarly dangerous and illegal manner. The dangers of deepfakes being used to impersonate people for harmful applications, particularly involving pornography, are well known (Westerlund, 2019). In the age of ‘fake news’, where we know media can be manipulated easily, how can we even trust that any new artefact has indeed been produced by the person who claims to have produced it, or with the intention with which we are receiving it?
The previous section may portend doom and gloom, but can we also see an upside to the continued aspiration towards artificial creativity?
Labour costs money. The counterpoint to the above issue of a reduction in the workforce is that small companies, startups and individuals can now generate or access creative content that would never be affordable to them if they had to employ an artist. For instance, an independent game developer who once may have had to hire a composer to generate music can now create generative, responsive music to accompany a game from tools such as those discussed in Section 2.
Creativity is an aspect of intelligence. However, it is an interesting aspect that can help us learn more about the scope and breadth of intelligence. Many critical essays and books from leading AI researchers have considered creativity as an important part of the understanding of the capabilities of AI (Minsky, 1983; Boden, 1998). When looking to understand our own intelligence, and hence looking at how to replicate it artificially, it is vital to look beyond the mundane and consider what it means to be intelligent. A study into Computational Creativity can only help broaden the scope, meaning and impact of AI.
Musical training is a privilege and luxury that is not available to all. Many people do not have the opportunity to be formally trained in music, but that does not mean they lack the ability to pursue it. Disadvantaged individuals or groups are less likely to have access to formal music training or instruments, but these low-cost, easy-to-use apps are equally accessible to all.
As stated above, generative systems can allow those who are not formally educated in music to create new music. We have listed this as a danger, but there is also opportunity from the openness of such systems. Some people, through no social or financial disadvantage, simply do not study music throughout their lives and hence potential hidden talents never come to light. With such easy access to these generative systems, with minimal training or practice required to get started, users are free to start creating music almost immediately. Any dormant or unchecked talent is free to be expressed and shared.
What if, instead of using generative systems to create a final artefact, we merely used them as ways to generate ideas? Creative people can get stuck in slumps, equivalent to ‘writer’s block’. What if these tools could be used to change your focus, get you out of a block and steer your creativity towards new ideas? Such an approach has been proposed by David Holz, the creator of Midjourney, who has stated that a lot of graphic artists use Midjourney as part of the artistic process (Claburn, 2022).
What if the goal of the creator of the system is not to mimic or replace a human but simply to create the best AI system? If we take the subterfuge out of the problem, the challenge of creating the ‘best’ AI for such tasks becomes a lot more palatable. Competitions of this type have become popular in recent years, such as the AI Song Contest and the AI Music Generation Challenge (Sturm and Maruri-Aguilar, 2021; AI Song Contest 2022, 2022). If the use of AI is upfront and in fact part of the problem being assessed, then the issue of ‘pretence’ is lifted. Such challenges offer a way to better understand music and better understand the workings of AI.
Never dismiss the appeal in creating something new and exciting. As long as people enjoy creating with AI and seeing the results, there will be a market for these tools.
Many of the problems and opportunities listed above are dependent on whether a system’s output is artificially or human generated. It is the untruthful or deviant potential of these systems that poses the risks to artists and users. It must be acknowledged, however seemingly evident, that such deviance comes from the users of the systems and cannot be inherent in the systems themselves; there are no ‘bad’ systems, just bad people. Nevertheless, determining a way to accurately and confidently ascertain whether an artefact was generated by a machine or a person is the best way to validate the outcome and mitigate the risks posed above.
There are a number of applications already developed to identify ChatGPT plagiarism, such as GPTZero (GPTZero, 2022). OpenAI themselves are even developing an AI classifier for indicating AI-written text (Kirchner et al., 2023). However, the limitations of such methods are clear, as their accuracy is hard to establish, and some contemporary studies indicate an alarming potential for bias within such detectors (Liang et al., 2023). If we were to consider plagiarism in other domains, such as music, how might we approach an autonomous and fair method of accurately determining what is human versus machine generated?
We have established that creativity is a difficult concept to define. That which is difficult to define will naturally be difficult to measure or evaluate objectively. The difficulty in evaluating creativity has led to a lack of evaluation among proposed Computational Creativity (CC) systems (Jordanous, 2012). The evaluation of the creativity of a CC system lies in determining whether or not the system is truly creative or ‘merely’ generative. Just because a system generates an artefact in a creative domain does not mean that this system is actually exhibiting creativity. For instance, we can trivially validate that a music system generates music – this is evident from the output. We could then validate this music by checking some quality, such as that it performs in the correct style or correctly follows musical rules. We could use a human as the evaluator through a series of listening tests, but again we need to be careful to specify what the human is evaluating or the results could be too subjective (Loughran and O’Neill, 2016). Whether the evaluation is performed by a human or through a series of statistical or pre-determined measurements, it is imperative that it is the creativity of the system that is being measured.
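The kind of rule-based check described above – validating that output ‘correctly follows musical rules’ – is trivial to mechanise, which underlines why passing it says nothing about creativity. As a minimal sketch (the melody and MIDI note numbers are invented for illustration), a test that every note of a generated line belongs to C major:

```python
# Pitch classes of the C major scale (0 = C, 2 = D, ..., 11 = B).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def in_key(midi_notes, scale=C_MAJOR):
    """Return True if every MIDI note's pitch class lies in the given scale."""
    return all(note % 12 in scale for note in midi_notes)

melody = [60, 62, 64, 65, 67]   # C D E F G: all diatonic
print(in_key(melody))           # True
print(in_key(melody + [61]))    # the added C# breaks the key -> False
```

A random diatonic note generator passes this check perfectly, yet few would call it creative; the check validates generation, not creativity.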
Despite the difficulties in defining, and hence evaluating, the creativity of autonomous systems, a number of methods have been proposed. One of the most renowned authors on creativity, Margaret Boden, has stated that in order for creativity to be present a system must show novelty and have value (Boden, 1998). This has formed the basis of a number of proposed approaches to evaluating the creativity of a system, including a framework of 19 criteria proposed by Ritchie (Ritchie, 2007). Colton developed a framework to distinguish between creativity and the perception of creativity based on skill, appreciation and imagination in the Creative Tripod (Colton, 2008). Jordanous set out a Standardised Procedure for Evaluating Creative Systems (SPECS) in response to a lack of rigour in creative evaluations and a lack of clarity as to what should be involved in evaluating a creative system (Jordanous, 2012). Jordanous identified 14 distinct components that act as building blocks for creativity and proposed three steps to evaluate your system. These steps hinge upon the developer identifying the definition of creativity that a system should satisfy to be deemed creative. Hence we return to the difficulties inherent in the definition and our understanding of what it means to be creative.
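Boden’s two conditions of novelty and value can be caricatured in code, which also exposes how much is left to the evaluator’s choice of measure. In this sketch (the corpus, candidate and both measures are invented purely for illustration), novelty is the candidate’s set-overlap distance from its nearest corpus piece, and value is the fraction of notes satisfying a simple stylistic rule:

```python
def novelty(artefact, corpus):
    """Novelty: 1 minus the best Jaccard overlap with any corpus piece.
    A deliberately crude proxy - real measures are far more contested."""
    def overlap(a, b):
        return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
    return 1.0 - max(overlap(artefact, piece) for piece in corpus)

def value(artefact, rule):
    """Value: fraction of the artefact's elements satisfying a domain rule."""
    return sum(rule(x) for x in artefact) / len(artefact)

corpus = [[60, 62, 64], [64, 65, 67]]          # toy reference pieces (MIDI)
candidate = [60, 64, 69]                       # toy generated artefact
nov = novelty(candidate, corpus)
val = value(candidate, lambda n: n % 12 in {0, 2, 4, 5, 7, 9, 11})
print(round(nov, 2), round(val, 2))            # -> 0.5 1.0
```

The numbers themselves are meaningless until the evaluator justifies the measures – which is precisely the gap that frameworks such as Ritchie’s criteria and SPECS attempt to close.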
Bringsjord et al. proposed the Lovelace Test (LT) for creativity (Bringsjord, Bello and Ferrucci, 1995). The authors named this test after the ‘Lovelace Questions’, in honour of Ada Lovelace (Boden, 2004). The LT is simple: given an artificial agent A, its output o and its human architect H, the test is passed if H cannot explain how A produced o. What is interesting is that it is the human designer of the system that acts as the evaluator, and not an external third party. This does still hinge on what is meant by ‘explain’, however. We like to think that the programmer should be able to explain their own reasoning in the design of their system; therefore the test can only be passed if the system does something, within the operating remit of the designed system, that the architect cannot explain.
The number and variety of evaluation methods and processes proposed show that there is no single best way to evaluate creativity. Ritchie’s criteria are weighted between the quality of the output and what was expected; the Creative Tripod considers the difference between actual creativity and the appearance of creativity; the LT is dependent on the opinion (or confusion?) of the designer; and the SPECS model allows the evaluator to define how to evaluate creativity in the first place. Hence evaluation is more about the system as a whole than the mere output. This aligns with the third quality that is often quoted as being necessary for creativity to be present – that of intent. If we can determine the intent behind a creative system, we may be able to determine who is behind it – who wished for this output to be generated.
The methods above are used as ways to evaluate the creativity of an autonomous system. Such systems are known to be autonomous – their authors and creators are not alluding to anything else and are generally quite open about their architecture and workings. It is best practice in academia to be open and reproducible; indeed, it is increasingly becoming a requirement. But is there anything in the method of looking for this autonomous creativity that we could bring to evaluate or determine, to any degree of certainty, whether an artefact purporting to have been created by a human has actually been algorithmically generated? In a sense, this turns the evaluation on its head, but there is a lot of well-founded, established work in this area that could be considered for applications such as this.
This new wave of AI will certainly have an effect on our lives, but it is not the first time that a new technology has appeared so exciting. Many new technologies have been developed over time with huge effects on our daily lives. Will AI really be more impactful than mobile phones, the internet or the modern printing press? Only time will tell, but it is worth asking these questions now. In reviewing the risks and opportunities in Sections 3 and 4, one must be struck by the conflicting nature of these arguments; many aspects of these new systems can be seen as both a risk or an opportunity, depending on your frame of mind. There are some people who are very positive about the future of AI and some who are very negative, and of course there are many undecided. For instance:
· AI optimists are confident that AI will be of benefit to all human-kind. Their optimism often leads to the belief that AI will be able to solve the world's problems; we’ll be in driver-less cars by next year;
· AI enthusiasts are also confident in the future applications of AI, but more than that they are actively looking at all new AI applications, gadgets, tools and toys that are released;
· AI sceptics may (or may not) be aware of the current trends and advances in AI, but are sceptical in relation to how quickly they will develop, the level of impact they will have, or the breadth and scope that AI can achieve. Such sceptics may not believe that artificial general intelligence (AGI) is achievable;
· AI doomsayers are more than sceptical, they have concerns that AI, or particularly AGI, could have severe or catastrophic consequences on human life;
· AI defeatists may have once objected to AI, but are now resigned to it. After all, you can’t stand in the way of progress.
The above does not aim to trivialise the current outlook on the future of AI, but to highlight that opinions on the role of AI may differ depending on one’s point of view or agenda. In reality, most of the general public probably doesn’t give AI much thought on a daily basis. But those who work with or develop autonomous systems have an obligation to check any assumptions they may have in the work that they are performing. For those who work in creative AI applications this involves considering the risks that are posed in Section 3 above.
One must always be aware of how quickly this technology changes and, in particular, how quickly applications can be bought or lost to the world just as new ones emerge. The economic world of AI music start-ups is fraught with uncertainty, and apps that were very popular only a few years ago have either been swallowed by larger entities or disappeared altogether. In contrast to this, on 14 March 2023 OpenAI announced the release of its successor to ChatGPT, GPT-4, a multi-modal model that accepts both image and text prompts. We are sure that this will cause even more of a stir than ChatGPT.
‘Creativity is Intelligence having fun.’ – a quote that is generally, albeit questionably[2], attributed to Albert Einstein. But there is indeed a great sense of fun and exploration around these new accessible AI techniques when they are applied to creative domains such as music. This paper discussed a number of risks and opportunities that should be considered in the application of AI techniques within creative domains. We considered contemporary generative systems such as those currently on offer through OpenAI, and looked more specifically at academic and commercial systems for generating music. We considered risks and opportunities from developing creative AI and determined that some risks could be seen as opportunities, depending on your point of view. The biggest risk associated with these systems is in not being able to determine whether or not an output was actually generated by a human, when it is claimed to be so. To this end, we reviewed some evaluation techniques used in CC research and propose that these methods could be used to develop a new method of determining the authorship of creative artefacts.
It is clear at this stage that AI is here to stay. Whether that is set to be a good or bad thing is really dependent on those who develop the technologies. In reality, it is unlikely to be universally good or bad, but to have good and bad impacts. Developers should always be aware of the ethical issues that the development of their algorithms may pose. It is the responsibility of developers, and particularly BigTech, to maintain responsible engineering practices and ensure their products Do No Harm. At the end of the day, products can always be misused, so we must be cognizant of this possibility and develop methods to mitigate this misuse, where possible.
As a final thought on whether an AI can be Creative, I decided to ask the AI in question what it thinks[3]:
Q: Can an algorithm be creative?
ChatGPT: ‘As an AI language model, I can say that algorithms can generate outputs that can be considered creative in the sense that they can produce novel and surprising results. However, the creativity of an algorithm is limited by the data and rules that it has been trained on, and it is ultimately a result of its programming and data inputs. While AI can produce outputs that resemble creative works, it lacks the human capacity for original thought, intentionality, and the ability to express emotions, which are essential aspects of creativity as we understand it. So, while AI can simulate creativity, it cannot truly replicate or replace the creativity of human beings.’
So, in its own words, no.
This work is supported by Dundalk Institute of Technology (DkIT).
AI Song Contest 2022 (2022). Available at: https://www.aisongcontest.com/ (Accessed: 14 March 2023).
AIVA (2016) AIVA - The AI composing emotional soundtrack music. Available at: https://www.aiva.ai/ (Accessed: 13 March 2023).
Bellamy, R. K. E. et al. (2019) ‘AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias’, IBM Journal of Research and Development, 63(4–5). doi: 10.1147/JRD.2019.2942287.
Boden, M. A. (1998) ‘Creativity and artificial intelligence’, Artificial Intelligence, 103(1–2), pp. 347–356. doi: 10.1016/s0004-3702(98)00055-1.
Boden, M. A. (2004) The Creative Mind: Myths and Mechanisms. Psychology Press.
Boomy (2023) Boomy - Make Generative Music with Artificial Intelligence. Available at: https://boomy.com/ (Accessed: 12 July 2023).
Bringsjord, S., Bello, P. and Ferrucci, D. (1995) ‘Creativity, the Turing test, and the (better) Lovelace test’, ACM SIGAda Ada Letters, pp. 57–66. doi: 10.1145/216578.216582.
Campbell, M., Hoane, A. J. and Hsu, F. H. (2002) ‘Deep Blue’, Artificial Intelligence, 134(1–2), pp. 57–83. doi: 10.1016/S0004-3702(01)00129-1.
Chafe, C. (2013) ‘Glitch Free FM Vocal Synthesis’, Center for Computer Research in Music and Acoustics, Stanford University.
ChatGPT (2022). Available at: https://chat.openai.com/chat (Accessed: 14 March 2023).
Claburn, T. (2022) Holz, founder of AI art service Midjourney, on future images, The Register. Available at: https://www.theregister.com/2022/08/01/david_holz_midjourney/ (Accessed: 14 March 2023).
Clarke, N. (2006) Clarkesworld Magazine – Science Fiction & Fantasy. Available at: https://clarkesworldmagazine.com/ (Accessed: 14 March 2023).
Clarkesworld (2023) Twitter. Available at: https://twitter.com/clarkesworld/status/1627711728245960704 (Accessed: 14 March 2023).
Colton, S. (2008) ‘Creativity Versus the Perception of Creativity in Computational Systems’, AAAI Spring Symposium: Creative Intelligent Systems, pp. 14–20. Available at: www.aaai.org (Accessed: 14 March 2023).
GPTZero (2022). Available at: https://gptzero.me/ (Accessed: 14 March 2023).
Heikkilä, M. (2022) The viral AI avatar app Lensa undressed me—without my consent, MIT Technology Review. Available at: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ (Accessed: 14 March 2023).
Hiller, L. A. Jr. and Isaacson, L. M. (1958) ‘Musical Composition with a High-Speed Digital Computer’, Journal of the Audio Engineering Society, 6(3), pp. 154–160.
Hofstadter, D. (1979) Gödel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books.
Hugging Face – The AI community building the future. (2016). Available at: https://huggingface.co/ (Accessed: 14 March 2023).
InfiniteAlbum (2023) INFINITE ALBUM. Available at: https://www.infinitealbum.io/ (Accessed: 12 July 2023).
Jordanous, A. (2012) ‘A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative’, Cognitive Computation, 4(3), pp. 246–279. doi: 10.1007/s12559-012-9156-1.
Kirchner, J. H. et al. (2023) New AI classifier for indicating AI-written text, OpenAI. Available at: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text (Accessed: 14 March 2023).
Liang, W. et al. (2023) GPT detectors are biased against non-native English writers, arXiv.
Loughran, R. and O’Neill, M. (2016) ‘Generative Music Evaluation: Why do We Limit to ’Human’?’, in Conference on Computer Simulation of Musical Creativity. Huddersfield. Available at: https://www.researchgate.net/publication/304284746 (Accessed: 12 April 2022).
Loughran, R. and O’Neill, M. (2020) ‘Evolutionary music: applying evolutionary computation to the art of creating music’, Genetic Programming and Evolvable Machines, 21(1–2). doi: 10.1007/s10710-020-09380-7.
Magenta (2022). Available at: https://magenta.tensorflow.org/ (Accessed: 13 March 2023).
Mehrabi, N. et al. (2021) ‘A Survey on Bias and Fairness in Machine Learning’, ACM Computing Surveys. Association for Computing Machinery. doi: 10.1145/3457607.
Mhlanga, D. (2023) ‘Open AI in Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning’, SSRN Electronic Journal. doi: 10.2139/ssrn.4354422.
Midjourney (2022). Available at: https://www.midjourney.com/home/?callbackUrl=%2Fapp%2F (Accessed: 13 March 2023).
Milmo, D. (2023) Sarah Silverman sues OpenAI and Meta claiming AI training infringed copyright, The Guardian. Available at: https://www.theguardian.com/technology/2023/jul/10/sarah-silverman-sues-openai-meta-copyright-infringement (Accessed: 12 July 2023).
Minsky, M. (1983) ‘Why People Think Computers Can’t’, Technology Review, 86(6), pp. 3–15. doi: 10.1016/0198-0254(84)93647-1.
OpenAI (2015). Available at: https://openai.com/ (Accessed: 14 March 2023).
Ritchie, G. (2007) ‘Some empirical criteria for attributing creativity to a computer program’, Minds and Machines, 17(1), pp. 67–99. doi: 10.1007/s11023-007-9066-2.
Silver, D. et al. (2016) ‘Mastering the game of Go with deep neural networks and tree search’, Nature, 529(7587), pp. 484–489. doi: 10.1038/nature16961.
Soundful (2023) AI Music Generator - Royalty Free Music For Creators | Soundful. Available at: https://soundful.com/ (Accessed: 12 July 2023).
Sturm, B. L. T. and Maruri-Aguilar, H. (2021) ‘The Ai Music Generation Challenge 2020: Double Jigs in the Style of O’Neill’s 1001’, Journal of Creative Music Systems, 5(1). doi: 10.5920/JCMS.950.
Tatar, K. and Pasquier, P. (2018) ‘Musical agents: A typology and state of the art towards Musical Metacreation’, Journal of New Music Research, 48(1), pp. 56–105. doi: 10.1080/09298215.2018.1511736.
‘Tunes’ composed by AI (2023) The Session. Available at: https://thesession.org/discussions/47876 (Accessed: 14 March 2023).
Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, 59(236), pp. 433–460.
Westerlund, M. (2019) ‘The Emergence of Deepfake Technology: A Review’, Technology Innovation Management Review, 9(11), pp. 39–52. doi: 10.22215/timreview/1282.
[1] We are open to this argument.
[2] https://quoteinvestigator.com/2017/03/02/fun/#:~:text=The%20line%2C%20sometimes%20misattributed%20to,words%20he%20wrote%20in%201984.
[3] Asked at https://chat.openai.com/chat on 13/03/23.