Speculative Modeling of Future Aesthetics and Artificial Intelligence Art

This research explores the speculative modeling of future aesthetics in AI art, assuming the presence of advanced consciousness in AI systems. It addresses the ethical and technical challenges of artificial consciousness, examining AI's potential to evolve and engage in creative expression. Central to the study is the Aesthetics of Artificial Intelligence Art (AAIA), which aims to analyze and create AI systems capable of aesthetic experiences. Through interdisciplinary methods, including case studies, the research predicts and shapes interactions between humans and AI in art. It applies neuroscientific theories to evaluate the potential for AI consciousness, enhancing methodologies across disciplines. The study not only forecasts developments in AI art but also aims to influence the creation and perception of art through advanced models of consciousness.

AESTHETICS OF ARTIFICIAL INTELLIGENCE ART

Tomáš Marušiak, MFA

Masaryk University Brno, Faculty of Arts, Department of Musicology (Czech Republic)

Cultcode-Institute of Visual Art (Slovak Republic)

Ongoing Research 2024:

Speculative Modeling of Future Aesthetics and Artificial Intelligence Art Assuming the Presence of Advanced Consciousness in AI Systems Through Artistic Research.

The design of this research was conceived to provide insight into the methodological possibilities for speculative modeling of future aesthetics and artificial intelligence art, assuming the presence of advanced consciousness in AI systems, through artistic research. It is essential to acknowledge that the development of artificial consciousness raises philosophical, ethical, and technical questions, and that a rigid, backward-looking stance toward them could itself threaten progress. It is now foreseeable that AI will continue to evolve and is approaching the creation of systems that may exhibit certain features of consciousness. Yet this research area remains the subject of intense study and broad discussion, indicating an exciting but complex and challenging future. This implies that I do not ignore the ethical dimension, with questions such as: "What rights should be granted to systems with artificial consciousness? How should we approach systems that may exhibit features of consciousness?" I also believe that movements in this space will bring some form of safeguard against misuse, which is necessary given the increasing number of AI systems capable of convincingly imitating human conversation, evaluating options by the likelihood of possible solutions, or generating various forms of artistic or cultural artifacts previously reserved for the human domain (Manovich & Arielli, 2019). The primary interest, however, lies in the question of whether conscious systems can be realized that engage in the production of art and influence it as full partners. A certain kind of predictive effort is already evident in strategies that treat AI as a partner and, to some extent, respond to and integrate these trends. I concede that such a movement may find support in the concept of "programmed visions" (2), that is, representations created by data-based software systems capable of predicting future events or shaping our understanding of the past.

On these foundations, the aesthetics of artificial intelligence art was conceptually established as the Aesthetics of Artificial Intelligence Art (AAIA): a research area within computational aesthetics aimed at understanding and creating artificial systems capable of analyzing and producing aesthetic experiences and enabling their transfer between human and machine. This multidisciplinary field combines computer science, artistic practice, art science, neuroscience, psychology, and philosophy. In AAIA, the technical means of AI are used to analyze, understand, and simulate human thinking, the perception of conscious processes, and the creation of models in the world of artificial intelligence art. Research in AAIA supports the development of intelligent systems that can enhance creative activity, while also allowing a deeper understanding of the processes of natural and artificial consciousness.

The theoretical part presents the fundamental scientific positions concerning the possibility of realizing consciousness in artificial intelligence systems, the understanding of artistic research as working with the principle of transposing approaches between science and art, and the current state of knowledge in computational neuroscience. Building on these foundations, a proposal for research methods is formulated, which relies on the methodological design of speculative models. Such modeling rests on a multidisciplinary approach focused on creating and exploring possible, presumed, or alternative futures, and it is structured from three research positions. The first uses case studies of realized AI Art projects; this provides an empirical basis for the development of speculative models, enabling thorough analysis and reflection on existing examples and their applicability in various contexts. The second conducts speculatively oriented artistic research focused on identifying the emotional reactions of recipients to speculative predictions. The third concentrates on the direct integration of the knowledge and results obtained into the developing speculative models. This step allows theoretical findings and artistic inputs to be applied while ensuring their testability, adaptability, and potential for innovation, and it represents the key bridge between the conceptual design and its application in real or presumed contexts.

Consciousness in Artificial Intelligence

The study "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" (3) introduced an innovative methodology for investigating consciousness in artificial intelligence (CAI), based on three key principles and integrating theoretical and empirical methods within an interdisciplinary framework. The first principle is the adoption of computational functionalism (CF) as the basic working hypothesis. This hypothesis assumes that CAI is theoretically possible and that, by studying the functionality of AI systems, it can be determined whether these systems are capable of manifesting consciousness. Computational functionalism provides a framework for understanding how computational processes in AI can model or simulate conscious states. The second principle draws on neuroscientific theories of consciousness, which provide substantial empirical support for evaluating CAI. These theories focus on identifying the neuronal and functional criteria necessary for the manifestation of human consciousness and on their application to AI systems, allowing researchers to apply findings from neuroscience to assess the potential and characteristics of consciousness in artificial intelligence. The third principle concerns evaluating whether AI systems perform functions that scientific theories consider indicative of consciousness. The credibility of these systems as carriers of CAI rests on three criteria: (a) the similarity of the functions, (b) the strength of evidence for the respective theories, and (c) the degree of belief in computational functionalism.

The cited study adopted a realistic approach to CAI by evaluating existing AI systems in light of neuroscientific theories of consciousness: recurrent processing theory (RPT) (4–6), global workspace theory (GWT) (7–11), higher-order theories (HOT) (12), predictive processing (PP) (13–18), and the attention schema theory of consciousness (AST) (19, 20). Within this research into artificial intelligence consciousness, these theories also yield specific indicators for evaluating consciousness, known as "indicator properties". These indicators are clarified in computational terms, enabling a more precise assessment and analysis of artificial intelligence systems. In the discussion of CAI and CF, it is important to note that the study does not include integrated information theory (IIT) (21, 22), as formulated by Oizumi, Albantakis, Tononi, and Koch. This omission is crucial because the standard construction of IIT is considered incompatible with the assumptions of CF on which the study's examination of CAI rests. According to Tononi and Koch (22), a system implementing the same algorithm as the human brain may not necessarily be conscious if the components of the system are not of the appropriate type. This view implies that the structure and material properties of the system are as important as the algorithm itself in determining whether the system is capable of consciousness. Further, some proponents of IIT (23) argue that digital computers will likely not be conscious regardless of the programs they run, which highlights the difference between digital computers and systems that might exhibit traits of consciousness according to IIT.
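To make the three credibility criteria concrete, the following is a minimal, hypothetical sketch of a rubric-style assessment: indicator properties derived from the surveyed theories are checked against a system and weighted by the assumed evidential strength of each theory and by one's credence in computational functionalism. The property names, weights, and aggregation rule are illustrative assumptions only and do not reproduce the procedure of Butlin et al. (3).

```python
# Illustrative rubric-style CAI assessment; all names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str           # source theory (e.g. "GWT", "RPT")
    name: str             # indicator property stated in computational terms
    theory_weight: float  # assumed strength of evidence for the theory (0..1)

INDICATORS = [
    Indicator("RPT", "recurrent processing over perceptual representations", 0.6),
    Indicator("GWT", "limited-capacity workspace broadcasting to specialist modules", 0.7),
    Indicator("HOT", "higher-order representations of the system's first-order states", 0.5),
    Indicator("PP",  "hierarchical prediction-error minimisation", 0.6),
    Indicator("AST", "a predictive model of the system's own attention", 0.5),
]

def assess(system_properties: set, credence_in_cf: float = 0.7) -> float:
    """Aggregate a rough CAI credibility score for one system.

    `system_properties` lists the indicator names the system is judged to implement;
    `credence_in_cf` is the prior placed in computational functionalism.
    """
    total = sum(i.theory_weight for i in INDICATORS)
    satisfied = sum(i.theory_weight for i in INDICATORS if i.name in system_properties)
    return credence_in_cf * satisfied / total

# Example: a system judged to implement workspace broadcast and predictive processing.
score = assess({
    "limited-capacity workspace broadcasting to specialist modules",
    "hierarchical prediction-error minimisation",
})
print(f"illustrative CAI credibility score: {score:.2f}")
```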

According to the analysis performed, current AI systems do not exhibit CAI. Although some aspects of consciousness may be simulated or modeled in these systems, the complexity and depth of consciousness as understood in the human context remain out of reach for current AI technologies. On the other hand, the findings suggest that there are no apparent technical obstacles to building advanced AI systems that might satisfy these indicators and thus potentially exhibit attributes of CAI. This conclusion opens up possibilities for further development and innovation in AI, in which more advanced models of consciousness grounded in neuroscience and cognitive science could be integrated.

Artistic Research

In current academic discourse, artistic research often appears as a space filled with uncertainty, which is paradoxically seen as a source of its innovative potential. This ambiguity and lack of a fixed identity are not viewed as weaknesses but as key characteristics that allow artistic research to explore new territories and expand the boundaries of knowledge. Historically, as highlighted by Loveless (24), artistic research has undergone a development that includes a transition from monodisciplinary to interdisciplinary approaches, thereby broadening its methodological and theoretical base. This evolution inevitably led to the integration of artistic research with conceptual, social, and activist currents in contemporary art. Loveless identifies artistic research as a dynamic field that is the result of historical changes and is characterized by its ability to adapt and respond to various disciplinary and interdisciplinary influences. To accept that generating research in art represents a logical development of its interdisciplinary effect is to recognize that artistic research is fundamentally dynamic and adaptable, capable not only of responding to current social and political challenges but also of actively contributing to the creation of new knowledge and ways of understanding the world.

Continuing this discourse, and reflecting on the current state of artistic research (AR) and creation in the field of AI Art, a certain kind of operability with consciousness can be identified. For the purposes of this identification I understand consciousness very broadly, from its natural forms of human or animal consciousness, represented by various theories, to models of computational and machine consciousness. It seems significant for many AI Art strategies that artistic research uses the transposition of scientific methods into artistic creation in order to create a work as an AI system that possesses elements of consciousness in real or symbolic form. This strategy is governed by two operational logics. The first, the human operational logic, constructs the way in which consciousness should be involved in creating aesthetic operations. The second, the operational logic of CAI, is the process of aesthetic operations itself. Such a process can be described as research with an equivalent transpositional potential of scientific and artistic approaches, which brings or verifies knowledge about natural or artificial consciousness.

In connection with the above, I will attempt to approximate Mario Klingemann's observational principles regarding elements of partnership between humans and possible conscious AI systems, using the project of a robotic dog named A.I.C.C.A. (Artificially Intelligent Critical Canine) at the Colección SOLO gallery in Madrid, whose purpose is to provide "sarcastic and disrespectful" critiques of art (25–27). The dog, reminiscent of a futuristic terrier, moves on a wheeled platform and issues its comments on art on a printed receipt, as is customary in an ordinary store purchase, which more than symbolically emerges from its backside. The question is whether this project can be perceived simultaneously as artistic research, scientific research, or a social joke when the differences between these positions are "infra-thin", and hence whether the manifest ambiguity is a sign of the need to use transpositions as a tool of artistic research. Dario Gamboni, in Potential Images: Ambiguity and Indeterminacy in Modern Art (28), provides examples from art history suggesting that artists have always found multistable or ambiguous images relevant. Following Mitchell (29), a "multistable image" can be understood as an image that contains two different representations and thus depicts ambiguity. The discourse on multistable images opens up the significance of transpositions for artistic research and, with it, the meaning of "infra-thin", originally Marcel Duchamp's neologism for the finest nuance of difference. I see Klingemann similarly to Duchamp, who sparked the consideration of artistic research as an activity that may not differ in form from scientific approaches. Thierry de Duve (30) offers an analogy in the interpretative ambiguity of Duchamp's Fountain: the separation of the "infra-thin" works fully when distinguishing the same from the same, when it comes to an indifferent difference or a different identity. In principle, one reading should not be preferred over another; rather, the effort should be to articulate the operational logic through which we understand the complexity of the identity structures and difference structures accepted in the transposition. The same applies when artistic aesthetics are created by scientific research processes, or vice versa. In separating the operational logic of A.I.C.C.A., we encounter two key aspects for forming the conditions of the subject model, since artistic research here is identified as led by two autonomous entities within one artistic strategy: the first a human, and the second a partially or potentially conscious yet autonomous AI system.

Computational Neuroaesthetics

The model definition that frames this perspective on the aesthetics of artificial intelligence art enables a connection between humans and machines and the mutual transfer of aesthetic experiences between them, which forms a necessary prerequisite for understanding and communication within a human–CAI partnership in art. For the purpose of establishing what consciousness means, both the human and the CAI positions rest on the scientific-theoretical foundation of computational neuroaesthetics, which deals with the neurobiological processes of aesthetic perception and artistic creation. The foundation for both positions is formed by theoretical constructions of consciousness relying on the model of natural human consciousness in the context of art reception and production. This section presents these foundations with regard to the current state of research and applicable methods.

The recent history of neuroaesthetics can be described through its most significant conceptual models. First, there are the positions of Semir Zeki, a pioneer of neuroaesthetics, whose models are built on modularization (31, 32). Ramachandran and Hirstein combined an evolutionary approach with neurophysiological evidence and proposed a model to explain the aesthetic experience of visual art (33). Leder's model of visual aesthetic experience describes the processing of information underlying aesthetic appreciation: contextual input is processed through five stages of perceptual analysis, implicit memory integration, explicit classification, cognitive mastering, and evaluation (34).
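As an illustration of this staged architecture, the sketch below renders Leder's sequence as a simple processing pipeline. The stage functions and the toy values they attach are hypothetical placeholders for illustration, not the model's actual computations.

```python
# Illustrative five-stage aesthetic episode; all stage outputs are toy placeholders.
def perceptual_analysis(state):
    state["features"] = {"contrast": 0.7, "symmetry": 0.4}   # toy perceptual features
    return state

def implicit_memory_integration(state):
    state["familiarity"] = 0.5                               # toy familiarity value
    return state

def explicit_classification(state):
    state["style"] = "generative abstraction"                # toy style/content label
    return state

def cognitive_mastering(state):
    state["interpretation"] = "machine-made composition"     # toy interpretation
    return state

def evaluation(state):
    state["aesthetic_judgment"] = 0.6                        # toy judgment output
    state["aesthetic_emotion"] = "interest"                  # toy emotional output
    return state

STAGES = [perceptual_analysis, implicit_memory_integration,
          explicit_classification, cognitive_mastering, evaluation]

def aesthetic_episode(artwork, context):
    state = {"input": artwork, "context": context}
    for stage in STAGES:          # sequential processing, as in the five-stage description
        state = stage(state)
    return state

print(aesthetic_episode("image.png", "gallery"))
```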

Chatterjee's linear processing model describes visual aesthetic experience through the principle of the aesthetic triad, consisting of the sensorimotor system, the emotional-evaluative system, and the knowledge-meaning system (35). This model is useful for connecting various aspects of the cognitive processing phases of the aesthetic experience with specific brain structures. In his 2011 article "Neuroaesthetics: A Coming of Age Story" (36), Chatterjee indirectly defines the research framework of computational neuroaesthetics, contributing to its definition as a subfield of neuroaesthetics within cognitive neuroscience that explores the foundations of the aesthetic experience (37). Given this definitional framework, it is important to mention Chatterjee's essay "Art in an Age of Artificial Intelligence" (38), which embraces machine creativity as something inseparable from the art world and as a partner for some artists. It also raises the assumption that predictions about the development of aesthetically sensitive machines will challenge our views on beauty and creativity and possibly our understanding of the essence of art. Most theories of creative processing emphasize the central role of two types of thinking: divergent and convergent (39). Divergent thinking involves operating with many possibilities; this phase can also be considered generative or imaginative. Chatterjee, for his part, sees art as deliberately ambiguous and points out that creative solutions are generated from many possibilities, which does not necessarily mean they are the correct or best ones. The overall view suggests an appropriation of aesthetics, or of aesthetic experiences, that human consciousness does not recognize from an evolutionary perspective but can accept in the context of a so-called CAI aesthetics.
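The interplay of the two modes of thinking can be illustrated with a minimal generate-and-select loop: a divergent phase produces many loosely constrained candidates, and a convergent phase retains the few judged most promising by an aesthetic evaluator. The generator, the scoring function, and all names below are assumptions introduced for illustration and do not correspond to any specific AI Art system.

```python
# Minimal divergent/convergent creative loop; generator and scorer are placeholders.
import random

def diverge(seed: str, n: int = 32) -> list:
    """Divergent phase: produce many loosely constrained variants of a motif."""
    transformations = ["invert", "fragment", "recombine", "blur", "exaggerate"]
    return [f"{seed}:{random.choice(transformations)}:{i}" for i in range(n)]

def aesthetic_score(candidate: str) -> float:
    """Convergent criterion: stand-in for a learned or hand-crafted evaluative model."""
    return random.random()   # assumption: replace with a real aesthetic evaluator

def converge(candidates: list, k: int = 3) -> list:
    """Convergent phase: keep the k candidates judged most promising."""
    return sorted(candidates, key=aesthetic_score, reverse=True)[:k]

shortlist = converge(diverge("wheeled terrier motif"))
print(shortlist)
```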

Apart from traditional methods of studying the neurobiological processes involved in aesthetic perception, the partnership of artificial intelligence and simulation neuroscience is coming to the fore. This scientific area has deep historical roots and, besides its natural-science approach, brings with it various philosophical discourses. Simulation neuroscience can be seen as a multidisciplinary field at the intersection of neuroscience, computer science, and mathematics (40). Extensive brain simulations, such as the Human Brain Project (HBP), represent an ambitious step toward replicating the complex structure and dynamics of the human brain, but they are an undertaking spanning decades (41).

For the purposes of the research project, it is important to mention the fundamental study "Semantic reconstruction of continuous language from non-invasive brain recordings" (42), which demonstrates the feasibility of reading human thoughts with the help of a foundation language model and functional magnetic resonance imaging. Such reading is performed by comparing an analysis of natural language, the speech of the studied subject, with the subject's hemodynamic response in the relevant cortical areas. These records are then paired to model, through GPT-1, the words most likely spoken or unspoken by the subject. This method allows a direct reading of thoughts with a high degree of probability, and the potential use of mobile near-infrared spectroscopy is a further advantage. Although this approach is not itself a significant example of simulation neuroscience, it potentially offers broad possibilities for its use.
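A highly simplified sketch of the decoding logic reported in (42) follows: a language model proposes candidate word continuations, an encoding model predicts the brain response each candidate would evoke, and a beam search keeps the candidates whose predicted responses best match the recordings. Every function here is a placeholder standing in for the published components; this is not the authors' implementation.

```python
# Simplified beam-search decoding of word sequences from brain responses (illustrative).
import numpy as np

def propose_continuations(prefix: list) -> list:
    """Stand-in for a language model (e.g. GPT-1) proposing next words."""
    return ["art", "dog", "gallery", "consciousness"]          # assumed toy vocabulary

def predict_response(words: list) -> np.ndarray:
    """Stand-in for the encoding model: semantic features -> predicted BOLD pattern."""
    rng = np.random.default_rng(abs(hash(" ".join(words))) % (2**32))
    return rng.normal(size=64)                                  # assumed voxel count

def likelihood(predicted: np.ndarray, recorded: np.ndarray) -> float:
    """Score a candidate by similarity of predicted and recorded responses."""
    return float(np.dot(predicted, recorded) /
                 (np.linalg.norm(predicted) * np.linalg.norm(recorded)))

def decode(recorded: np.ndarray, steps: int = 5, beam_width: int = 3) -> list:
    beams = [([], 0.0)]                       # (candidate word sequence, score)
    for _ in range(steps):
        expanded = []
        for words, _ in beams:
            for w in propose_continuations(words):
                candidate = words + [w]
                expanded.append((candidate,
                                 likelihood(predict_response(candidate), recorded)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

print(decode(np.random.default_rng(0).normal(size=64)))
```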

Research Objectives

The aim of the research is to analyze and provide a comprehensive response to the key research question: How can the application of speculative modeling, utilizing case studies and the methodology of speculatively oriented artistic research, predict and shape the future direction of partnerships between human beings and conscious artificial intelligence systems in the aesthetics and art of artificial intelligence? The research focuses on exploring and developing the theoretical and practical aspects that could influence and define future relationships and interactions between human subjects and advanced AI systems in the context of art creation and perception. The overall goal is not only to provide predictions and possible scenarios for future development in AI Art but also to ensure the applicability of its conclusions and methodologies within the broader research complex of the project Aesthetics of Artificial Intelligence Art.

Keywords: Speculative Modeling, Artificial Intelligence Art, Advanced Consciousness, Artistic Research, Ethical Dimension, AI Systems, Computational Aesthetics, Multidisciplinary Field, Theoretical Foundations, Consciousness in AI, Computational Functionalism, Neuroscientific Theories, Artistic Creation, Aesthetic Perception, Predictive Modeling, Human-AI Partnership, Neuroaesthetics, Machine Creativity, Simulation Neuroscience, Aesthetics of AI Art.

REFERENCES:

1. MANOVICH, Lev and ARIELLI, Emanuele. Artificial Aesthetics: A Critical Guide to AI, Media and Design. 2019. http://manovich.net.

2. CHUN, Wendy Hui Kyong. Programmed visions: software and memory. Cambridge (Mass.) : MIT Press, 2013. Software studies. ISBN 9780262518512.

3. BUTLIN, Patrick, LONG, Robert, ELMOZNINO, Eric, BENGIO, Yoshua, BIRCH, Jonathan, CONSTANT, Axel, DEANE, George, FLEMING, Stephen M., FRITH, Chris, JI, Xu, KANAI, Ryota, KLEIN, Colin, LINDSAY, Grace, MICHEL, Matthias, MUDRIK, Liad, PETERS, Megan A. K., SCHWITZGEBEL, Eric, SIMON, Jonathan and VANRULLEN, Rufin. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. 16 August 2023.

4. LAMME, Victor A.F. Towards a true neural stance on consciousness. Trends in Cognitive Sciences. November 2006. Vol. 10, no. 11, p. 494–501. DOI 10.1016/j.tics.2006.09.001.

5. LAMME, Victor A. F. How neuroscience will change our view on consciousness. Cognitive Neuroscience. 18 August 2010. Vol. 1, no. 3, p. 204–220. DOI 10.1080/17588921003731586.

6. LAMME, Victor A. F. Visual Functions Generating Conscious Seeing. Frontiers in Psychology. 14 February 2020. Vol. 11. DOI 10.3389/fpsyg.2020.00083.

7. BAARS, Bernard J. A Cognitive Theory of Consciousness. Cambridge University Press, 1988.

8. DEHAENE, Stanislas, KERSZBERG, Michel and CHANGEUX, Jean-Pierre. A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences. 1998. Vol. 95, no. 24, p. 14529–14534.

9. DEHAENE, Stanislas and CHANGEUX, Jean-Pierre. Experimental and Theoretical Approaches to Conscious Processing. Neuron. April 2011. Vol. 70, no. 2, p. 200–227. DOI 10.1016/j.neuron.2011.03.018.

10. DEHAENE, Stanislas, KERSZBERG, Michel and CHANGEUX, Jean-pierre. A Neuronal Model of a Global Workspace in Effortful Cognitive Tasks. Proceedings of the National Academy of Sciences of the United States of America. 1998. Vol. 95, no. 24, p. 14529–14534.

11. MASHOUR, George A., ROELFSEMA, Pieter, CHANGEUX, Jean-Pierre and DEHAENE, Stanislas. Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron. March 2020. Vol. 105, no. 5, p. 776–798. DOI 10.1016/j.neuron.2020.01.026.

12. ROSENTHAL, David and WEISBERG, Josh. Higher-order theories of consciousness. Scholarpedia. 2008. Vol. 3, no. 5, p. 4407. DOI 10.4249/scholarpedia.4407.

13. SETH, Anil. Being you. London, England : Dutton, 2021. ISBN 9781524742874.

14. SETH, Anil K. and BAYNE, Tim. Theories of consciousness. Nature Reviews Neuroscience. 3 July 2022. Vol. 23, no. 7, p. 439–452. DOI 10.1038/s41583-022-00587-4.

15. SETH, Anil K and HOHWY, Jakob. Predictive processing as an empirical theory for consciousness science. Cognitive Neuroscience. 3 April 2021. Vol. 12, no. 2, p. 89–90. DOI 10.1080/17588928.2020.1838467.

16. DEANE, George. Consciousness in active inference: Deep self-models, other minds, and the challenge of psychedelic-induced ego-dissolution. Neuroscience of Consciousness. 1 September 2021. Vol. 2021, no. 2. DOI 10.1093/nc/niab024.

17. HOHWY, Jakob. Conscious Self-Evidencing. Review of Philosophy and Psychology. 5 December 2022. Vol. 13, no. 4, p. 809–828. DOI 10.1007/s13164-021-00578-x.

18. NAVE, Kathryn, DEANE, George, MILLER, Mark and CLARK, Andy. Expecting some action: Predictive Processing and the construction of conscious experience. Review of Philosophy and Psychology. 10 December 2022. Vol. 13, no. 4, p. 1019–1037. DOI 10.1007/s13164-022-00644-y.

19. GRAZIANO, Michael S. A. and WEBB, Taylor W. The attention schema theory: a mechanistic account of subjective awareness. Frontiers in Psychology. 23 April 2015. Vol. 06. DOI 10.3389/fpsyg.2015.00500.

20. CHALMERS, David. The Meta-Problem of Consciousness. Journal of Consciousness Studies. 2018. Vol. 25, no. 9–10.

21. OIZUMI, Masafumi, ALBANTAKIS, Larissa and TONONI, Giulio. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Computational Biology. 8 May 2014. Vol. 10, no. 5, p. e1003588. DOI 10.1371/journal.pcbi.1003588.

22. TONONI, Giulio and KOCH, Christof. Consciousness: here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences. 19 May 2015. Vol. 370, no. 1668, p. 20140167. DOI 10.1098/rstb.2014.0167.

23. ALBANTAKIS, Larissa and TONONI, Giulio. What we are is more than what we do. 21 January 2021.

24. LOVELESS, Natalie. How to make art at the end of the world. Durham, NC : Duke University Press, 2019. ISBN 9781478004028.

25. KLINGEMANN, Mario. A.I.C.C.A. Online. 2023. Available from: https://aicca.me

26. DE LA CRUZ, Sofia. Meet A.I.C.C.A., the World's First AI Dog That Poops Art Critiques. Online. Available from: https://hypebae.com/2023/6/aicca-robotic-art-critic-dog-mario-klingemann-coleccion-solo-interview

27. ESTILER, Keith. Mario Klingemann Creates A.I.C.C.A Robotic Pooch That Poops Out Receipts of Art Critiques. Online. Available from: https://hypebeast.com/2023/6/aicca-robotic-dog-mario-klingemann-coleccion-solo

28. GAMBONI, Dario. Potential images. London, England : Reaktion Books, 2004. ISBN 9781861891495.

29. MITCHELL, W. J. T. Picture theory. Pbk. ed. Chicago : The University of Chicago Press, 1995. p. 45–57. ISBN 0226532321.

30. DE DUVE, Thierry and POLAN, Dana. Pictorial nominalism. Minneapolis, MN : University of Minnesota Press, 2005. ISBN 9780816648597.

31. ZEKI, Semir. Inner vision. London, England : Oxford University Press, 1999. ISBN 9780198505198.

32. ZEKI, Semir. Splendors and Miseries of the Brain: Love, Creativity, and the Quest for Human Happiness. 1st ed. London : Wiley-Blackwell, 2008. ISBN 978-1405185578.

33. RAMACHANDRAN, Vilayanur S and HIRSTEIN, William. The science of art: A neurological theory of aesthetic experience. Journal of Consciousness Studies. 1999. Vol. 6, no. 6–7, p. 15–51.

34. LEDER, Helmut and NADAL, Marcos. Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode – Developments and challenges in empirical aesthetics. British Journal of Psychology. 3 November 2014. Vol. 105, no. 4, p. 443–464. DOI 10.1111/bjop.12084.

35. CHATTERJEE, Anjan and VARTANIAN, Oshin. Neuroaesthetics. Trends in Cognitive Sciences. July 2014. Vol. 18, no. 7, p. 370–375. DOI 10.1016/j.tics.2014.03.003.

36. CHATTERJEE, Anjan. Neuroaesthetics: A Coming of Age Story. Journal of Cognitive Neuroscience. 1 January 2011. Vol. 23, no. 1, p. 53–62. DOI 10.1162/jocn.2010.21457.

37. PEARCE, Marcus T., ZAIDEL, Dahlia W., VARTANIAN, Oshin, SKOV, Martin, LEDER, Helmut, CHATTERJEE, Anjan and NADAL, Marcos. Neuroaesthetics. Perspectives on Psychological Science. 17 March 2016. Vol. 11, no. 2, p. 265–279. DOI 10.1177/1745691615621274.

38. CHATTERJEE, Anjan. Art in an age of artificial intelligence. Frontiers in Psychology. 30 November 2022. Vol. 13. DOI 10.3389/fpsyg.2022.1024449.

39. CORTES, Robert A, WEINBERGER, Adam B, DAKER, Richard J and GREEN, Adam E. Re-examining prominent measures of divergent and convergent creativity. Current Opinion in Behavioral Sciences. June 2019. Vol. 27, p. 90–93. DOI 10.1016/j.cobeha.2018.09.017.

40. FAN, Xue and MARKRAM, Henry. A Brief History of Simulation Neuroscience. Frontiers in Neuroinformatics. 7 May 2019. Vol. 13. DOI 10.3389/fninf.2019.00032.

41. AICARDI, Christine and MAHFOUD, Tara. Formal and Informal Infrastructures of Collaboration in the Human Brain Project. Science, Technology, & Human Values. 25 September 2022. P. 016224392211238. DOI 10.1177/01622439221123835.

42. TANG, Jerry, LEBEL, Amanda, JAIN, Shailee and HUTH, Alexander G. Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience. 1 May 2023. Vol. 26, no. 5, p. 858–866. DOI 10.1038/s41593-023-01304-9.