Artistic-Research Strategies in AIArt and Consciousness
Tomas Marusiak, 06/2024
The paper "Artistic-Research Strategies in AIArt and Consciousness" reflects on the proposition that the operability of consciousness poses a fundamental question for AIArt artistic-research strategies. This reflection is supported both by the possibility of realizing consciousness in artificial intelligence systems, as discussed in the 2023 study "Consciousness in Artificial Intelligence," and by a proposal that interprets artistic-research strategies as processes of transposing scientific procedures into the sphere of art, i.e., as research operations with manifestations of computational consciousness. Explanatory examples are provided through analyses of Lauren McCarthy's projects, which substantially support the presented starting points.
Introduction
The aim of the study is to support the consideration that operability with consciousness represents a fundamental question for AIArt artistic-research strategies. The first reason is the fascination with the idea of non-biological consciousness in art and in society, a phenomenon to which artificial intelligence comes closest. The second is that a certain form of non-phenomenological consciousness has the potential to be realized in artificial intelligence systems.
The first part of the study presents a concept of consciousness oriented towards its applicability in artificial intelligence systems. Its foundations are presented step by step, together with a proposal for how to view it from the perspective of applying artistic-research strategies. This indirectly opens a discourse on thought experiments that, through remediation, could elevate such a system to a higher qualitative level. The second part attempts to define the artistic-research strategies of AIArt. To support these definitions, their identification is proposed through the analysis of the projects Voice in My Head (2023) and Unlearning Language (2021) by Lauren McCarthy, which are presented in the third part.
The central research problem, however, is the identification of operability with consciousness, or with its concept, in AIArt artistic-research strategies, in which a specific approach to artistic research can be indicated: the transposition of scientific procedures into the realm of art. The research question therefore aims mainly at examining the quality of such operations.
It is also necessary to emphasize at the outset that the study approaches the relatively complex content with a special emphasis on presenting it as comprehensibly as possible across the interdisciplinary spectrum.
Consciousness in Artificial Intelligence Systems
Consciousness is one of the most complex and mysterious phenomena that humanity explores. Its nature and essence evoke various expectations and interpretations, not only as a scientific problem but also in the popular imagination. Consciousness is often associated with free will, intelligence, and the ability to feel human emotions such as empathy, love, guilt, anger, and jealousy; it is something more than a merely mechanical process. Despite intense research and discussion, there is still no accepted scientific consensus that provides a stable framework for understanding consciousness. Disciplines such as neuroscience, psychology, philosophy, and cognitive science offer different perspectives and theories. Despite advances in research, we still face questions that go beyond the scope of current scientific capabilities. Consciousness continues to appear as a multidimensional phenomenon that raises not only scientific, philosophical, and ethical questions but also significant political agendas concerning its computational form.
Part of this study is an explication of the issue of consciousness, focused on examining the possibility that consciousness in artificial intelligence systems may be realizable in the near future. This concept assumes that we will soon be able to develop artificial intelligence systems that exhibit manifestations of consciousness similar to those of humans. Ongoing research projects referenced in this study concentrate on enhancing the capabilities of artificial intelligence by developing systems with a higher probability of achieving conscious behavior [1]. The aim of this effort is to inspire and expand research in cognitive science focused on conscious information processing, thereby enabling the creation of advanced artificial intelligence systems with human-like capabilities, particularly highly developed reasoning and cognitive skills. It is also necessary to consider arguments that deny or relativize the possibility of realizing consciousness in artificial intelligence systems [2][3]. These arguments hold that consciousness requires an unspecified non-computational property of living organisms.
Computational Functionalism and Neurobiological Theories of Consciousness in the Concept of Constructing Consciousness in Artificial Intelligence Systems
Computational functionalism, as a philosophical concept, provides a framework for understanding the computational representation of consciousness, in which computational processes in artificial intelligence can model or simulate conscious states. This version of functionalism holds that all mental states are computational states of the brain. Such an approach considers the functional-computational organization of the brain, associated with awareness, to be sufficient: this organization is what enables the system to enter into causal relationships and interactions with its environment. The concept operates with substrate independence, meaning that mental states do not depend on a specific physical material but on patterns of information and the operations performed over them. This generalization is sometimes illustrated by a thought experiment in which a person's brain is gradually replaced, piece by piece, with artificial components that perform the same functions as the original brain parts. Under the strict application of substrate independence, such a replacement of parts of the brain, provided that the system functions identically, should not affect the mind and consciousness of the person undergoing the process. This notion is, however, refined by stating that the substrate is unimportant for consciousness only insofar as it does not affect the implementation of algorithms in the system [4].
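A toy sketch, purely illustrative and not drawn from the cited literature, can make the gradual-replacement thought experiment concrete: so long as each artificial component computes the same function as the biological part it replaces, the system's overall behavior is preserved at every step of the replacement.

```python
def run_system(components, x):
    # The system's behavior is the composition of its components' functions
    for f in components:
        x = f(x)
    return x

# "Biological" parts of the system
original = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

# "Artificial" replacements computing the same functions on a different substrate
replacements = [lambda x: 1 + x, lambda x: 2 * x, lambda x: -3 + x]

hybrid = list(original)
for i, part in enumerate(replacements):
    hybrid[i] = part  # replace one part at a time, piece by piece
    # Functional organization is unchanged, so behavior is identical
    assert run_system(hybrid, 10) == run_system(original, 10)

print(run_system(hybrid, 10))  # 19
```

Under strict substrate independence, nothing about the system's mind could change during this process; the refinement cited above adds that this holds only if the replacement really does leave the implemented algorithms untouched.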
Neuroscientific foundations generally rely on the analysis of neuronal data, the conceptualization of computational and psychological models, and philosophical analysis, in order to arrive at a unifying stance on what is known and knowable. Metaphysical theories of consciousness seek to answer a deep and complex question: "What is the relationship between consciousness and the material world?" These theories offer different perspectives on how the two phenomena are related, from the claim that they are inseparable to the assertion that they are fundamentally different. The most representative positions in the metaphysics of consciousness, according to Chalmers, are property dualism, panpsychism, materialism, and illusionism. All of these positions are important for the purposes of this study, despite their expected opposition, as they offer views of different quality on the issue of consciousness [5][6].
For the purposes of explaining the study "Consciousness in Artificial Intelligence" (2023), the following theories were selected: Global Workspace Theory, Recurrent Processing Theory, and Higher-Order Theory. These theories are compatible with the theoretical foundations of the scientific study in question, mainly because of their grounding in computational functionalism [7]. The Integrated Information Theory (IIT), currently among the most influential theories of consciousness, as formulated by Oizumi, Albantakis, and Tononi, was included neither in this selection nor in the study itself [8]. The reason for its exclusion was its incompatibility with the assumptions of computational functionalism, which is the basis for examining consciousness in artificial intelligence systems within the study. Indeed, some proponents of IIT argue that digital computers are unlikely to be conscious regardless of the programs they run [9]. This view emphasizes the difference between digital computers and systems that could exhibit the properties of consciousness according to IIT.
The Global Workspace Theory is a concept in cognitive science that attempts to explain consciousness [10]. It posits the existence of a "global workspace" in the brain. This space integrates information from various areas of the brain, allowing us to be aware of our thoughts, feelings, and perceptions. Most information processing in the brain occurs in specialized modules. For something to become a conscious experience, it must be integrated into the global workspace. Because the workspace has limited capacity and can process only a certain amount of information at a time, we can focus on only a few things at once, which links the global workspace with attention: information that captures our attention is more likely to become conscious.
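The theory's core mechanism can be rendered as a minimal sketch. This is illustrative only, not a cognitive model; the module names and salience values are invented. Specialized modules emit competing signals, a limited-capacity workspace admits only the most salient, and its contents are then broadcast globally.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str      # which specialized module produced it
    content: str
    salience: float  # how strongly it competes for attention

@dataclass
class GlobalWorkspace:
    capacity: int = 3                         # limited capacity: only a few items
    contents: list = field(default_factory=list)

    def compete(self, signals):
        # Only the most salient signals enter the workspace ("become conscious")
        winners = sorted(signals, key=lambda s: s.salience, reverse=True)
        self.contents = winners[: self.capacity]
        return self.contents

    def broadcast(self):
        # Workspace contents are made globally available to all modules
        return [(s.source, s.content) for s in self.contents]

signals = [
    Signal("vision", "red light", 0.9),
    Signal("audition", "car horn", 0.8),
    Signal("touch", "itch", 0.2),
    Signal("memory", "appointment at 3pm", 0.5),
]
ws = GlobalWorkspace(capacity=2)
ws.compete(signals)
print(ws.broadcast())  # [('vision', 'red light'), ('audition', 'car horn')]
```

The low-salience signals are still processed by their modules; in the theory's terms, they simply never become globally available, and so never become conscious.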
The Recurrent Processing Theory emphasizes the importance of dynamic, recurring interactions in the brain for effective information processing and perception. The theory is based on the premise that recurrent processing occurs in areas where sensory systems are highly interconnected through feedforward and feedback loops [11]. According to the theory, these loops are necessary and sufficient for consciousness. For example, feedforward connections from the primary visual area of the human brain transmit information to higher-level processing areas; this initial registration of visual information involves sequential processing. During this transmission, feedback connections linking the visual areas are activated [12]. Activation of such connections leads to dynamic activity in the visual system, enabling the integration of information from various sensory modalities and levels of processing.
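A minimal numerical sketch, again illustrative only (the weights and the single-unit "areas" are invented), shows the difference between a pure feedforward sweep and processing in which feedback from the higher area repeatedly modulates the primary area:

```python
import math

# Toy two-area visual system: a primary area and a higher-level area
W_FF = 0.8   # feedforward weight: primary -> higher
W_FB = 0.5   # feedback weight: higher -> primary

def feedforward_sweep(stimulus):
    # Initial, purely sequential registration of the input
    return math.tanh(W_FF * stimulus)

def recurrent_processing(stimulus, steps=20):
    higher = feedforward_sweep(stimulus)
    primary = stimulus
    for _ in range(steps):
        # Feedback from the higher area modulates primary activity...
        primary = math.tanh(stimulus + W_FB * higher)
        # ...and the updated primary activity is swept forward again
        higher = math.tanh(W_FF * primary)
    return primary, higher

p, h = recurrent_processing(1.0)
print(round(p, 3), round(h, 3))
```

The loop settles into a joint state that differs from the feedforward result alone; on the theory's reading, it is this recurrent settling, not the initial sweep, that matters for conscious perception.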
The Higher-Order Theory is a cognitive theory of consciousness according to which consciousness is the ability to have thoughts about one's own mental states [13]. In other words, we are self-aware because we can think about our thoughts, feelings, and perceptions. According to this theory, a person is in a conscious visual state of perceiving an object when they represent themselves as being in that visual state. Some extensions of the theory allow for the possibility that a person can be in such a conscious state even in the absence of a visual system [14].
Feasibility of Consciousness in Artificial Intelligence Systems
The study "Consciousness in Artificial Intelligence" (2023) presents a methodology for exploring artificial intelligence consciousness based on three key principles. The first principle is the acceptance of computational functionalism as a foundational hypothesis: consciousness in artificial intelligence systems is theoretically possible, and by examining the functionality of these systems it is possible to determine whether they are capable of manifesting consciousness. Computational functionalism provides a framework for understanding how computational processes in artificial intelligence systems can model or simulate conscious states. The second principle utilizes the neuroscientific theories of consciousness presented in part above. These theories provide substantial empirical support for evaluating consciousness in artificial intelligence systems; they focus on identifying the neuronal and functional criteria necessary for the manifestation of human consciousness and on applying these criteria to artificial intelligence systems. This approach allows researchers to use insights from neuroscience to assess the potential and characteristics of consciousness in artificial intelligence systems. The third principle involves evaluating whether artificial intelligence systems perform functions that scientific theories consider indicative of consciousness.
The study takes a realistic approach to the concept of consciousness in artificial intelligence systems by thoroughly evaluating existing artificial intelligence systems against neuroscientific theories of consciousness: Recurrent Processing Theory [15], Global Workspace Theory [16][17], Higher-Order Theory [18], Predictive Processing [19], and the Attention Schema Theory of Consciousness [20]. In the context of this research, specific indicators for assessing consciousness, known as "indicator properties of consciousness," are derived from these theories. These indicators are explained in computational terms, allowing for a more precise assessment and analysis of artificial intelligence systems.
According to the analysis conducted, current artificial intelligence systems do not exhibit consciousness in its entirety. Although some aspects of consciousness can be simulated or modeled in these systems, the complexity and depth of actual consciousness, as understood in the context of human consciousness, remain beyond the reach of current artificial intelligence technologies. On the other hand, the findings suggest that there are no apparent technical barriers to building advanced artificial intelligence systems that could satisfy these indicators and therefore potentially exhibit attributes of consciousness. This conclusion opens up possibilities for further development and innovation in the field of artificial intelligence, integrating more advanced models of consciousness based on neuroscience and cognitive science.
Thought Experiment as Operability with Consciousness in Artificial Intelligence Systems in Relation to Artistic-Research Strategies
Based on the current scientific knowledge presented in the study "Consciousness in Artificial Intelligence" (2023), we identify several key properties that can be attributed to consciousness in artificial intelligence systems: algorithmic recurrence, the ability to generate organized and integrated perceptual representations, global information availability, a selective attention mechanism, predictive models representing the state of attention, generative perception, and metacognitive monitoring. These properties are derived from various scientific theories of consciousness, and artificial intelligence systems exhibiting several of them have a higher probability of achieving consciousness. Importantly, these states or processes do not include subjective experiences or feelings, which are an integral part of conscious experience as generally experienced by humans, although the ability to simulate them is not excluded.
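The logic of such indicator-based assessment can be sketched as a simple rubric. The property names below paraphrase the list above; the scoring scheme and the example profile are assumptions for illustration, not the method of the cited study:

```python
# Indicator properties paraphrased from the text above; the uniform
# scoring scheme is an illustrative assumption, not the study's method.
INDICATORS = [
    "algorithmic_recurrence",
    "integrated_perceptual_representations",
    "global_information_availability",
    "selective_attention_mechanism",
    "predictive_model_of_attention",
    "generative_perception",
    "metacognitive_monitoring",
]

def assess(system):
    """Return the fraction of indicator properties a system exhibits,
    plus the list of those found present."""
    present = [ind for ind in INDICATORS if system.get(ind, False)]
    return len(present) / len(INDICATORS), present

# A hypothetical system profile
profile = {
    "algorithmic_recurrence": True,
    "global_information_availability": True,
    "selective_attention_mechanism": True,
}
score, present = assess(profile)
print(f"{score:.2f}", present)
```

A higher score would mark a system as a stronger candidate, without implying that any score amounts to subjective experience, which is exactly the gap the paragraph above notes.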
Despite high expectations regarding the implementation of consciousness in artificial intelligence systems, it is possible to lean towards the view that the concept of consciousness in these systems describes rather a "zombie world." In Chalmers' theory, a zombie world is physically indistinguishable from our world but completely devoid of subjective experience. Chalmers allows for the idea that such an entire zombie world is imaginable; since it is conceivable, he argues, it is metaphysically possible. This means that the existence of such a world is possible within the framework of philosophical logic, which supports his argument that consciousness is not reducible to physical processes. The zombie hypothesis can also be understood epistemically, as a problem of causal explanation rather than of logical or metaphysical possibility. This reading rests on the explanatory gap: so far no one has provided a convincing causal explanation of how and why we are conscious, and the gap is equally present in our inability to explain how and why we are not zombies [21][22].
We assume that if the essential core of AIArt artistic-research strategies does not deviate from the trajectory of adopting the latest technological innovations related to the discourses on consciousness and machines, the realization of consciousness in artificial intelligence systems will also be reflected in future artistic production. However, we must accept the fact that there is no scientific consensus on what consciousness is, and even ideas about what consciousness could be are subject to speculation. Such a speculative approach may be an appropriate candidate for considering how to imagine AIArt artistic-research strategies operating with consciousness in artificial intelligence systems.
Let us imagine the construction of a thought experiment based on the idea that if a process in an existing AIArt project can be considered an expression of consciousness in the broadest sense of the word, it may be a suitable candidate for remediation in the construction of consciousness in artificial intelligence systems. The design of such thought experiments should adopt positions of artistic-research strategies that participate in creating new knowledge about the boundaries of consciousness in artificial intelligence systems and that have at least a sufficiently conceivable quality of remediation with respect to computational expressions of consciousness. This delineation supports the argument that although consciousness in artificial intelligence systems, as framed in the study, is a zombie, it is a zombie only in that it probably will not possess phenomenal consciousness; it will certainly possess conceivable and potentially realizable expressions of consciousness. Just as many artistic-research strategies in AIArt require a certain type of research potential, so will the construction of thought experiments and speculative models.
AIArt Artistic-Research Strategies
An Attempt to Define Artistic-Research Strategies in AIArt
The attempt to define artistic-research strategies in AIArt relies more on a hypothesis than a theory, suggesting the construction of a model that sees them as a process of transposing various kinds of knowledge and scientific procedures into the creative process of artists. Despite the possible temptation to frame these processes as artistic research, this was abandoned due to its complex definability. The definitional risk is confirmed by current academic discussions, which often present it as an area marked by definitional uncertainty [23].
The acceptability of the proposed model of artistic-research strategies is also defended by the discourse on multistable positions, which clarifies the importance of transpositions for artistic research in the context of the "infra-thin," Marcel Duchamp's neologism for the subtlest shade of difference [24]. Thierry De Duve presents reflections analogous to the interpretative ambiguity of Duchamp's Fountain, stating that the "infra-thin" separation works fully when distinguishing the same from the same, in cases of indifferent difference or different identity [25]. In principle, neither reading should be preferred over the other; what matters is the effort to articulate an operational logic for understanding the complexity of the structures of identity and difference adopted in transposition. Schwab admits that artistic research operating with transpositional processes need not represent a stable field, discipline, or concept, because each new example shifts the concept beyond its current definitional framework, which is generally not unusual given dynamically changing situations in art [26][27].
In researching contemporary AIArt production, we encounter several problems that currently complicate qualitative research and content analysis, for example the identification of artistic-research strategies. The main problem is the lack of relevant literature containing unifying views or an inventory of realized projects since the beginning of AIArt's development. Based on subjective conviction and knowledge of current artistic production, a model of artistic-research strategies involving operability with consciousness could be identifiable in Mario Klingemann's projects such as Uncanny Mirror (2018) [28][29] and Circuit Training [30]. This list could be expanded by explicating the artistic-research strategies in projects by Alexander Mordvintsev, Gene Kogan, Jake Elwes, Jon McCormack, Joy Buolamwini, Memo Akten, Michael Sedbon, Mike Tyka, Mimi Onuoha, Refik Anadol, Scott Eaton, Stephanie Dinkins, or Tega Brain.
After careful analysis and on the basis of subjective consideration, selected projects by Lauren Lee McCarthy were chosen for explication, as they demonstrate both artistic-research strategies based on the transposition of scientific procedures into the realm of art and a significant element of operability with consciousness. McCarthy's selection was reinforced by her approach to artistic research and her critical reflection on AI authorities and their power control over people. This type of authority is identifiable in the projects Voice in My Head (2023) and Unlearning Language (2021).
Lauren Lee McCarthy as an Explicatory Example
Lauren Lee McCarthy is an American artist and researcher who focuses on exploring interactions between people and technologies, specifically addressing topics such as potential power intrusions into people’s privacy and a critical view of the coexistence of humans and artificial intelligence systems. Her artistic-research strategies are oriented toward examining the social and ethical implications of technological innovations. She uses various media and approaches, including performative interventions, installations, and software projects.
In her projects, she programmatically works with the idea of consciousness in artificial intelligence systems. The conscious system in these projects is not presented as an entity aware of its own existence or demonstrating feelings, but rather as a system that shows no manifestations of phenomenological states of consciousness: it often appears as a power authority integrating consciousness without subjective experiences or feelings. Her artistic-research strategies operate with methods of computer-science research applied directly in simulated social situations as part of artistic projects. In this way, they touch on the boundaries, or intersections, of two operational levels of consciousness. The first level is represented by people who voluntarily participate in performative-interactive projects and interact with the artificial intelligence system that influences their conscious states. The second level is represented by the artificial intelligence system, which holds the information needed to control and act upon human conscious systems.
Unlearning Language
The performative-interactive project Unlearning Language (2021), designed by Lauren Lee McCarthy, poses a bold question: can we as humans unlearn to behave like machines? [31] The project was realized as an experiment in which artificial intelligence systems lead a group of participants on a journey to "unlearn" the behavior and language patterns that make us too similar to machines. The group is guided by artificial intelligence systems aiming to train people to behave less mechanically. As participants communicate, their speech, gestures, and facial expressions are detected by the artificial intelligence and highlighted using light, sound, and vibration. Participants must collaborate and find new ways of communicating that the artificial intelligence cannot detect, such as clapping, buzzing, or adjusting the speed, pitch, or pronunciation of their speech.
The artistic-research strategies here are oriented toward exploring the relationship between language and reality: language is not just a tool for describing reality but also a construct that shapes it. Through interaction with artificial intelligence, participants are forced to reflect on how language influences their thinking and perception of the world. The project deliberately overturns the current paradigms set by the technology industry. McCarthy and her collaborators programmed an artificial intelligence that recognizes speech, gestures, and facial expressions and responds to a group of participants communicating in a translucent room, whose task is to find channels of communication the AI module cannot recognize. The situations presented reveal how our behavior can be shaped by technological systems. In the work, artificial intelligence functions as a controller that responds to standardized behavior patterns, forcing participants to innovate and find new ways of communication. This process can have a profound impact on the self-perception of participants, who begin to reflect on the mechanical nature of their everyday interactions.
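The interaction logic described above can be reconstructed hypothetically. This is not McCarthy's code; the modality names, feedback mapping, and confidence threshold are all assumptions. The point it illustrates is structural: detectors cover a fixed set of machine-readable modalities, so behavior outside that set passes unnoticed.

```python
# Modalities the installation's detectors monitor, mapped to the ambient
# feedback used to highlight them (assumed mapping, for illustration)
FEEDBACK = {
    "speech": "light",
    "gesture": "sound",
    "facial_expression": "vibration",
}
THRESHOLD = 0.6  # assumed detector-confidence cutoff

def respond(events):
    """Return feedback cues for every machine-readable behavior detected."""
    cues = []
    for event in events:
        modality = event["modality"]
        if modality in FEEDBACK and event["confidence"] >= THRESHOLD:
            cues.append(FEEDBACK[modality])  # participant's pattern was caught
        # Behaviors outside the trained modalities (clapping, buzzing,
        # shifted pitch) fall through undetected
    return cues

events = [
    {"modality": "speech", "confidence": 0.9},
    {"modality": "clapping", "confidence": 0.9},   # no detector for this
    {"modality": "gesture", "confidence": 0.4},    # below detection threshold
]
print(respond(events))  # ['light']
```

The participants' strategy in the project amounts to steering their behavior into the undetected branch of this loop, which is precisely what makes the controller's paradigm visible.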
Voice in My Head
Voice in My Head (2023) is a project that combines strategies based on AI technologies and interactivity to explore how an AI chatbot can influence and shape a person's extended conversational decision-making [32]. Personal conversational decision-making is understood here as a person's inner dialogue conducted through inner speech. The project's design places participants in interaction with a chatbot based on ChatGPT, which gives them instructions through a wireless headset. In this way, the voice from the headset symbolically becomes their inner voice, with the potential to intervene in their social reality.
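The project's conversational loop can be sketched under stated assumptions: the prompt wording is invented, and the chat model is injected as a stub rather than a real ChatGPT call, so the sketch stays self-contained and does not claim to reproduce McCarthy's implementation.

```python
SYSTEM_PROMPT = (
    "You are the participant's inner voice. Speak in the first person, "
    "as their own thoughts, and give short suggestions about what to do next."
)

def inner_voice_turn(history, situation, chat_model):
    """One exchange between the participant's situation and the 'inner voice'.

    chat_model is any callable mapping a message list to a reply string;
    the project is based on ChatGPT, but here the model is injected as a
    stub so the loop stays self-contained.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history,
                {"role": "user", "content": situation}]
    reply = chat_model(messages)
    # Keep the dialogue as shared state, so later turns can refer back
    history.append({"role": "user", "content": situation})
    history.append({"role": "assistant", "content": reply})
    return reply  # delivered over the wireless headset as the "inner voice"

# Demo with a trivial stand-in model
def echo_model(messages):
    return "Maybe say hello to the person on your left."

history = []
print(inner_voice_turn(history, "I just entered a crowded gallery.", echo_model))
```

The accumulated `history` is what lets the external voice sustain the illusion of continuity with the participant's own inner speech across turns.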
The philosophical-psychological perspective opens the potential for discourse on multiple levels of the mind-body relationship. This relationship manifests itself through the technological interface as a means of linking internal thoughts with the external world. It is further analyzed through the interaction between the participant and the chatbot, where the voice from the headset, as a representative of the mind, influences the participant’s physical behavior in relation to their body. Such a process can demonstrate that our thoughts are not isolated from the physical environment but are closely linked to it and influence our self-perception. Another level of relationship is activated when the chatbot intervenes in the participant’s conversational decisions, causing them to question the origin of their thoughts and decisions.
This intervention by artificial intelligence reveals how technologies can manipulate our thoughts and emotions, altering our self-perception and our perception of reality. The discourse on the philosophical question of consciousness can be identified in the concept of selfhood, which must be understood in the context of philosophy and psychology and refers to the state or quality of being oneself: identity, or individuality. The project directly influences the participant's self-perception by replacing their inner voice with the external voice of the chatbot. This intervention can cause dissonance in the participant's identity, as their thoughts and decisions are modified by an external system. The participant's emotional response to the chatbot's instructions is an integral part of the project, even when those interactions provoke feelings of confusion, frustration, or relief as the participant's decisions are supported or guided by the external voice. This can be considered an analysis of the emotional consequences of technological interventions, especially interventions into decisions based on convergent thinking.
Summary
By analyzing the projects Unlearning Language and Voice in My Head, it is possible to identify the processes of transposing scientific methods into the realm of art. Significant examples for both projects involve the use of scientific methods from computer science based on insights from psychology or sociology. Both projects by Lauren Lee McCarthy explore profound questions concerning human consciousness and interaction with technology. Unlearning Language reveals the mechanical nature of our everyday interactions and forces us to seek new ways of communication, while Voice in My Head examines how technologies influence our inner thoughts and self-perception. These projects highlight the significant impact of technologies on our reality and identity, opening new discourses on the relationship between the mind, body, and technological systems.
The positions of Lauren Lee McCarthy in the analyzed projects reinforce the assumption that AIArt artistic-research strategies based on transpositional operations will be identifiable in other projects as well; indeed, heuristic judgment suggests it would be surprising if they were not. Again a subjective judgment is made, but at least in the project Voice in My Head the potential for remediation can be detected, even in the form of a thought experiment and speculative design.
Based on this study, it is also possible to consider that artistic-research strategies may carry questions that are achievable by scientific methods but that scientists never ask, revealing the infrapotential of phenomena. I want to draw attention to the importance and social responsibility of people (and still people, not yet machines) who model our future with artificial intelligence.
[1] GOYAL, Anirudh and Yoshua BENGIO, 2022. Inductive biases for deep learning of higher-level cognition. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences [online]. 2022, vol. 478, no. 2266. ISSN 1364-5021. Available at: doi:10.1098/rspa.2021.0068
[2] SEARLE, John R., 1980. Minds, brains, and programs. Behavioral and Brain Sciences [online]. 1980, vol. 3, no. 3, pp. 417–424. ISSN 0140-525X. Available at: doi:10.1017/S0140525X00005756
[3] SETH, Anil, 2021. Being You: A New Science of Consciousness. London: Faber and Faber. ISBN 978-1-5247-4287-4.
[4] BLOCK, Ned, 1995. On a confusion about a function of consciousness. Behavioral and Brain Sciences [online]. 1995, vol. 18, no. 2. ISSN 1469-1825. Available at: doi:10.1017/S0140525X00038188
[5] CHALMERS, David John, 1996. The Conscious Mind. New York, NY: Oxford University Press. ISBN 9780195105537.
[6] CHALMERS, David J., 2003. Consciousness and its Place in Nature. In: The Blackwell Guide to Philosophy of Mind [online]. Wiley, pp. 102–142. Available at: doi:10.1002/9780470998762.ch5
[7] BUTLIN, Patrick, Robert LONG, Eric ELMOZNINO, Yoshua BENGIO, Jonathan BIRCH, Axel CONSTANT, George DEANE, Stephen M. FLEMING, Chris FRITH, Xu JI, Ryota KANAI, Colin KLEIN, Grace LINDSAY, Matthias MICHEL, Liad MUDRIK, Megan A. K. PETERS, Eric SCHWITZGEBEL, Jonathan SIMON and Rufin VANRULLEN, 2023. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. 2023.
[8] OIZUMI, Masafumi, Larissa ALBANTAKIS and Giulio TONONI, 2014. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Computational Biology [online]. 2014, vol. 10, no. 5, e1003588. ISSN 1553-7358. Available at: doi:10.1371/journal.pcbi.1003588
[9] ALBANTAKIS, Larissa and Giulio TONONI, 2021. What we are is more than what we do [online]. arXiv, 2021. Available at: http://arxiv.org/abs/2102.04219
[10] BAARS, Bernard J, 1993. A Cognitive Theory of Consciousness. Cambridge, England: Cambridge University Press. ISBN 9780521427432.
[11] LAMME, Victor A.F., 2006. Towards a true neural stance on consciousness. Trends in Cognitive Sciences [online]. 2006, vol. 10, no. 11, pp. 494–501. ISSN 1364-6613. Available at: doi:10.1016/j.tics.2006.09.001
[12] FELLEMAN, Daniel J and David C. VAN ESSEN, 1991. Distributed Hierarchical Processing in the Primate Cerebral Cortex. Cerebral Cortex [online]. 1991, vol. 1, no. 1, pp. 1–47. ISSN 1047-3211. Available at: doi:10.1093/cercor/1.1.1-a
[13] ROSENTHAL, David, 2005. Consciousness and Mind. Oxford, England: Clarendon Press. ISBN 9780191568589.
[14] BROWN, Richard, 2015. The HOROR theory of phenomenal consciousness. Philosophical Studies [online]. 2015, vol. 172, no. 7, pp. 1783–1794. ISSN 0031-8116. Available at: doi:10.1007/s11098-014-0388-7
[15] LAMME, Victor A. F., 2020. Visual Functions Generating Conscious Seeing. Frontiers in Psychology [online]. 2020, vol. 11. ISSN 1664-1078. Available at: doi:10.3389/fpsyg.2020.00083
[16] DEHAENE, Stanislas and Jean-Pierre CHANGEUX, 2011. Experimental and Theoretical Approaches to Conscious Processing. Neuron [online]. 2011, vol. 70, no. 2, pp. 200–227. ISSN 0896-6273. Available at: doi:10.1016/j.neuron.2011.03.018
[17] MASHOUR, George A., Pieter ROELFSEMA, Jean-Pierre CHANGEUX and Stanislas DEHAENE, 2020. Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron [online]. 2020, vol. 105, no. 5, pp. 776–798. ISSN 0896-6273. Available at: doi:10.1016/j.neuron.2020.01.026
[18] ROSENTHAL, David and Josh WEISBERG, 2008. Higher-order theories of consciousness. Scholarpedia [online]. 2008, vol. 3, no. 5, p. 4407. ISSN 1941-6016. Available at: doi:10.4249/scholarpedia.4407
[19] SETH, Anil K. and Tim BAYNE, 2022. Theories of consciousness. Nature Reviews Neuroscience [online]. 2022, vol. 23, no. 7, pp. 439–452. ISSN 1471-003X. Available at: doi:10.1038/s41583-022-00587-4
[20] GRAZIANO, Michael S. A. and Taylor W. WEBB, 2015. The attention schema theory: a mechanistic account of subjective awareness. Frontiers in Psychology [online]. 2015, vol. 6. ISSN 1664-1078. Available at: doi:10.3389/fpsyg.2015.00500
[21] CHALMERS, David, 1995. Facing Up to the Problem of Consciousness. Journal of Consciousness Studies [online]. 1995, vol. 2, no. 3, pp. 200–219. Available at: https://consc.net/papers/facing.pdf
[22] CHALMERS, David, 2018. The Meta-Problem of Consciousness. Journal of Consciousness Studies. 2018, vol. 25, no. 9–10.
[23] LOVELESS, Natalie, 2019. How to make art at the end of the world: a manifesto for research-creation. Durham; London: Duke University Press. ISBN 9781478003724; 9781478004028.
[24] DE DUVE, Thierry, 1998. Kant after Duchamp. London, England: MIT Press. ISBN 9780262540940.
[25] DE DUVE, Thierry and Dana POLAN, 2005. Pictorial nominalism. Minneapolis, MN: University of Minnesota Press. ISBN 9780816648597.
[26] SCHWAB, Michael, 2018. Transpositionality and Artistic Research. In: Michael SCHWAB, ed. Transpositions: Aesthetico-Epistemic Operators in Artistic Research [online]. Leuven University Press, pp. 191–214. ISBN 9789462701410. Available at: http://www.jstor.org/stable/j.ctv4s7k96.14
[27] SCHWAB, Michael and Henk BORGDORFF, 2014. The exposition of artistic research: publishing art in academia. Leiden: Leiden University Press. ISBN 9789087281649.
[28] VERBIST, Etienne, 2019. Mario Klingemann: Instruments of Creation or Can Artificial Intelligence Replace Human? In: ArtDependence [online]. Available at: https://www.artdependence.com/articles/mario-klingemann-instruments-of-creation-or-can-artificial-intelligence-replace-human/
[29] VALENTINE-LEWIS, Andrea, 2020. Biometric Metamorphoses // MirNs at New Media Gallery. In: ReIssue [online]. Available at: https://reissue.pub/articles/biometric-metamorphoses-mirns-at-new-media-gallery/
[30] KLINGEMANN, Mario, 2020. Circuit Training: Machine-made Art for the People [online]. 2020. Available at: https://artsandculture.google.com/story/0028/ngWRdP9M5scyLQ
[31] MCCARTHY, Lauren Lee, 2021. Unlearning Language [online]. 2021. Available at: https://www.laurenleemccarthy.com/unlearning-language
[32] MCCARTHY, Lauren Lee, 2023. Voice In My Head [online]. 2023. Available at: https://lauren-mccarthy.com/Voice-In-My-Head
Bibliography
ALBANTAKIS, Larissa and Giulio TONONI, 2021. What we are is more than what we do [online]. 2021. arXiv. Available at: http://arxiv.org/abs/2102.04219
BAARS, Bernard J., 1993. A cognitive theory of consciousness. Cambridge, England: Cambridge University Press. ISBN 9780521427432.
BLOCK, Ned, 1995. On a confusion about a function of consciousness. Behavioral and Brain Sciences [online]. 1995, vol. 18, no. 2. ISSN 1469-1825. Available at: doi:10.1017/S0140525X00038188
BROWN, Richard, 2015. The HOROR theory of phenomenal consciousness. Philosophical Studies [online]. 2015, vol. 172, no. 7, pp. 1783–1794. ISSN 0031-8116. Available at: doi:10.1007/s11098-014-0388-7
BUTLIN, Patrick et al., 2023. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness [online]. 22 August 2023. arXiv [accessed 26 June 2024]. Available at: http://arxiv.org/abs/2308.08708
DE DUVE, Thierry, 1998. Kant after Duchamp. London, England: MIT Press. ISBN 9780262540940.
DE DUVE, Thierry and Dana POLAN, 2005. Pictorial nominalism. Minneapolis, MN: University of Minnesota Press. ISBN 9780816648597.
DEHAENE, Stanislas and Jean-Pierre CHANGEUX, 2011. Experimental and Theoretical Approaches to Conscious Processing. Neuron [online]. 2011, vol. 70, no. 2, pp. 200–227. ISSN 0896-6273. Available at: doi:10.1016/j.neuron.2011.03.018
FELLEMAN, Daniel J. and David C. VAN ESSEN, 1991. Distributed Hierarchical Processing in the Primate Cerebral Cortex. Cerebral Cortex [online]. 1991, vol. 1, no. 1, pp. 1–1. ISSN 1047-3211. Available at: doi:10.1093/cercor/1.1.1-a
GOYAL, Anirudh and Yoshua BENGIO, 2022. Inductive biases for deep learning of higher-level cognition. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences [online]. 2022, vol. 478, no. 2266. ISSN 1364-5021. Available at: doi:10.1098/rspa.2021.0068
GRAZIANO, Michael S. A. and Taylor W. WEBB, 2015. The attention schema theory: a mechanistic account of subjective awareness. Frontiers in Psychology [online]. 2015, vol. 6. ISSN 1664-1078. Available at: doi:10.3389/fpsyg.2015.00500
CHALMERS, David, 2018. The Meta-Problem of Consciousness. Journal of Consciousness Studies. 2018, vol. 25, no. 9–10.
CHALMERS, David J., 2003. Consciousness and its Place in Nature. In: The Blackwell Guide to Philosophy of Mind [online]. Wiley, pp. 102–142. Available at: doi:10.1002/9780470998762.ch5
CHALMERS, David John, 1996. The conscious mind. New York, NY: Oxford University Press. ISBN 9780195105537.
CHALMERS, David, 1995. Facing Up to the Problem of Consciousness. Journal of Consciousness Studies [online]. 1995, vol. 2, no. 3, pp. 200–219. Available at: https://consc.net/papers/facing.pdf
KLINGEMANN, Mario, 2020. Circuit Training: Machine-made Art for the People [online]. 2020. Available at: https://artsandculture.google.com/story/0028/ngWRdP9M5scyLQ
LAMME, Victor A. F., 2020. Visual Functions Generating Conscious Seeing. Frontiers in Psychology [online]. 2020, vol. 11. ISSN 1664-1078. Available at: doi:10.3389/fpsyg.2020.00083
LAMME, Victor A. F., 2006. Towards a true neural stance on consciousness. Trends in Cognitive Sciences [online]. 2006, vol. 10, no. 11, pp. 494–501. ISSN 1364-6613. Available at: doi:10.1016/j.tics.2006.09.001
LOVELESS, Natalie, 2019. How to make art at the end of the world: a manifesto for research-creation. Durham; London: Duke University Press. ISBN 9781478003724; 9781478004028.
MASHOUR, George A., Pieter ROELFSEMA, Jean-Pierre CHANGEUX and Stanislas DEHAENE, 2020. Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron [online]. 2020, vol. 105, no. 5, pp. 776–798. ISSN 0896-6273. Available at: doi:10.1016/j.neuron.2020.01.026
MCCARTHY, Lauren Lee, 2021. Unlearning Language [online]. 2021. Available at: https://www.laurenleemccarthy.com/unlearning-language
MCCARTHY, Lauren Lee, 2023. Voice In My Head [online]. 2023. Available at: https://lauren-mccarthy.com/Voice-In-My-Head
OIZUMI, Masafumi, Larissa ALBANTAKIS and Giulio TONONI, 2014. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Computational Biology [online]. 2014, vol. 10, no. 5, e1003588. ISSN 1553-7358. Available at: doi:10.1371/journal.pcbi.1003588
ROSENTHAL, David, 2005. Consciousness and Mind. Oxford, England: Clarendon Press. ISBN 9780191568589.
ROSENTHAL, David and Josh WEISBERG, 2008. Higher-order theories of consciousness. Scholarpedia [online]. 2008, vol. 3, no. 5, p. 4407. ISSN 1941-6016. Available at: doi:10.4249/scholarpedia.4407
SEARLE, John R., 1980. Minds, brains, and programs. Behavioral and Brain Sciences [online]. 1980, vol. 3, no. 3, pp. 417–424. ISSN 0140-525X. Available at: doi:10.1017/S0140525X00005756
SETH, Anil, 2021. Being You: A New Science of Consciousness. London: Faber and Faber. ISBN 978-1-5247-4287-4.
SETH, Anil K. and Tim BAYNE, 2022. Theories of consciousness. Nature Reviews Neuroscience [online]. 2022, vol. 23, no. 7, pp. 439–452. ISSN 1471-003X. Available at: doi:10.1038/s41583-022-00587-4
SETH, Anil K. and Jakob HOHWY, 2021. Predictive processing as an empirical theory for consciousness science. Cognitive Neuroscience [online]. 2021, vol. 12, no. 2, pp. 89–90. ISSN 1758-8928. Available at: doi:10.1080/17588928.2020.1838467
SCHWAB, Michael, 2018. Transpositionality and Artistic Research. In: Michael SCHWAB, ed. Transpositions: Aesthetico-Epistemic Operators in Artistic Research [online]. Leuven University Press, pp. 191–214. ISBN 9789462701410. Available at: http://www.jstor.org/stable/j.ctv4s7k96.14
SCHWAB, Michael and Henk BORGDORFF, 2014. The exposition of artistic research: publishing art in academia. Leiden: Leiden University Press. ISBN 9789087281649.
VALENTINE-LEWIS, Andrea, 2020. Biometric Metamorphoses // MirNs at New Media Gallery. In: ReIssue [online]. Available at: https://reissue.pub/articles/biometric-metamorphoses-mirns-at-new-media-gallery/
VERBIST, Etienne, 2019. Mario Klingemann: Instruments of Creation or Can Artificial Intelligence Replace Human? In: ArtDependence [online]. Available at: https://www.artdependence.com/articles/mario-klingemann-instruments-of-creation-or-can-artificial-intelligence-replace-human/