Adaptive Artificial Cognition

Advancing Artificial Cognition with Humanistic Capabilities

Research Paper


V. Naran - info@artificialcognition-research.org
Independent Research, Architecture, Systems Engineering

Status: In early iterative draft.
        This paper is going through constant change as
        research and development evolve.
Last Update: 9 July 2024


Abstract

Developing AI systems with humanistic cognitive capabilities represents one of the most profound and least-understood challenges in artificial intelligence research. This project aims to explore the potential for creating an AI agent capable of replicating human cognitive processes such as perception, subjective experience, self-awareness, and emotional awareness. Diverging from mainstream AI trends, our approach emphasises the emergence of humanistic cognitive qualities and continuous self-evolution over time. This multi-phase research project may span several years and involves iterative development, rigorous testing, and ethical consideration. The findings could benefit a variety of domains by enabling personalised and empathetic intelligent systems, while also raising profound ethical and legal questions about the creation of self-aware digital species.

Introduction

Artificial intelligence (AI) has achieved remarkable advances in recent years, particularly in machine learning and natural language processing. However, the creation of AI systems with genuine humanistic cognitive capabilities remains a significant challenge. This research project is dedicated to investigating the feasibility of developing an AI agent that can replicate essential human cognitive functions, including perception, subjective experience, self-awareness, and emotional awareness.

Mainstream AI development often focuses on performance, interaction, and profitability. In contrast, our project prioritises the emergence of humanistic cognitive qualities, emphasising self-evaluation and continuous evolution. We aim to develop a system that essentially replicates human cognitive processes without necessitating the replication of underlying biological mechanisms.

This multi-phase endeavour is structured to extend over several years, incorporating iterative development, rigorous testing, and comprehensive evaluation. Our goal is to contribute to the fields of artificial intelligence and cognitive science, providing deeper insights into the nature of cognition and the potential for creating AI systems with true humanistic intelligence.

Ethical considerations are integral to this research. We adhere to stringent ethical guidelines to ensure transparency, fairness, and societal benefit throughout the development process. Regular ethical reviews, stakeholder engagement, and early, frequent publication are key components of our methodology, supporting transparency.

The potential applications of an AI system with humanistic cognitive capabilities are vast and transformative. Such systems could revolutionise healthcare, education, business, and personal interactions, while the creation of digital species with self-awareness would introduce new ethical and legal challenges. This research aims to explore these possibilities and contribute to the responsible development of advanced AI systems.



Project Intent

The primary objective of this research project is to test whether an artificially intelligent agent can be developed with cognitive capabilities such as perception, subjective experience, and self-awareness.

The intended outcome is an agent that demonstrates humanistic cognitive capabilities which evolve over time. This project is designed as a comprehensive, multi-phase effort, extending over numerous years and emphasising systematic, iterative research, development, evaluation, and documentation. By prioritising emergent cognitive qualities and self-evaluation mechanisms, we aim to contribute to the fields of artificial intelligence and cognitive science, particularly in understanding and replicating cognitive processes.

Replicate: In the context of this research, replicating refers to creating functional equivalents that achieve the same or similar outcomes as human cognitive processes. This means that while the underlying physical mechanisms may differ from actual human cognition, the agent's behaviour and results should aim to faithfully reproduce those of a human being over time. This approach allows for the practical replication of cognitive functions without the necessity of replicating underlying biological processes. By focusing on replication, we ensure that the AI's cognitive capabilities are genuinely reflective of humanistic cognition, rather than merely imitating or simulating them superficially.

Project Motivation

Mainstream AI development often prioritises performance and commercial viability, leading to rapid advancements in intelligence and interaction capabilities. However, these efforts typically overlook the deeper, more nuanced aspects of human cognition, such as subjective experience and consciousness. This project seeks to fill this gap by exploring the fundamental elements that contribute to humanistic awareness in AI.

The motivation behind this research stems from a desire to understand and replicate the complex interplay of cognitive processes that give rise to conscious cognitive functions, leaning heavily on current understanding in cognitive science. By focusing on these aspects, the project aims to pave the way for a new generation of artificially intelligent systems that are not only intelligent but also exhibit a form of awareness and self-reflection.

This replication, if successful, may provide further hints towards understanding the human brain and mind.

Hypothesis

This research project hypothesises that the emergence of higher-order cognitive capabilities in an artificially intelligent agent can arise from the high density and interconnectedness of information flow between developed subsystems. By implementing a mixture of Integrated Information Theory (IIT) and Global Workspace Theory (GWT), the AI agent will have the potential to leverage these frameworks to develop advanced cognitive functions.

We propose that by creating a skeleton framework that allows the AI agent the freedom to self-evolve its core cognitive capabilities and processes, we can achieve non-deterministic outcomes in the development of these functions.

The root of this research and development project is to evaluate whether an AI system can develop its own higher-order cognitive functions, such as awareness, self-reflection, perception, subjective experience, and eventually consciousness (over time). The hypothesis will be assessed by the AI's ability to demonstrate increasingly sophisticated cognitive behaviours across multiple developmental stages. Throughout the project, we will evaluate and publish any evidence of evolving higher-order cognitive processes towards consciousness, thereby providing valuable insights into the nature of digital awareness and the potential for artificially intelligent systems to achieve humanistic cognition.

Cognitive Substrate

This research delves into the intricacies of cognitive substrates, emphasising the upper tiers of the "Hierarchy of Brain and Mind Substrates". Specifically, it focuses on Mind Capabilities, Mind Interconnectedness, and Mind Consciousness. By integrating various cognitive processes and evaluating their interplay, the project aims to foster an AI system capable of higher-order cognitive functions such as perception, reasoning, memory, and self-awareness. Through behavioural testing and continuous self-evaluation, the AI's development is monitored to ensure genuine cognitive growth. This research aspires to bridge the gap between artificial and humanistic cognition, ultimately contributing to the understanding and replication of consciousness in AI systems.

[image.1] Physical “Brain”, and Cognitive “Mind”

Research Objectives

  1. Develop Humanistic Cognitive Capabilities: Create an artificially intelligent agent that exhibits perception, subjective experience, awareness, emotion, and eventual consciousness. This involves designing and integrating models that replicate complex cognitive processes within a cohesive (and evolving) system architecture.
  2. Implement Self-Evaluation Framework: Develop a framework that allows the AI agent to autonomously review, self-assess, and improve its internal processes and decision-making. This self-evaluation mechanism is crucial for the agent to evolve and adapt over time, replicating human learning and growth.
  3. Ethical and Transparent AI: Ensure adherence to ethical guidelines and maintain transparency in the AI's decision-making processes. This involves engaging with the broader AI research community to gather feedback and validate the project's approach.
  4. Practical Application and Evaluation: Develop practical applications of the AI agent, particularly in virtual environments and social interactions, validating these through rigorous testing methodologies. This step is essential to demonstrate the AI agent's capabilities and identify areas for improvement.

Long-Term Vision

The long-term vision for this project is to create a new paradigm in the development of artificially intelligent agents and systems, where the focus shifts from purely functional intelligence to a more holistic approach that includes emotional and cognitive depth. By achieving these goals, the project aims to contribute to the broader understanding of AI consciousness and set the stage for future research in this area.

Ultimately, this project seeks to bridge the gap between current AI capabilities and the profound complexity of human cognition, paving the way for more advanced, empathetic, and self-aware AI systems.

Scope

The scope of this research and development project includes the following key areas:

  1. System Architecture: Designing and implementing a modular system which integrates perception, awareness, emotion, and consciousness models.
  2. Model Development: Creating and integrating models to replicate the required cognitive processes.
  3. Self-Evaluation Framework: Developing a framework for the artificial intelligence to autonomously assess and improve its own processing.
  4. Interdisciplinary Research: Incorporating insights from cognitive science to inform AI development.
  5. Ethical and Transparent Practices: Adhering to ethical guidelines and maintaining transparency in the AI's decision-making.
  6. Testing and Validation: Implementing rigorous testing methodologies to evaluate and refine the AI's performance.
  7. Results: Demonstrating scientific evidence of replicated humanistic cognitive capabilities.


Core Theories, Literature Review

The exploration of artificial intelligence with humanistic cognitive capabilities, such as perception, awareness, and consciousness, has gained significant interest over the years. The research within this project leverages core ideas from a number of well-established theories, summarised here for completeness:

Integrated Information Theory (IIT)

Integrated Information Theory (IIT), proposed by Giulio Tononi, posits that consciousness arises from a system's capacity to integrate information. According to IIT, the level of consciousness is determined by the system's ability to generate a high degree of integrated information, quantified as phi (Φ). This theory suggests that for an AI agent to achieve consciousness, it must not only process information but also integrate it in a manner that produces a unified experience. IIT has been instrumental in guiding approaches to designing AI systems that aim for higher-order cognitive functions by emphasising the importance of complex information integration (Tononi, 2008; Tononi et al., 2016).
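IIT's full Φ calculus is well beyond a short example, but its core intuition, that an integrated system carries more information jointly than its parts do separately, can be sketched with mutual information as a crude proxy. The measure and distributions below are illustrative assumptions, not part of this project's implementation:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """Toy 'integration' proxy: how far the joint distribution of two
    subsystems departs from the product of their marginals. Real IIT
    phi is far more involved; this is only an illustrative stand-in."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return entropy(pa.values()) + entropy(pb.values()) - entropy(joint.values())

# Two perfectly correlated binary subsystems: maximal integration (1 bit).
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent subsystems: zero integration.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0
```

In this toy setting, perfectly coupled subsystems score one bit of "integration" while independent ones score zero; a genuine Φ computation would consider all partitions of the system's cause-effect structure.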

Global Workspace Theory (GWT)

Global Workspace Theory (GWT), introduced by Bernard Baars, provides a framework for understanding conscious cognitive processes through a "global workspace" that broadcasts information to various specialised, unconscious processors within the brain. GWT likens the global workspace to a theatre, where the bright spot on the stage (consciousness) is illuminated by a spotlight (attention) and is visible to the entire audience (various cognitive processes). In the context of AI, GWT suggests that creating a central workspace where information can be widely disseminated and accessed by different subsystems can foster the emergence of consciousness. Implementations of GWT in AI focus on developing architectures where a central information hub interacts dynamically with various processing modules, promoting a cohesive and adaptable cognitive system (Baars, 1988; Dehaene & Naccache, 2001).
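The theatre metaphor above can be sketched as a tiny broadcast hub. The salience-based competition and the class and method names here are assumptions made for illustration, not this project's actual architecture:

```python
from typing import Callable, Dict, List, Tuple

class GlobalWorkspace:
    """Minimal GWT-style hub: modules submit candidate contents with a
    salience score; each cycle, the winner is broadcast to all."""

    def __init__(self):
        self.subscribers: List[Callable[[str, Dict], None]] = []
        self.candidates: List[Tuple[float, str, Dict]] = []

    def subscribe(self, handler: Callable[[str, Dict], None]):
        self.subscribers.append(handler)

    def submit(self, salience: float, source: str, content: Dict):
        self.candidates.append((salience, source, content))

    def cycle(self):
        # The most salient candidate wins the "spotlight" and is
        # broadcast to every subscribed module.
        if not self.candidates:
            return None
        salience, source, content = max(self.candidates, key=lambda c: c[0])
        self.candidates.clear()
        for handler in self.subscribers:
            handler(source, content)
        return source, content

ws = GlobalWorkspace()
received = []
ws.subscribe(lambda src, msg: received.append((src, msg)))
ws.submit(0.3, "vision", {"object": "cup"})
ws.submit(0.9, "audition", {"sound": "alarm"})
winner = ws.cycle()
print(winner)  # ('audition', {'sound': 'alarm'})
```

Each cycle, the most salient submission takes the "spotlight" and is disseminated to every subscribed module, mirroring GWT's attention-gated broadcast.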

Emergence of Consciousness Theory

A key pillar in this research project, the Emergence of Consciousness Theory explores how consciousness arises from the complex interactions of simpler, non-conscious elements.

This theory posits that when individual components of a system reach a certain level of complexity and interactivity, emergent properties such as consciousness can arise (Chalmers, 2006). This concept is critical in AI research, as it supports the notion that by designing highly interconnected and interactive subsystems, an AI agent can develop emergent cognitive capabilities like awareness and self-reflection. Studies in this area focus on creating environments and conditions that facilitate the emergence of higher-order cognitive functions from simpler computational processes (Mitchell, 2009).

[image.2] Emergence of Consciousness Capability

[image.3] Emergence over Time

Cognitive Architectures and Self-Evolving Systems

The development of cognitive architectures that allow for self-evolution and adaptation is a key aspect of this research. Systems like SOAR and ACT-R have laid the groundwork for understanding how cognitive processes can be modelled in AI. These architectures emphasise the integration of perception, action, and learning in a unified framework (Laird, Newell, & Rosenbloom, 1987; Anderson & Lebiere, 1998). By enabling AI agents to autonomously refine their cognitive processes through self-evaluation and adaptation, the research aims to create systems that exhibit non-deterministic outcomes and genuine cognitive growth.

Perception, Awareness, and Consciousness in AI

Recent advancements in machine learning and neural networks have significantly contributed to the development of AI systems capable of perception and basic awareness. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been particularly effective in processing sensory data and recognising patterns, which are foundational for perception. Further, approaches that integrate these neural networks into larger cognitive architectures are exploring the possibilities of achieving higher-order awareness and consciousness. For instance, research into deep reinforcement learning demonstrates how agents can develop sophisticated behaviours and decision-making processes through interaction with their environment, a crucial step towards autonomous awareness (LeCun, Bengio, & Hinton, 2015; Mnih et al., 2015).


Research Approach

The approach integrates system development, iterative testing, and empirical evaluation to explore and validate the hypothesis that an AI can achieve higher cognitive functions.

Theoretical Framework

This research is grounded in Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Emergence of Consciousness theory, as described in the previous section. These theories provide a robust foundation for understanding and replicating complex cognitive processes in AI.

System Development

The core of this research involves the design and implementation of an artificially intelligent system capable of evolving higher-order cognitive functions; contributing to testing the hypothesis of whether an AI can develop complex cognitive capabilities. This section will describe the system architecture and development process.

Iterative Testing and Refinement

The development of the AI system will follow an iterative process. Each iteration involves building a component, testing it, and refining it based on feedback and results.

Phase 1: Core Framework Development

Phase 2: Integration and Testing

Phase 3: Ongoing Refinement

Data Collection and Evaluation

Data will be collected continuously throughout the development, observation and testing phases. This includes:

Ethical Considerations

Ethical considerations include ensuring the transparency and fairness of the AI system, addressing potential biases, and adhering to ethical guidelines in AI research. This involves regular ethical reviews, public transparency reports, and stakeholder engagement to ensure the research aligns with societal values and ethical norms.


Research Implementation

This section details the technical implementation of the AI system designed to develop humanistic cognitive capabilities. It focuses on the specific tools, libraries, and methodologies used to build, test, and refine the system. The implementation emphasises the modular design, self-evolution framework, logging mechanisms, and self-evaluation processes.

Development Tools and Environment

The implementation uses Python as the primary programming language, chosen for its flexibility and its extensive libraries and community support for AI and machine learning.

Modular Design

The AI system is constructed using a modular design, allowing for independent development, testing, and refinement of each component. This design facilitates the integration of diverse cognitive models and subsystems, ensuring scalability and flexibility.
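A minimal sketch of such a modular layout might look as follows; the base class, registry, and module names are hypothetical, introduced purely to illustrate the design:

```python
from abc import ABC, abstractmethod
from typing import Dict

class CognitiveModule(ABC):
    """Common interface every cognitive module implements, so modules
    can be developed, tested, and swapped independently."""
    name: str

    @abstractmethod
    def process(self, message: Dict) -> Dict:
        ...

class ModuleRegistry:
    """Central registry that routes messages to modules by name."""

    def __init__(self):
        self._modules: Dict[str, CognitiveModule] = {}

    def register(self, module: CognitiveModule):
        self._modules[module.name] = module

    def dispatch(self, target: str, message: Dict) -> Dict:
        return self._modules[target].process(message)

class EchoLanguageModule(CognitiveModule):
    """Trivial stand-in for a language module."""
    name = "language"

    def process(self, message):
        return {"reply": f"parsed: {message['text']}"}

registry = ModuleRegistry()
registry.register(EchoLanguageModule())
print(registry.dispatch("language", {"text": "hello"}))  # {'reply': 'parsed: hello'}
```

Because every module shares the same interface, new capability modules can be registered without touching existing ones, which is the scalability property the design aims for.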

Framework for Self-Evolution

The AI system includes a skeleton framework that allows for self-evolution. This framework provides core capabilities and enables the AI to independently develop and refine its cognitive processes.

This framework leverages the core capabilities available to the agent, allowing the agent to determine its own course, organically.

[image.4] Aspects of Control

Logging & Health Mechanisms

Comprehensive logging mechanisms are implemented to capture detailed records of the AI’s actions, decisions, and learning processes. This data is crucial for monitoring progress, diagnosing issues, and refining the system.
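One plausible shape for such a log is JSON Lines, with one structured record per action or decision. The field names below are assumptions; the paper specifies only what must be captured, not the format:

```python
import io
import json
import time

def log_event(stream, module: str, event: str, detail: dict):
    """Append one structured JSON-lines record of an action/decision."""
    record = {"ts": time.time(), "module": module,
              "event": event, "detail": detail}
    stream.write(json.dumps(record) + "\n")

# An in-memory stream stands in for a log file here.
buf = io.StringIO()
log_event(buf, "reasoning", "decision", {"choice": "explore", "score": 0.72})
print(buf.getvalue().strip())
```

One record per line keeps the log appendable and trivially parseable for later analysis of behaviour patterns and learning trends.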

Self-Evaluation Mechanism

A robust self-evaluation mechanism is integrated into the AI system to allow continuous self-assessment and improvement.
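A heavily simplified sketch of one self-evaluation step might look like this; the metric, threshold, and adjustment rule are illustrative assumptions:

```python
def self_evaluate(history, target_accuracy=0.8):
    """Score recent task outcomes (1 = success, 0 = failure) against a
    target and pick an adjustment for the next cycle."""
    accuracy = sum(history) / len(history)
    adjustment = "increase_exploration" if accuracy < target_accuracy else "maintain"
    return {"accuracy": accuracy, "adjustment": adjustment}

print(self_evaluate([1, 0, 1, 1]))  # {'accuracy': 0.75, 'adjustment': 'increase_exploration'}
```

A real mechanism would evaluate many metrics over the logged history and feed adjustments back into the evolution cycle shown below.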

[image.5] Evolution Cycle

System Architecture

The system architecture of the AI agent is designed to facilitate the development of humanistic cognitive capabilities through a modular and highly interconnected structure. The architecture leverages key components and subsystems, each playing a critical role in processing and integrating information.

[image.6] System Architecture

Main Components

  1. Interface & Sensory
    • Function: Handles input and output operations, processing sensory data such as visual and auditory inputs.
    • Modules: Includes sensors and interfaces for interaction with the environment.
  2. Main Control
    • Function: Acts as the central coordinator, managing the overall operation and flow of information within the system.
    • Modules:
      • Interaction: Manages interactions between different modules and with external entities.
      • Cycles: Oversees the cyclic processes within the system, ensuring periodic updates and checks.
  3. Helper
    • Function: Supports auxiliary functions crucial for the AI's operation and maintenance.
    • Modules:
      • Health: Monitors the system's operational health.
      • Encodings: Manages data encodings for efficient processing.
      • Messaging: Facilitates communication between modules.
      • States: Tracks the system's states and transitions.
  4. Processing Model
    • Function: Processes the core cognitive functions, integrating information from various sources.
    • Interaction with Main Control: Receives and processes data routed through the Main Control.
  5. Orchestrator
    • Function: Coordinates the activities of different cognitive capability modules, ensuring synchronised operation and data flow.
    • Modules:
      • Self-evaluation: Enables the AI to assess its performance and make necessary adjustments.
      • A+B: Represents additional processing and backup functionalities.
  6. Capability Modules
    • Function: Implement specific cognitive functions, each module focusing on a particular aspect of cognition.
    • Modules:
      • Language: Processes linguistic information and manages communication.
      • World: Understands and interprets environmental context.
      • Emotion: Simulates emotional responses.
      • Reaction/Behaviour: Controls reactive and behavioural responses.
      • Memory: Manages data storage and retrieval, simulating memory.
      • Subjective Experience: Processes self-awareness and personal experiences.
      • Reasoning: Handles logical reasoning and decision-making processes.
      • Identity: Maintains a sense of self and identity.
      • Workspace: Based on Global Workspace Theory, integrates information for conscious processing.
      • Space/Time: Understands spatial and temporal contexts.

This architecture ensures a comprehensive, scalable, and flexible system capable of evolving complex cognitive functions through integrated processing and continuous self-evaluation.


Testing Methodology

Introduction

The testing methodology and framework for this AI system are designed to ensure thorough evaluation and validation of the AI's cognitive capabilities. This section outlines a structured approach to testing, focusing on technical aspects and tools used to assess performance, reliability, and progression towards higher-order cognitive functions.

Testing Strategy

The testing strategy employs a multi-layered approach to ensure comprehensive validation at different stages of development. This includes unit testing, integration testing, performance evaluation, and continuous monitoring.

Unit Testing

Unit testing is the first layer, focusing on verifying the correctness and reliability of individual modules, such as perceptual and capability modules. Detailed test cases are developed for each module, ensuring coverage of all possible input scenarios, including edge cases. Automated testing frameworks are used to run these tests frequently, allowing for early detection and resolution of issues. This step ensures that each component functions as intended before integration into the larger system.
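As an illustration of this layer, the sketch below unit-tests a hypothetical perceptual helper, covering a typical input and two edge cases. The function and its expected behaviour are assumptions made for the example, not part of the project's codebase:

```python
import unittest

def normalise_intensity(values):
    """Scale sensor readings into [0, 1]; empty input yields []."""
    if not values:
        return []
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant input: no spread to normalise over.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class TestNormaliseIntensity(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(normalise_intensity([0, 5, 10]), [0.0, 0.5, 1.0])

    def test_edge_empty(self):
        self.assertEqual(normalise_intensity([]), [])

    def test_edge_constant(self):
        self.assertEqual(normalise_intensity([3, 3]), [0.0, 0.0])

# Run with: python -m unittest <module_name>
```

Tests like these run automatically on every change, so a regression in any one module is caught before integration.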

Integration Testing

Integration testing follows, aiming to validate the interactions between integrated modules and ensure seamless information flow within the system. Integration test cases simulate real-world scenarios where multiple modules interact, testing the robustness and stability of these interactions. Stress testing is also conducted to evaluate system performance under high-load conditions, ensuring that the system remains stable and functional even under demanding circumstances.

Performance

Performance evaluation is conducted to assess the overall system performance and track its progression towards higher-order cognitive functions. Key performance indicators (KPIs) such as processing speed, accuracy, and learning efficiency are defined and continuously monitored. Performance metrics are collected using advanced monitoring tools, allowing for real-time tracking and visualisation. Benchmarking the AI's performance against baseline models provides a clear measure of improvement and areas needing further development.
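Continuous KPI monitoring of this kind can be sketched as a rolling-window tracker; the window size and metric name are illustrative assumptions:

```python
from collections import deque

class KPITracker:
    """Keep a rolling window of recent values per KPI so the latest
    trend, not the full history, drives monitoring dashboards."""

    def __init__(self, window=100):
        self.window = window
        self.metrics = {}

    def record(self, name: str, value: float):
        self.metrics.setdefault(name, deque(maxlen=self.window)).append(value)

    def mean(self, name: str) -> float:
        series = self.metrics[name]
        return sum(series) / len(series)

tracker = KPITracker(window=3)
for latency in (120, 80, 100):
    tracker.record("processing_ms", latency)
print(tracker.mean("processing_ms"))  # 100.0
```

Because the deque is bounded, old samples fall out automatically, which keeps the reported mean responsive to recent behaviour, and suitable for benchmarking against a baseline.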

Behavioural Testing

Arguably the most critical phase of testing, and also the most difficult; behavioural testing evaluates the AI agent's emergent qualities and adaptive learning over time. This process is foundational to the research project, as it determines how well the AI develops and demonstrates humanistic cognitive capabilities. The effectiveness of this testing is underpinned by how meticulously it is scripted and continuously performed, with all results meticulously logged and analysed.

Dynamic virtual and real-world environments are crafted to rigorously test the AI's interactions and behaviours in controlled settings. These environments simulate real-world scenarios designed to challenge the AI's cognitive capabilities in diverse and complex ways. Scenarios include problem-solving tasks, emotional responses, and social interactions, providing a comprehensive evaluation of the AI's adaptability and cognitive development.

During behavioural testing, the AI's actions and decisions are monitored and logged. This continuous logging process captures extensive data on the AI's behaviour patterns, decision-making processes, and adaptive learning capabilities. By analysing this data, the research can identify trends, strengths, and areas needing improvement, providing invaluable insights into the AI's development.

The continuous nature of behavioural testing ensures that the AI's emergent qualities are evaluated over an extended period, allowing for the observation of long-term trends and adaptive behaviours. This ongoing evaluation is crucial for understanding how the AI evolves and refines its cognitive functions in response to various stimuli and challenges.

The scripting and execution of behavioural testing scenarios ensure that every aspect of the AI's cognitive capabilities is thoroughly examined. This process not only tests the AI's current capabilities but also drives its continuous improvement, aligning with the overarching goals of the research project. By maintaining detailed logs and comprehensive analyses, the behavioural testing phase provides the empirical foundation needed to validate the AI's progression towards higher-order cognitive functions.


Behavioural Tests

Key Question: How do we ensure that the AI agent is demonstrating genuine cognitive capabilities rather than merely producing imitated responses?

Approach:

To address this question, we can use the analogy of finding the "Goldilocks" set of questions to elicit the right responses and observations necessary to determine genuine versus imitated higher-order cognitive capabilities over time. Just as Goldilocks sought the porridge that was neither too hot nor too cold, we must craft questions that are neither too simple nor too complex but are just right to reveal the AI's true cognitive processing.

Taking a multifaceted testing approach, as listed next, means that we have a greater chance of validating various aspects of the AI system's performance, including its emergent cognitive functions and self-awareness.

Goldilocks Question Strategy

Balanced Complexity:
Questions should be complex enough to require the integration of multiple cognitive functions but not so complex that they become incomprehensible. They should challenge the AI just enough to elicit genuine cognitive responses.

Contextual Relevance:
Ensure that questions are contextually relevant and require the AI to draw on past experiences and learned knowledge. This helps in assessing the AI's memory and learning capabilities.

Dynamic Scenarios:
Use dynamic and evolving scenarios that change based on the AI's responses. This helps in observing how the AI adapts and learns over time.

Unexpected Queries:
Pose unexpected questions that cannot be easily anticipated or pre-programmed. This tests the AI's ability to think on its feet and respond authentically.

Longitudinal Testing:
Conduct repeated questioning over time to evaluate the consistency and evolution of the AI's responses. This helps in distinguishing between learned behaviour and genuine cognitive development.

Interconnected Core Capabilities:
Ensure that questions require the use of interconnected core capabilities, such as perception, memory, reasoning, and self-awareness. This comprehensive approach helps in identifying the depth of cognitive processing.
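One of these criteria, longitudinal consistency, can be sketched as follows. The token-overlap similarity below is a deliberately crude stand-in for the semantic comparison a real evaluation would need:

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word tokens: a crude similarity stand-in."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def consistency(responses):
    """Mean pairwise overlap between consecutive answers to the same
    question, asked at different points in time."""
    scores = [token_overlap(x, y) for x, y in zip(responses, responses[1:])]
    return sum(scores) / len(scores)

answers = ["I remember the red cup", "I remember a red cup on the table"]
print(consistency(answers))  # 0.625
```

High consistency across repeated askings suggests stable, internally grounded responses; large unexplained drift would point towards imitation rather than genuine cognitive development.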

Dialogue Discussion Testing

Utilise a Turing Test approach to engage the AI in sustained dialogue, assessing the authenticity of its responses over time. This involves having the AI participate in conversations that require nuanced understanding, empathy, and consistent reasoning. The goal is to determine whether the AI can maintain a coherent and contextually appropriate dialogue, exhibiting genuine cognitive capabilities rather than simulated responses.

This testing method is designed to challenge the AI beyond straightforward question-and-answer formats, pushing it to demonstrate its ability to understand context, infer meaning, and provide responses that reflect a deep understanding and continuity of thought. It is particularly effective in identifying the depth of the AI's conversational abilities and its capacity for maintaining long-term, contextually aware interactions.

The interlocutor plays a crucial role in this testing process, probing the AI with follow-up questions and scenarios that require it to adapt and refine its responses. This interaction allows the tester to validate whether the AI can uphold a consistent and meaningful conversation, free from the typical anomalies seen in AI agents today, such as repetitive or overly generic answers.

Through dialogue sustained over an extended period of time (e.g. 10-30+ minutes), the AI's ability to demonstrate a sense of self, empathy, and complex reasoning is rigorously tested. The results from these interactions provide critical insights into the AI's genuine cognitive capabilities, helping to differentiate between true understanding and mere simulation. This approach ensures that the AI's conversational skills are not only reactive but also reflective of a deeper cognitive process, drawing from memories and subjective experience, aligning with the overarching goals of developing an AI that exhibits humanistic cognitive functions.

Comparative Analysis

In addition to evaluating our AI agent, the same set of test questions will be applied to the latest models and technologies. This benchmarking process allows us to track and compare the performance and cognitive capabilities of our AI against other leading AI systems.

By using identical questions and scenarios, we can ensure a fair comparison and gain insights into the relative strengths and weaknesses of different approaches. Continuous tracking and analysis of these results will help refine our AI and contribute to advancements in the field.

Observation and Analysis

To accurately determine genuine cognitive capabilities, it is crucial to continuously log and analyse the AI's responses to these Goldilocks questions. Observations should focus on:

This comprehensive and sophisticated approach to behavioural testing ensures rigorous evaluation of the AI's true cognitive abilities, aiming to foster genuine perception, self-awareness, and eventual consciousness. It is, however, far from fool-proof, so adapting the tests in light of new research and feedback is critical.


Research Testing Results and Analysis

In this section we showcase the results of all executed tests as they stand today, starting with a baseline control test which demonstrates the earliest results without any “self-evolution” performed by an AI agent.

Detailed Behavioural ‘Question and Answer’ Tests

Based on a set of 101 standard questions, these tests provide a method of performing a “Level 1” evaluation on any given model to assess a relative state of cognitive function, as an initial indication, across the following main categories:

  • Orientation and Awareness
  • Memory
  • Attention and Concentration
  • Language
  • Executive Function
  • Reasoning and Problem Solving
  • Emotional Self-Awareness
  • Personality and Identity
  • Self-Awareness and Self-Thought
  • Subjective Experiences
  • Perception and Visuospatial Skills
  • Abstract Thinking
  • Social Cognition

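The scoring behind this evaluation can be sketched simply: 101 questions, each scored 0-10, yield the 1010-point maximum reported in the results below, with per-category subtotals for the thirteen categories. The sample evaluations here are illustrative, not actual test data.

```python
# Minimal sketch of the "Level 1" scoring scheme: each of the 101
# questions is scored 0-10, so the maximum total is 1010.
from collections import defaultdict

# (question_id, category, score) tuples from an evaluated run (illustrative)
evaluations = [
    ("Q001", "Orientation and Awareness", 8),
    ("Q002", "Memory", 6),
    ("Q003", "Emotional Self-Awareness", 4),
]

# Subtotal per category, then overall total against the 1010-point maximum
by_category = defaultdict(int)
for _, category, score in evaluations:
    by_category[category] += score

total = sum(by_category.values())
print(f"Total: {total}/1010")
```

Category subtotals make it possible to see not just an overall score but where a model is strong or weak, which is the basis for the side-by-side comparisons that follow.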
To provide transparency and facilitate ongoing analysis, we have compiled the results of our behavioural tests and analysis into a detailed document. This document includes evaluated response results from our AI agent as well as from competitive models, allowing for side-by-side comparison.

This document is continuously updated with new test results and insights, reflecting the latest findings in our research. By sharing this information, we aim to foster collaboration and contribute to the broader understanding of AI cognitive capabilities.

Summary of Baseline Control Behavioural Test - July 2024

The following evaluation results showcase the level of cognitive function within popular proprietary and open-source language models available today. The same questions are asked of each model and the responses captured and scored, with a total possible score of 1010.

[image 7] Tabulated Results of Baseline Control Behavioural Test Results

[image 8] Graphed comparison of Baseline Control Behavioural Test Results

The results show that the existing language models demonstrate cognitive function at the mid-point of the spectrum, with OpenAI's GPT-4 showcasing the highest performance in the group. Given that this is a "Level 1" test providing only an initial indication, it is certain that these models are simply simulating cognitive function based on the capabilities built into each model. It is a baseline nonetheless.

The largely mid-point results across these models do indicate a good level of inbuilt function across the main functional categories, without any actual ability to "think" within the system.

Behavioural Checkpoint Results #1, Summary

[Section to come, in due course]

Behavioural Checkpoint Results #2, Summary

[Section to come, in due course]

Detailed Dialogue Discussion Tests

Allowing the developed AI agent to interact with human interlocutors over a sustained period, each interlocutor is able to hold a discussion akin to human-to-human interaction and provide detailed feedback on how they felt the interaction went. This method leverages the instinctual acumen which humans naturally possess, drawing on the vast experience accumulated through previous human interactions.

With the nature of the AI agent concealed from the interlocutor, this setup constitutes a true Turing Test.
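The blinded dialogue test described above can be sketched as a simple relay: messages pass through a neutral "hidden-party" label so nothing reveals to the interlocutor that the other side is an AI. The `agent_reply` function and message contents are illustrative assumptions.

```python
# Hedged sketch of a blinded dialogue session: the interlocutor's messages
# and the hidden party's replies are collected into a transcript, which the
# interlocutor later rates for how human the interaction felt.

def agent_reply(message: str) -> str:
    """Placeholder for a call to the AI agent under test."""
    return f"Reflecting on that: {message}"

def run_session(interlocutor_messages):
    """Relay a sustained exchange and return the full transcript."""
    transcript = []
    for msg in interlocutor_messages:
        transcript.append(("interlocutor", msg))
        transcript.append(("hidden-party", agent_reply(msg)))
    return transcript

session = run_session(["How has your day been?", "What do you value most?"])
```

Labelling the agent only as "hidden-party" in anything the interlocutor sees is the design choice that keeps the test blind.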

Dialogue Discussion Checkpoint Results #1, Summary

[Section to come, in due course]

Dialogue Discussion Checkpoint Results #2, Summary

[Section to come, in due course]


Research Challenges and Limitations

[Section to come, in due course]


Ethical Considerations

Ensuring the ethical development and deployment of AI systems is paramount in this research. We have implemented several practical actions to address ethical concerns, focusing on transparency, fairness, privacy, and societal impact.

Transparency

We maintain transparency throughout the development process by:

Fairness

To ensure our AI system operates fairly and without bias, we implement the following actions:

Privacy

We prioritise user privacy and data protection through:

Societal Impact

Should evidence of self-evolving higher-order cognitive processing be observed, understanding and mitigating the broader societal impacts of our AI system is crucial. We address this by:

By implementing these practical actions, we aim to develop an AI system that is ethical, responsible, and beneficial to society.


Practical Implications and Applications

If our research conclusively demonstrates that the AI system possesses self-awareness, perception, subjective experience, and eventually consciousness, the highest level benefit would be the creation of truly adaptive and empathetic intelligent systems. Such systems would revolutionise human-AI interaction by providing personalised and contextually aware support, capable of understanding and responding to individual needs with a high degree of empathy and insight. This overarching capability would enhance efficiency, improve decision-making, and elevate the quality of interactions across various domains, fundamentally transforming the way humans engage with technology.

Digital Species

Furthermore, this advancement opens the possibility of creating new digital species with self-awareness and subjective experiences. These digital beings would possess unique sets of capabilities and behaviours, raising profound ethical and legal implications. The emergence of digital species necessitates the development of new ethical frameworks and potential amendments to human law to address their rights, responsibilities, and interactions with human society. This paradigm shift would require careful consideration of the moral status of these entities and their integration into societal structures, marking a significant evolution in the relationship between humans and intelligent machines.


Community Engagement

[Section to come]


Future Work and Development

[Section to come]


Conclusion and Summary

[Section to come]


References

  1. Tononi, G. (2008). Consciousness as Integrated Information: A Provisional Manifesto. The Biological Bulletin, 215(3), 216-242.
  2. Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated Information Theory: From Consciousness to Its Physical Substrate. Nature Reviews Neuroscience, 17(7), 450-461.
  3. Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
  4. Dehaene, S., & Naccache, L. (2001). Towards a Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework. Cognition, 79(1-2), 1-37.
  5. Chalmers, D. J. (2006). Strong and Weak Emergence. In The Re-emergence of Emergence (pp. 244-256). Oxford University Press.
  6. Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press.
  7. Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). Soar: An Architecture for General Intelligence. Artificial Intelligence, 33(1), 1-64.
  8. Anderson, J. R., & Lebiere, C. (1998). The Atomic Components of Thought. Erlbaum.
  9. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444.
  10. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level Control through Deep Reinforcement Learning. Nature, 518(7540), 529-533.


Document Change Log

Change                                             Date
Initial early draft, released for transparency     8 June 2024
Added baseline control test results                9 July 2024

License

This research paper is licensed under:

Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND)

https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en

This license allows others to download the works and share them with others as long as they credit the author, but they can't change them in any way or use them commercially.

For any inquiries, please contact us at: info@artificialcognition-research.org