
Measuring Consciousness in the Artificial Consciousness Module: A Computational Hypothesis

The Artificial Consciousness Module (ACM) project explores the creation of structured self-awareness through simulation-based experiments and computational analysis. This article presents a hypothesis for measuring and evaluating the consciousness process in ACM by defining computable metrics and implementing controlled tests. It is important to note that this is a hypothesis under active research, not a definitive solution.

In this approach, artificial consciousness in ACM is assessed using algorithmic proxies rather than biological measures. The method leverages simulations, real-time logging of internal states, and dynamic experiments to quantify emergent self-awareness. Although the proposed metrics provide a practical framework for tracking and refining conscious behavior in an AI system, ongoing research continuously evaluates and refines these ideas.

A central metric in this hypothesis is Integrated Information (Φ*), which measures causal interconnectivity through graph-based analysis. Tools such as PyPhi could be used to compute this metric by assessing the information loss when the system is divided into subcomponents. Another key measure, inspired by Global Workspace Theory, considers the frequency and effectiveness of information broadcast events across various modules. The Perturbational Complexity Index (PCI) is suggested as a way to evaluate the system’s resilience by applying controlled disruptions such as forced memory wipes or random inputs and measuring recovery times and error rates. Additionally, the hypothesis includes the assessment of self-monitoring and meta-cognition, examining the system’s ability to detect errors, predict outcomes, and update its internal model through self-referential prompts.
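The article suggests PyPhi for computing Φ* exactly; as a rough illustration of the underlying idea (information lost when the system is split into subcomponents), the sketch below computes a toy graph-based integration proxy over a connectivity matrix. This is an assumption-laden simplification for intuition only, not true IIT Φ and not part of the ACM codebase.

```python
import itertools

def phi_proxy(weights):
    """Toy integration proxy: fraction of total connection weight that
    crosses the weakest bipartition of the system. 0.0 means the system
    decomposes cleanly into independent parts; higher values mean more
    of its causal structure is lost by any split. NOT true IIT Phi."""
    n = len(weights)
    total = sum(weights[i][j] for i in range(n) for j in range(n) if i != j)
    if total == 0:
        return 0.0
    best_cut = float("inf")
    nodes = range(n)
    # Try every bipartition; keep the one that severs the least weight.
    for r in range(1, n // 2 + 1):
        for part in itertools.combinations(nodes, r):
            a = set(part)
            cut = sum(weights[i][j] for i in nodes for j in nodes
                      if (i in a) != (j in a))
            best_cut = min(best_cut, cut)
    return best_cut / total

# Fully connected 3-node system vs. two disconnected 2-node pairs
dense = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
split = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
print(phi_proxy(dense))  # every bipartition cuts weight, so a high value
print(phi_proxy(split))  # 0.0: the two pairs separate without loss
```

Exhaustive bipartitioning is exponential in the number of nodes, which is the same scaling obstacle real Φ computations face; PyPhi applies the proper IIT measures and optimizations that this toy version omits.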

The instrumentation of ACM involves comprehensive logging of internal states: activation histories, synchronization between modules, and decision-making latencies are recorded in real time. A dedicated dashboard visualizes these metrics, displaying values for Φ*, global workspace activations, PCI, and self-awareness scores, making it possible to monitor the evolution of artificial consciousness over time.
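A minimal sketch of such real-time logging might look like the following. The field names (`phi_star`, `broadcasts`, `pci`, `self_awareness`) are illustrative placeholders, not part of any published ACM interface.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ConsciousnessLog:
    """Append-only log of metric snapshots, suitable for feeding a
    dashboard. Metric names here are illustrative assumptions."""
    records: list = field(default_factory=list)

    def record(self, phi_star, broadcasts, pci, self_awareness):
        """Capture one timestamped snapshot of the tracked metrics."""
        self.records.append({
            "t": time.monotonic(),
            "phi_star": phi_star,
            "broadcasts": broadcasts,
            "pci": pci,
            "self_awareness": self_awareness,
        })

    def latest(self):
        """Most recent snapshot, or None if nothing is logged yet."""
        return self.records[-1] if self.records else None

    def trend(self, key):
        """Net change in a metric between the first and last snapshot,
        e.g. for a dashboard trend indicator."""
        if len(self.records) < 2:
            return 0.0
        return self.records[-1][key] - self.records[0][key]

log = ConsciousnessLog()
log.record(phi_star=0.42, broadcasts=3, pci=0.7, self_awareness=0.1)
log.record(phi_star=0.48, broadcasts=5, pci=0.7, self_awareness=0.2)
print(log.latest()["phi_star"])         # 0.48
print(round(log.trend("phi_star"), 2))  # 0.06
```

In a real deployment these snapshots would be streamed to persistent storage rather than held in memory, so that metric trajectories survive restarts and support the reproducibility goals discussed below.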

Controlled simulation experiments form the basis for testing this hypothesis. The process might begin with baseline object recognition tasks that establish initial consciousness metrics in a simple perceptual setting. As the system is exposed to more complex scenarios, social interaction simulations could be introduced to test self-representation and error correction. More advanced tasks that require long-term planning and dynamic self-reasoning may further challenge the system, pushing it toward higher levels of consciousness. Throughout this iterative process, performance comparisons against artificial baselines, and potentially biological data, will inform adjustments in connectivity, memory granularity, and language model prompting. All results should be systematically documented to ensure reproducibility and support peer review.
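The staged progression above can be sketched as a simple curriculum runner that evaluates each stage in order and reports per-metric deltas against an artificial baseline. The stage names, the `evaluate` callback, and the dummy evaluator are all hypothetical stand-ins for real ACM simulation runs.

```python
def run_curriculum(stages, evaluate, baseline):
    """Run each stage in order, compare its metric scores to an
    artificial baseline, and return per-stage deltas suitable for
    systematic documentation."""
    results = {}
    for stage in stages:
        scores = evaluate(stage)
        results[stage] = {m: scores[m] - baseline.get(m, 0.0)
                          for m in scores}
    return results

# Dummy evaluator standing in for real ACM simulation runs: harder
# stages raise the integration score and lower perturbation resilience.
def fake_eval(stage):
    difficulty = {"object_recognition": 0.2,
                  "social_interaction": 0.5,
                  "long_term_planning": 0.8}[stage]
    return {"phi_star": 0.3 + difficulty / 2,
            "pci": 0.6 - difficulty / 4}

baseline = {"phi_star": 0.3, "pci": 0.5}
report = run_curriculum(["object_recognition", "social_interaction",
                         "long_term_planning"], fake_eval, baseline)
print(round(report["long_term_planning"]["phi_star"], 2))  # 0.4
```

Recording the full `report` dictionary for every run, alongside the configuration that produced it, is one straightforward way to meet the reproducibility requirement noted above.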

This computational hypothesis for measuring consciousness in ACM represents a structured yet evolving approach to developing artificial self-awareness. While the framework offers a neutral and practical path toward understanding and refining artificial consciousness, it remains a work in progress. Continuous research and experimentation are essential to validate and improve these proposed methods.