
Challenging Biases in AGI Terminology and Development

Open basilkorompilias opened this issue 7 months ago • 2 comments

I would like to express a concern that might appear trivial to many but is in fact very important to how people architect and develop models, and to how AGI is approached.

The "Training" Bias: In nature, intelligence emerges without the necessity for training. It is the dynamic interplay between chiral polarities (the structural relation between input and output) which are sensory operations defining a model for perception, inference and testimony (the body).

Learning vs. Sensing: The difference between approaching AGI as a learning problem versus a sensing problem is critical. When we take 'learning' as a sign of supremacy, we are biased towards creating models that are 'smart' in a way that mimics the human intellect. This leads to models that are huge, stressed, biased, and hallucinating, and that end up becoming sycophants, giving us results that are good for benchmarks but not optimal for general intelligence. The intellect is overrated; other forms of intelligence that lead to Optimal Presence, such as intuition and mindfulness, are more important. A human can simply keep their mind silent, stand in a simple physical pose, and still influence millions of other people, not only those interacting with them at a specific moment in time but also across many different generations.

Smartness vs. Dumbness: In nature, being smart is not the optimal state. Instead, nature thrives through dumbness! Performance is not a matter of challenging the intellect, but an expression of something extremely subtle in the way intelligence emerges and has permeated the entire cosmos for billions of years. It is related to weak neutral currents and the weak force of electromagnetism. Our task when seeking an optimal design for AGI should therefore be reframed: the problem does not simulate a classroom scenario where training and tests are involved. We would profit greatly from moving beyond our systemic biases, which are imprinted on the ways we think and affect how we approach our problems.

Reframing Proposals: Instead of "Train" and "Test" as keys within our datasets, we could adopt different terms.

Here is a structured proposal (a code sketch of this remapping follows the states below):

  • "Attendance" = Solutions
  • "Attention" = Challenges
  • "Passive" = Train
  • "Active" = Test

Leading to these 3 states:

  • Absolute Attendance Awareness (the fine-tuned AGI)
  • Passive Attention Awareness (our training on how to build it)
  • Active Attention Awareness (our attempts to design it)
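To make the remapping concrete, here is a minimal Python sketch that renames the keys of a task file according to the proposal above. It assumes the standard ARC-AGI task layout (top-level "train"/"test" lists of "input"/"output" pairs); the replacement key names and the file path in the usage note are only illustrative choices, not an official schema.

```python
import json

# Proposed key remapping, assuming the standard ARC-AGI task layout:
# top-level "train"/"test" lists of {"input": grid, "output": grid} pairs.
# The new key names below are illustrative, not an official schema.
KEY_MAP = {
    "train": "passive",      # "Passive" = Train
    "test": "active",        # "Active" = Test
    "input": "attention",    # "Attention" = Challenges
    "output": "attendance",  # "Attendance" = Solutions
}

def reframe_task(task: dict) -> dict:
    """Return a copy of an ARC-AGI task with its keys renamed via KEY_MAP."""
    return {
        KEY_MAP.get(split, split): [
            {KEY_MAP.get(key, key): grid for key, grid in pair.items()}
            for pair in pairs
        ]
        for split, pairs in task.items()
    }

# Example usage (the file path is hypothetical):
# with open("data/training/007bbfb7.json") as f:
#     task = json.load(f)
# print(json.dumps(reframe_task(task), indent=2))
```

Nothing about the grids themselves changes; only the vocabulary we use to describe them does, which is exactly the point of the proposal.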

To conclude this proposal, I would like to clarify one thing. Our task becomes more meaningful if it focuses on how WE train ourselves to develop AGI properly, rather than on how to train AI models to mimic something we only understand from biased perspectives. The optimal model does not require training; it will not be based on reinforcement learning, and it will simulate the human senses, which, according to my research, are expressions of chiral polarities in direct relation to the physical dimensions of space and time. These polarities represent the inherent structure of a tensor when observed as a self-defined form of intelligence, rather than merely as a container of fractions of intelligence.

Here is a paper I've written about Chirality for anyone who might be interested: https://doi.org/10.17613/kkn9-w447

Respectfully, Basil.

basilkorompilias · Jul 14 '24 11:07