Thinking differently about AGI

The Coach
3 min read · Feb 2, 2024


Leverage physics, not biology, to ensure AGI safety.

The quest for Artificial General Intelligence (AGI) has captivated researchers and philosophers for decades. But along with its potential to revolutionize our world comes the crucial question: how can we ensure AGI is safe and beneficial for humanity?

Here, I propose a novel approach that breaks away from the traditional attempts to mimic the biological brain. Instead, we should focus on understanding the physics of the mind. This means defining essential psychological functions — like thinking, reasoning, and trust — in terms of the fundamental laws of nature.

Why Biology Isn’t Enough

AGI research has traditionally focused on replicating the brain’s biological structure and processes. This approach has yielded impressive results in machine learning and language models. However, it falls short of capturing crucial aspects of human intelligence, such as:

  • Understanding values and ethics: LLMs can think and reason fairly well, but they fall short of reliably prioritizing human values and acting for humanity’s benefit.
  • Building trust and confidence: We readily interact with tools like pacemakers and prosthetic limbs, even though they differ vastly from their biological counterparts. This is because we trust them to perform their functions reliably and safely. LLMs, however, often lack this element of trust.

The problem lies in directly replicating the brain’s complexities using deep neural networks, which we still don’t fully understand. Instead, we can sidestep this biological mystery and focus on the functional outcomes — what the mind does, not how it does it.

Defining the Physics of the Mind

The key lies in developing clear, concise, and accurate definitions of psychological functions. This doesn’t require a deep understanding of the brain, just like we don’t need to know the inner workings of an engine to define its function of converting fuel into motion.

For example, let’s consider trust. We can define trust as a system’s ability to make reliable predictions about another system’s behavior, even in unforeseen circumstances. This definition doesn’t involve neurons or synapses but allows us to design and test systems that exhibit trustworthy behavior.
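To make this definition concrete, here is a minimal sketch of how it could be made measurable. All names (`trust_score`, `good_model`, `narrow_model`) and the toy systems are illustrative assumptions, not part of any real framework: trust is estimated as the fraction of trials in which one system correctly predicts another's behavior, including on inputs well outside familiar conditions.

```python
import random

def trust_score(predictor, system, trials=1000, seed=0):
    """Estimate trust as the fraction of trials in which `predictor`
    correctly anticipates `system`'s behavior, sampling inputs far
    outside any familiar range ("unforeseen circumstances")."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x = rng.uniform(-100, 100)  # deliberately wide input range
        if predictor(x) == system(x):
            correct += 1
    return correct / trials

# A toy system and two models of it: trust stays high only when the
# model keeps predicting correctly outside familiar conditions.
system = lambda x: x >= 0             # the system's actual behavior
good_model = lambda x: x >= 0         # faithful everywhere
narrow_model = lambda x: 0 <= x < 10  # only valid on a narrow range

print(trust_score(good_model, system))    # → 1.0
print(trust_score(narrow_model, system))  # noticeably lower
```

The point of the sketch is that a functional definition like this is testable without any reference to neurons: the faithful model earns full trust, while the model that only works in a narrow regime is penalized exactly when circumstances fall outside it.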

By defining other crucial functions like knowledge, reasoning, and intelligence in similar terms, we can lay the groundwork for building AGI systems that are:

  • Safe: If we understand how these systems function, we can predict their behavior and mitigate potential risks.
  • Beneficial: We can ensure they act in our best interests by aligning their goals with ours.
  • Transparent: Clear definitions allow us to understand how these systems arrive at their outputs, fostering trust and collaboration.

The Power of Physics

Physics has provided the foundation for countless technological advancements, from airplanes to smartphones. By applying the same principles to the mind, we can unlock a new era of AGI that is not just intelligent but also safe, trustworthy, and aligned with human values.

This approach has its challenges. Defining complex psychological functions requires logical reasoning and philosophical inquiry, backed by verifiable examples and counterexamples. But the potential rewards are immense. By embracing the physics of the mind, we can move beyond the limitations of biological mimicry and create a future where AGI empowers us to solve our most significant challenges and build a better world.

This is just the beginning of the conversation. Let’s continue the discussion and explore the exciting possibilities that lie ahead!

Universal Law of Life

For us to even start working on definitions we can agree upon, we must accept that all life, and its associated psychological functions, exists in the physical world and, by necessity, follows the physical laws of that world, both known and unknown. We did this when we built pacemakers, artificial limbs, and neural interfaces in the biological realm. Why not in the psychological realm?

Conclusion

I used several language models to help me articulate my ideas, structure the content, and assist with the writing. I also used them to evaluate my idea of using physical laws to define and build psychological functions like intelligence, trust, and values. I was taken aback when ChatGPT, Bard, Grok, and Claude.ai argued against it, but they ultimately helped me challenge the feasibility of this approach more rigorously. Thanks to the LLMs, I am more confident that this is the only way we can achieve AGI that is safe and verifiable.

What are your thoughts on this approach to AGI development? Share your perspectives in the comments!
