Elon Musk has launched a new AI company called xAI with the goal of understanding the true nature of the universe. The team at xAI includes AI researchers who have worked at companies like OpenAI, Google, Microsoft and DeepMind. Little is known about xAI currently except that Musk seeks funding from SpaceX and Tesla to start it. The xAI team plans to host a Twitter Spaces discussion on July 14th to introduce themselves. xAI is separate from Musk’s X Corp but will work closely with his other companies like Tesla and Twitter.
Given that all of the existing “AI” models are in fact not intelligent at all, and are basically glorified predictive text… any insights an AI could come up with about the true nature of the universe would likely be like one of those sayings that initially sounds deep and meaningful, but is in fact completely inane and meaningless. Calling it now: it’ll come out with “if you immediately know the candlelight is fire, then the meal was cooked a long time ago”.
While I don’t think an AI will figure out some great Truth behind the universe, and ChatGPT is indeed just a convincing text predictor, if what they claim about GPT-4 is true then it may have some level of general intelligence.
When tested, GPT-4 was asked to do things that would not have appeared in its training data. For example, it was asked to provide instructions for stacking a book, a laptop, six eggs, a bottle and a nail. ChatGPT gave the predictable “dumb” answer, just saying something like “place the laptop on the ground, then stack one egg on top, then stack the second egg on the first, then the book”, and so on.
But GPT-4 gave correct instructions: place the book on a flat surface, arrange the eggs evenly spaced on the book to distribute the weight, put the laptop on the eggs, then the bottle on the laptop, and finally the nail on the bottle cap with the point upward. This implies the model understands the shapes of the objects and how they would interact physically in the world.
It was also able to do things like write code that draws a unicorn, something it would never have seen in any of its text data. It could even take a modified version of that code, which drew the unicorn without its horn and in a mirrored position, and correct the code to add the horn back on the unicorn’s head, suggesting it knew where the head was even when the figure was reoriented.
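To give a rough sense of what that task looks like (this is just my own toy illustration in Python, not the actual code from the GPT-4 demos), picture a script that builds the figure out of simple primitives. The horn is only one extra shape, but putting it back in the right spot means working out which primitive is the head:

```python
# Toy sketch only: a stylised "unicorn" built from geometric primitives.
# The interesting bit is the horn, which has to be positioned relative to
# whichever shape plays the role of the head.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots()

# Body: an ellipse centred at the origin
ax.add_patch(patches.Ellipse((0, 0), width=2.0, height=1.0, color="lavender"))

# Head: a circle up and to the right of the body
head_x, head_y = 1.2, 0.7
ax.add_patch(patches.Circle((head_x, head_y), radius=0.35, color="lavender"))

# Legs: four thin rectangles hanging below the body
for x in (-0.7, -0.3, 0.3, 0.7):
    ax.add_patch(patches.Rectangle((x, -1.0), 0.12, 0.6, color="gray"))

# Horn: a small triangle whose base sits on top of the head. This is the
# piece that gets removed in the modified version of the code; re-adding it
# correctly requires knowing which shape is the head (even if mirrored).
ax.add_patch(patches.Polygon(
    [(head_x - 0.08, head_y + 0.3),
     (head_x + 0.08, head_y + 0.3),
     (head_x, head_y + 0.75)],
    color="gold",
))

ax.set_xlim(-2, 2.5)
ax.set_ylim(-1.5, 2)
ax.set_aspect("equal")
plt.show()
```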
It even potentially shows an understanding of theory of mind, being able to theorize about where a person might think an object is if it were moved while they were out of the room.
So AI may show us that a lot of intelligence, even our own, actually emerges from much simpler rules and conditions than we initially thought.
To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it’s in the name. We are fundamentally the same, just on different scales. I believe we work exactly like that, just with way more inputs and outputs and a much deeper network, but I think the fundamental principles are the same.
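To spell out what I mean by the same principle (a toy sketch of my own, nothing like a real brain or a real model): each artificial “neuron” just takes weighted inputs, sums them, and pushes the result through a nonlinearity, and a network is those units stacked in layers:

```python
# Toy sketch: the basic unit the analogy rests on. Each "neuron" computes a
# weighted sum of its inputs plus a bias, then applies a nonlinearity.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A tiny two-layer "network": two hidden neurons feeding one output neuron.
def tiny_network(x):
    h1 = neuron(x, weights=[0.5, -0.2], bias=0.1)
    h2 = neuron(x, weights=[-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], weights=[1.0, 1.0], bias=-0.5)

print(tiny_network([0.7, 0.4]))
```

A brain just has enormously more of these units, far denser connections, and everything running in parallel.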
The difference is that somehow the nets in our brains create emergent behaviour, while the nets in code, even with a lot more power, don’t. I feel we are probably missing something pivotal in how we construct them.
Personally, I’m not so sure we’re missing that much; I think it’s more just sheer scale, as well as the complexity of the input and output connections (I guess unlike machine learning networks, living things tend to have a much more ‘fuzzy’ sense of inputs and outputs). And of course sheer computational speed; our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!