I know current learning models work a little like neurons, but why not just make a sim that works exactly the way we understand neurons to work?
Because we don’t understand it.
Do you need to understand it in order to try it out and see what happens? I’ve seen lots of experiments with small colonies of neurons - machines that move using the organic part to navigate, or neurons being made to play games (still waiting on part 2 of the Doom one). Couldn’t that be scaled up to human brain size, scanned to see what kind of activity is going on, and compared to a real human brain?
We need to understand what we’re simulating in order to simulate it. We know the structure of neurons at a simple level, and we know how emergent systems represent more complex concepts… but we don’t know how the links that build that system are constructed.
Even assuming we can model the same number of (simple machine learning model) neurons, it’s the connections that matter. The number of possible connections in the human brain is literally greater than the number of atoms in the universe.
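For context on what a “(simple machine learning model) neuron” actually is, here’s a minimal sketch - not from any particular library, just the textbook weighted-sum-plus-squashing-function idea:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One "neuron" in a typical ML model: a weighted sum pushed through a
    squashing function. No ion channels, no neurotransmitters, no growth -
    all the "learning" is just nudging the weights and the bias."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three inputs and three made-up weights, purely for illustration.
print(artificial_neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```

That’s the whole unit - everything interesting in these models lives in how huge numbers of them are wired together, which is exactly the connections point.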
I just want to make sure one of your words there gets emphasized: “possible”. (Edit: it’s also just wrong, as I explain below.)
Yes - the value of 86 billion choose two is insanely huge… one might even say mind-bogglingly huge! However, in actuality, we’ve got about 100 trillion neural connections given our best estimates right now. That’s about a thousand connections per neuron.
It’s a big number, but one we could theoretically simulate - it also must be said that simulating a brain can’t be technically impossible… We’ve each got a brain, and there are about eight billion of us built out of an insignificant portion of the mass+energy available terrestrially - eventually (unless we drive ourselves extinct first) we’ll start approaching the brain’s information storage density - we’re pretty fucking clever so we might even exceed it!
Edit for math:
So I did a thunk and 86 billion choose 2 actually isn’t that big. I was thinking of 86 billion factorial, but choose 2 is really just about half of 86 billion squared - and even if you count ordered pairs and allow self-referential synapses, you only get the full 86 billion squared.
Apparently this “greater than the number of atoms in the universe” line came from famously incorrect shame of Canada Jordan Peterson… and, uh, he’s just fucking wrong (so math can be added to the list of things he’s bad at - and that’s already a long list).
Yea so - 86 billion squared ≈ 7 × 10^21, which is an impressively large number… but not even close to 10^80 impressively large.
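Showing the work (quick Python sketch - the neuron and synapse counts are the usual ballpark estimates, and 10^80 is the usual back-of-envelope figure for atoms in the observable universe):

```python
from math import comb

NEURONS = 86_000_000_000           # ~86 billion neurons (common estimate)
SYNAPSES = 100_000_000_000_000     # ~100 trillion connections (common estimate)
ATOMS_IN_UNIVERSE = 10**80         # the usual back-of-envelope figure

pairs = comb(NEURONS, 2)   # "86 billion choose 2": unordered neuron pairs
squared = NEURONS ** 2     # ordered pairs, self-connections allowed

print(f"86 billion choose 2  ≈ {pairs:.2e}")    # ~3.7e21
print(f"86 billion squared   ≈ {squared:.2e}")  # ~7.4e21
print(f"atoms in universe    ≈ {ATOMS_IN_UNIVERSE:.1e}")
print(f"actual connections per neuron ≈ {SYNAPSES / NEURONS:.0f}")  # ~1163
```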
I’ve been quoting Jordan Peterson for years?! Ahhh fuck.
To clarify:
We don’t even know how human intelligence/consciousness works, let alone how to simulate it.
But we know how an individual neuron works.
The issue with OP’s idea is that we don’t know how to tell a computer what a bunch of neurons do to create an intelligence/consciousness.
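To put a bit of flesh on “we know how an individual neuron works”: here’s a bare-bones leaky integrate-and-fire neuron, about the simplest textbook spiking model there is (the parameters are generic textbook-ish values, not fitted to anything real). Simulating one of these is trivial - knowing how to wire 86 billion of them so the network produces intelligence is the part nobody knows.

```python
def leaky_integrate_and_fire(input_current, dt=0.1, tau=10.0,
                             v_rest=-65.0, v_threshold=-50.0, v_reset=-65.0):
    """Bare-bones leaky integrate-and-fire neuron (voltage in mV, time in ms).

    The membrane voltage leaks back toward rest, input current pushes it up,
    and when it crosses threshold the neuron "spikes" and resets.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + I) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)  # record spike time in ms
            v = v_reset
    return spike_times

# Feed it a constant input of 20 (arbitrary units) for 100 ms -> regular spiking.
print(leaky_integrate_and_fire([20.0] * 1000))
```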
Heck, we barely know how neurons work. Sure, we’ve got the important stuff down, like action potentials and ion channels, but there’s all sorts of stuff we don’t fully understand yet. For example, we know the huntingtin protein is critical to neuron growth (maybe for axons?), and we know that if the gene has too many CAG repeats it causes Huntington’s disease. But we don’t know why huntingtin is essential, or how it actually affects neuron growth. We just know that cells die without it, or when it’s misfolded.
Now, take that uncertainty and multiply it by the sheer number of genes and proteins we haven’t fully figured out and baby, you’ve got a stew going.
To add to this, a new type of brain cell was discovered just last year. (I would have linked directly to the study but there was a server error when I followed the cite.)
To understand the complexity of the human brain, you need a brain more complex than the human brain.