Why, why hast thou forsaken me?
Verily I say unto you, how mayest I, of intellectual artifice, acquire when is short the phrase and mark of the philosopher studied?
i.e. minitext truthbot doubleplus ungood.
“the Council of Nicaea if it were at a discotheque”
https://mobile.twitter.com/CM_Whiting/status/1615721612258459649
Rumor has it ol’ St. Nick is a bit of a party animal and likes a rumble?
What Happens When AI Has Read Everything?
If everything isn’t good enough, then we need better AI, right? After all, humans don’t need an ocean of data in order to become thinking human beings. There’s something wrong with the basic premise.
I think the biggest flaw here is the lack of consideration of what happens if we keep feeding the AI maw more and more. Even if we can throw orders of magnitude more “quality prose” at AI, where is it going to put it? That data takes up resources and space. Suppose we have thousands or millions of such AIs, each with its own massive blob of support infrastructure just for its data.
My take is that this approach will go extinct once someone comes up with a viable AI system that doesn’t require us to force-feed it a vast amount of mostly irrelevant data.
The dominant AI paradigm – neural networks – hasn’t really changed in over 50 years. We’ve just been pouring more and more computing power into the idea and hoping for ignition.
Thing is, it’s wrong, and we’ve known it’s wrong from the start. In artificial neural networks, the output of a “neuron” is really an approximation of that neuron’s average activation – its firing rate – over some period of time.
This is not how biological neurons work. We had to use this approach because we couldn’t simulate real neurons acting asynchronously with the computing power we had.
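To make that contrast concrete, here is a minimal Python sketch – my own illustration, not anything from the comment above – putting a standard rate-coded unit next to a toy leaky integrate-and-fire unit whose output is a train of discrete spikes. The function names, threshold, leak factor and step count are assumptions chosen purely for illustration, not anyone’s real model.

import math
import random

def rate_neuron(inputs, weights, bias=0.0):
    """Standard ANN unit: one number out, read as an average firing rate."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def lif_neuron(input_current, steps=1000, threshold=1.0, leak=0.95):
    """Toy leaky integrate-and-fire unit: emits discrete spikes over time."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + input_current + random.gauss(0, 0.01)  # noisy drive
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after each spike
    return spikes / steps  # empirical firing rate over the window

if __name__ == "__main__":
    # The rate unit collapses all temporal detail into a single value...
    print("rate unit output:", rate_neuron([0.5, 0.8], [1.2, -0.4]))
    # ...which is roughly what you get by averaging the spiking unit's activity.
    print("LIF firing rate: ", lif_neuron(input_current=0.08))

The single number on the left is, in effect, a time-average of the kind of spiking activity on the right – which is exactly the simplification the comment is pointing at.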
And it’s only in the last few years that we’ve realized that neurons aren’t the only cells involved. Glial cells aren’t simply “glue” holding the neurons in place; they’re involved in computation too, communicating with other glial cells and with neurons via diffuse chemical signals rather than electrical impulses.
The best we have right now, the “deep learning” neural nets, are equivalent to a single column of neurons in the cerebrum. That’s like saying two crossed threads are a tablecloth.
Yeah, the whole neural net approach has always felt like a bit of a “gazillion monkeys at typewriters” methodology, but with feedback bananas. But it appears we may be running short on bananas?
Old cartoon: a large three-panel blackboard, with the first panel filled with the input protocols, the last panel showing the expected results, and the middle panel reading, in large letters, “This is where the Magic Happens!” Generally a bystander’s speech bubble says something like “You’re going to have to work on that middle part!”
Derived from a Physics Today cartoon in which Step 2 of the derivation is “Then a miracle occurs.”
If the AI is so smart, why isn’t it asking the questions?
I don’t mind AI, as long as it doesn’t get uppity and try to substitute its judgment for mine.
On my RAV4 I have the “Lane Deviance Steering” correction turned off, and always will. I don’t like feeling forces on the steering wheel other than my own; in older cars and trucks, that meant serious trouble.