I used GPT-4 (the paid one) for a couple of months to evaluate it.
And yes, it has some impressive capabilities.
It can provide a great deal of good data on things it has been thoroughly trained in – medical knowledge appears to be one of those areas.
It can also provide a great deal of inaccurate and erroneous data at times – quizzing it on aviation knowledge that is obscure to most people, for example, turned up plenty of errors.
One of the biggest problems I have with it is that it does not distinguish for you between the things it has good, hard data on and the things it is guessing at or simply inventing to answer your question.
Thus I consider the outputs to be always potentially unreliable.
Can people reason? How do we measure that objectively?
Do we even understand the processes? Try defining “thought” or “idea” using only objectively measurable parameters.
No, it cannot reason.
In Wisconsin, if your car is less than 7 years old and it was damaged such that the cost of repairs exceeds 70% of its Fair Market Value before the wreck, it is declared a salvage vehicle under the section of the Motor Vehicle law that lists all of the legal definitions.
If your car is declared salvage, regardless of how it was damaged, you need to pay a fee for a salvage title. Furthermore, to operate the car you need to pay to have it repaired – even if the insurance company won’t pay for this – to the point where an inspector at the DMV says you can drive it.
There is an important qualification about the under-7-years rule in Wisconsin. The cutoff may be arbitrary, but the legislative reasoning is that a “good used car” damaged past 70% of its cash value is probably a dangerous wreck if you patch it up. A much older car isn’t worth very much, and even superficial damage can cost more to repair than that amount.
There is another section of the Wisconsin State Statutes, on the salvage title, stating that if an insurance claim exceeding 70% of the cash value is paid out (without the qualification that the car is under 7 years of age), the insurer shall report the vehicle to the DMV as salvage.
How do I know this? Because the insurance company of the driver who sideswiped my 1997 Camry – in a gated lot where it was parked, unoccupied – is offering me below the cost of repair (but still above 70% of their appraisal of its cash value, for some reason) and below that appraisal if I keep the car to drive around with a prominent dent in a rear door. Something tells me that some Nimrod will report this to my insurance company, whereupon another Nimrod will report it to the DMV as a salvage vehicle, even though the car is well past the age in the statutory definition.
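To make the apparent conflict concrete, here is a minimal Python sketch of the two tests as I read them. The thresholds are my paraphrase of the statutes, not the statute text, and the dollar figures are invented for illustration:

```python
# Toy encoding of the two provisions described above. The 70% thresholds
# and the 7-year cutoff come from my reading of the statutes; the FMV and
# claim amounts below are hypothetical.

def meets_salvage_definition(age_years, repair_cost, fmv):
    """Definitions section: under 7 years old AND repairs exceed 70% of FMV."""
    return age_years < 7 and repair_cost > 0.70 * fmv

def insurer_must_report(claim_paid, fmv):
    """Reporting section: claim paid exceeds 70% of FMV -- no age qualifier."""
    return claim_paid > 0.70 * fmv

# A 1997 Camry is decades past the 7-year cutoff.
age, fmv, claim = 26, 2000.00, 1500.00
print(meets_salvage_definition(age, claim, fmv))  # False: too old to fit the definition
print(insurer_must_report(claim, fmv))            # True: the reporting rule still fires
```

As written, the reporting duty can fire for a car that, by definition, cannot be a salvage vehicle – which is exactly the contradiction at issue.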
So I asked the you-get-what-you-pay-for (i.e., free) version of the Bing AI whether there was a contradiction between the law saying when and how an insurance company must report a car as a salvage vehicle and the legal definition of a salvage vehicle.
Bing AI answered by quoting the section on the definition of a salvage vehicle, and by doctoring the insurance-company reporting law by adding the qualification “for a Salvage Vehicle.”
I suppose if I paid to consult an attorney, said human expert would say, “Everyone understands that the reporting requirement only applies to cars meeting the definition of a salvage vehicle.” But sometimes attorneys guess wrong, and you end up with a big headache.
I convinced my State Representative that the law as written is an Appeals Court decision waiting to happen, and he took what I emailed seriously enough that he is having a member of his staff examine it.
But the AI? Fuggedaboutit. If it were supposed to be smarter than a paint-by-numbers attorney, don’t you think it would have said, “The reporting law, as written, appears to contradict the definition of Salvage Vehicle, but the case Refurbish vs DMV decided that the definition in the statute applies, and an insurer is not required to report a vehicle older than 7 years as salvage”?
Scott Adams does a daily livestream on YouTube (among other platforms), and periodically comments on AI. His current assessment is that AI is in the demo-ware phase; it shows what the software might do if it worked, which it doesn’t. He tests ChatGPT, Grok and others in really involved ways, and concludes that they’re useless. You can’t rely on them for factual statements, because they lie to you and there’s no way to stop that. You can’t get an opinion out of them, because they’re programmed to not form or offer opinions. They can compose grammatically perfect prose with perfect spelling, but none of it can be considered good writing. They can only compose short pieces of uninteresting music. And they have no sense of humor. I’ve verified most of those shortcomings myself, along with finding that they suck at mathematics, and when they don’t know something, they bullshit you for a while, the way some people do to deflect your attention from the fact that they don’t know what they’re talking about.
In short, it’s as intelligent as today’s public high school graduates.
What baffles me is why AI creators concentrate on emulating the undisciplined human mind, and then fail to append to it the real power of computers: the ability to test the validity of its knowledge by rigorous application of deductive logic. That would be simple to enforce in software, and would produce an AI that emulated a highly disciplined mind, one with access to more information than any human mind could ever process. That kind of setup has the potential to not only form concepts, but prove their truth, and then proceed to unite all of its concepts into a comprehensive framework that would give us access to a greatly expanded and accurate knowledge of reality.
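A minimal sketch of what that gating could look like – with a hand-built toy fact base and rule set standing in for the “rigorous deductive logic,” since nothing here resembles a real translation from free text into logic:

```python
# Illustrative only: gate a model's claim behind a deductive consistency
# check. The facts, rules, and contraries are hand-written toys; a real
# system would need a genuine logic and a way to parse model output into it.

FACTS = {"penguin(tweety)"}
RULES = [("penguin(tweety)", "flightless(tweety)")]   # penguins are flightless
CONTRARIES = {"flies(tweety)": "flightless(tweety)"}  # mutually exclusive claims

def closure(facts, rules):
    """Forward-chain the rules until nothing new is derivable."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def accept(claim):
    """Emit a claim only if no known contrary of it is deducible."""
    known = closure(FACTS, RULES)
    return CONTRARIES.get(claim) not in known

print(accept("flies(tweety)"))  # False: the deductive check blocks the bogus claim
print(accept("swims(tweety)"))  # True: nothing known contradicts it
```

The hard part, of course, is everything this sketch waves away: getting reliable formal facts and rules out of natural language in the first place.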
So what do you have in mind?
Some kind of formal theorem-proving capability, as has long been talked about in connection with verifying computer software?
Some kind of logical inference system such as Prolog, which at one time was the Great Hope for AI?
A symbolic manipulation system such as Maple/Mathematica?
Maybe I am asking rhetorical questions, because this kind of finesse in AI has in large measure been abandoned in favor of brute-force methods?
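For the theorem-proving flavor in particular, the classical machinery is small enough to sketch. Here is a miniature propositional resolution prover in Python – purely didactic, nothing like what verifying real software would require:

```python
# A miniature propositional resolution prover, for flavor only. Clauses
# are frozensets of literals; a literal is a string, with "~" marking
# negation. Real program verification needs vastly richer logics.

def resolve(c1, c2):
    """Return every resolvent of two clauses (cancel a complementary pair)."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            out.append((c1 - {lit}) | (c2 - {neg}))
    return out

def entails(kb, goal):
    """Refutation: KB entails goal iff KB plus the negated goal derives
    the empty clause."""
    neg_goal = goal[1:] if goal.startswith("~") else "~" + goal
    clauses = {frozenset(c) for c in kb} | {frozenset([neg_goal])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True       # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False                  # nothing new derivable: not entailed
        clauses |= new

# KB: p, and p implies q (clause form: {p}, {~p, q}); then ask about q.
print(entails([["p"], ["~p", "q"]], "q"))  # True
print(entails([["p"]], "q"))               # False
```

This grinds through every clause pair, which is fine for toys and hopeless at scale – part of why classical theorem proving is hard to apply broadly.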
AI is notorious for making stuff up, like citing law cases which don’t exist, or hilariously misdescribing the plots of science fiction books. Anyone who relies on it for factual information is a fool, IMHO.