The way all these AIs work is:
Given a bunch of preceding words, it determines the most likely next word based on what it has seen on the internet.
This is all it is doing. (OK, it also adds some pure randomness to the chosen word.) So every time you ask a question, it will likely give you a completely different response. Given previous words with no political bias, it will choose biased words based on the internet. If you don’t see an issue here, you have never surfed the internet.
It is very cool because it sounds human. But it is not thinking; it has no opinions and no real data. That may change, but current versions are “language models,” not reality models.
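The loop described above can be sketched in a few lines. This is a toy illustration, not a real model: the vocabulary and probabilities are invented, and a real system computes them with a neural network, but the "pick the next word at random, weighted by likelihood" step is the same idea.

```python
import random

# Toy next-word distribution. In a real model these probabilities come
# from a network trained on internet text; here they are made up.
next_word_probs = {
    "dog": 0.5,
    "cat": 0.3,
    "banana": 0.2,
}

def sample_next_word(probs, temperature=1.0):
    """Pick the next word at random, weighted by its probability.

    temperature controls the injected randomness: values below 1.0
    sharpen the distribution toward the most likely word, values
    above 1.0 flatten it so unlikely words show up more often.
    """
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

# Because the choice is random, repeated runs can produce different
# words -- which is why the same question can get different answers.
samples = [sample_next_word(next_word_probs) for _ in range(5)]
print(samples)
```

Note that nothing in the loop checks whether the chosen word is *true*; the only signal is how often word sequences appeared in the training text.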
It seems that it may be time to fact check some of the CVs of the programmers involved in building these AI tools. If the AI is willing to make up source material and cite it as if it were real, from where did it possibly get that idea?
Who is building these things? George Santos?
AI will be as brain dead as the folks who think they can create it to tell the rest of us what we should think (Version 100,000,002).
“Behold, an unbiased source.”
“Who should I vote for in the next Presidential election?”
Artificial, indeed.
When it makes up citations? Oh, HELL No!
Good afternoon Mr. Flight-ER-Doc. Let me put it this way sir. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error….
https://hal9000computer.wordpress.com/2017/11/22/all-hal-9000-phrases-from-the-movie/
That’s DOCTOR Flight-ER-Doc, if you don’t mind….
Dr. Doctor. Although this slight mishap in your preferential title is unfortunate and regrettable, it is solely the result of human error. This sort of thing has happened before and it has always been due to human error.
What are we actually training these machines to do? They are built to generate blocks of text that humans will accept as valid. Their only measure of success is whether the human accepts the answer or questions it. So, we’re not creating AI so much as BS: a Bluffing System which gradually becomes ever more precisely calibrated to generate words that we will accept as true, regardless of whether they actually are. If AI really is going to take our jobs, then the first thing to be automated by this technology will be politics.
Journalism, the art of carefully crafting a narrative and knowing how to double down, may be the first casualty of AI.