Not hot off the press, but worth a read from a smart guy.
18 thoughts on “AI”
“AI doesn’t have goals”? It doesn’t want to win at chess or Go? It doesn’t want to drive without crashing? Etc? If it doesn’t “want” to do these things, it certainly seems motivated to achieve them and not achieve the opposite.
One can point to all of the benefits of a new technology. But if one of the consequences is death (after which you cannot enjoy the benefits), then the benefits don’t really matter in the end. And the alternative to benefits via AI is still progress, just slower progress. Without AI, we’ll still figure out solutions for the various cancers, for example.
I think he means that it doesn’t have its own goals–just the ones provided by its programmers.
What does it matter? Poorly thought-out goals put into a narrow AI or an AGI that optimizes for them can have very bad outcomes. Practically all problems that are caused by humans can be logically eliminated by eliminating the humans. The funny analogy is the Flight of the Conchords song about how things are so much better now that the robots have taken over. There’s no more elephant abuse…because there are now no more elephants. Now, imagine a world without human war, cancer, depression, suffering, etc. Do we trust that none of the many idiots will insert one of those goals into an accelerating AI?
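A toy sketch of that failure mode (everything here is invented for illustration; the states and numbers don’t come from any real system): hand an optimizer the naive objective “minimize total human suffering,” and the degenerate state with no humans at all scores best.

```python
# Hypothetical illustration of a misspecified objective.
# Score "minimize total human suffering" over made-up candidate world states:
candidate_states = [
    {"label": "status quo",    "humans": 8_000_000_000, "suffering_per_human": 1.0},
    {"label": "cure diseases", "humans": 8_000_000_000, "suffering_per_human": 0.4},
    {"label": "no humans",     "humans": 0,             "suffering_per_human": 0.0},
]

def naive_objective(state):
    # Total suffering; zero humans means zero suffering, so the optimizer
    # "solves" the problem the same way the robots solved elephant abuse.
    return state["humans"] * state["suffering_per_human"]

print(min(candidate_states, key=naive_objective)["label"])  # -> "no humans"
```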
Guys, it is as simple as this: if you outlaw AI, only outlaws will have AI.
The fact is, China’s leaders WILL create AIs and use them to enslave mankind. That is going to happen.
The only way to prevent them from succeeding is to create even better AIs that will fight back on our behalf. Outlawing AI is like trying to have international gun control laws and expecting dictators and terrorists to obey your laws.
And therein lies the rub ….
“Let me make it plain/ You gotta make way for the Homo Superior”
https://youtu.be/ccMYgiSC_eQ
But will AI make for superior athletes?
ChatGPT is a bit more skeptical:
Q: Will AI Save The World?
A: Hello! The potential for AI to contribute to solving global challenges is significant. AI has the ability to analyze vast amounts of data, identify patterns, and make predictions, which can be valuable in fields such as healthcare, climate change, and disaster response. However, it’s important to note that AI is a tool and its impact depends on how it is developed, deployed, and regulated. Ethical considerations, transparency, and accountability are crucial to ensure that AI is used for the benefit of humanity. So while AI has the potential to make a positive impact, it is not a guaranteed solution to saving the world.
Do we trust any intelligence that refers to itself in the third person?
David Spain remains skeptical. And that is its problem.
How long before judicial decisions are made via AI? I have already seen outrage that some submitted arguments were crafted using AI. It is coming, I expect, and I am curious how the letter of the law would be applied, as well as how decades of case law would be interpreted. There certainly are plenty of situations where interpretations were shoe-horned into decisions.
Well, we’ll just have to modify our judicial standards. In some sense it will simplify them: instead of “guilty beyond a reasonable doubt,” we can just claim “guilty beyond reason.”
From what I’ve read, the objections were to legal submissions that contained references to non-existent cases. Some of the AI engines were trained using both fiction and non-fiction source material with no differentiation. So, if a seemingly relevant case was described in a work of fiction, it carried the same weight as a citation from a legal journal. That’s a problem. The military is working hard to incorporate AI into their systems, but they’re being much more selective in the source material used for training the AI.
The Industrial Age ushered in a pernicious idea: “Progress” and the need for a new Technocracy, the blind belief that “pure, incorruptible experts” will bring the new Millennium and usher in prosperity for all. The “AI Cult” is just the latest version of a notion that stretches back to Plato’s “philosopher kings.” People become corrupt (even the programmers!), and GIGO can make any good decision fail.
Just because people, including developers with something to sell, call something Artificial Intelligence doesn’t make it so. From everything I’ve seen, ChatGPT does nothing more than scrape the Web and paraphrase what it deems to be relevant. As such, it doesn’t invent anything and seems to have no way of differentiating truth from fiction. The language output is very slick, very good syntax and grammar, very hard to tell from a studious 8th grader.
Those legal cases where idiot lawyers submitted briefs without checking are perplexing. If there’s one thing ChatGPT should be good at, it’s looking things up. Instead, it created wholly made-up citations, complete with proper formatting and case numbers that actually existed but belonged to completely different cases. It might just indicate that LexisNexis is hard to scrape.
ChatGPT and the rest, as far as I can see, are purely parasitic on the Web: appropriating and paraphrasing others’ content and passing it off as their own while actually creating nothing of value. The very definition of plagiarism. It sure ain’t the singularity.
To your point, Google admits it has developed the most powerful and fully automated plagiarist in history:
Google has updated its privacy policy to confirm it scrapes public data from the internet to train its AI models and services – including its chatbot Bard and its search engine that now offers to generate answers on-the-fly to queries.
The fine print under research and development now reads: “Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public. For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard and Cloud AI capabilities.”
So any book that is available online is fair game for an unattributed quote?
Rand, you didn’t come up with the idea that NASA’s obsession with safety is killing Old Space, Bard did! It just told me so!
😛
What did it say, exactly?
I’m sorry, just kidding… But the potential is there.
What did it say, exactly?
Well, since you asked (I added some characters for formatting purposes only, to make what follows easier to read):
Q: Is NASA’s obsession with safety killing manned space exploration?
A: There is a debate about whether NASA’s obsession with safety is killing manned space exploration. Some people believe that NASA is too risk-averse and that this is preventing the agency from taking on ambitious projects that could lead to major breakthroughs. Others argue that NASA’s focus on safety is essential to protect astronauts and that it is possible to achieve both safety and progress.
There is no easy answer to this question. On the one hand, it is true that NASA has a long history of accidents and fatalities. The Apollo 1 fire, the Challenger disaster, and the Columbia disaster all led to the deaths of astronauts. These tragedies have understandably made NASA more cautious in its approach to space exploration.
On the other hand, it is also true that NASA has made great progress in making space travel safer. The Space Shuttle program, for example, was one of the safest transportation systems in the world. And the International Space Station has been continuously inhabited for over 20 years, which is a testament to the safety of modern spacecraft.
So, is NASA’s obsession with safety killing manned space exploration? It is a complex question with no easy answer. However, it is clear that NASA needs to find a way to balance safety with risk-taking if it wants to continue to make progress in space exploration.
Here are some of the arguments for and against NASA’s obsession with safety:
Arguments for
– Safety is essential to protect astronauts and ensure that they return home safely.
– NASA’s focus on safety has led to major improvements in space travel technology.
– Achieving safety and progress are not mutually exclusive.
Arguments against
– NASA’s focus on safety has made the agency too risk-averse.
– This has prevented NASA from taking on ambitious projects that could lead to major breakthroughs.
– NASA needs to find a way to balance safety with risk-taking if it wants to continue to make progress in space exploration.
Ultimately, the decision of whether or not NASA’s obsession with safety is killing manned space exploration is a matter of opinion. There are valid arguments on both sides of the issue. However, it is clear that NASA needs to find a way to balance safety with risk-taking if it wants to continue to make progress in space exploration.
Seems like Bard also likes to pull “facts” out of its you-know-what? “The Space Shuttle program, for example, was one of the safest transportation systems in the world.” Compared to what? Early Atlas rocket prototypes vs. an Airbus passenger jet?