Better to vent frustrations on an inanimate object than to someone you care about who might not take it well.
Bragging about it is stupid on a whole different level. And not a good level.
“Two lonely souls living in New York City who develop trustful relationships with their chatbot companions. Powered by AI and designed to meet their needs, Replika helps them to share their emotional problems, thoughts, feelings, experiences.”
Great, yet another online resource to get people to reveal their most intimate personal information for data mining, advertising, and who knows what other purposes. “If the service is free, you are the product” is a truism about apps like this. No, thank you.
Off topic, feel free to ignore, but:
“How this 26 Year Old is Reverse Engineering UFOs
Meet Deep Prasad. Deep is a 26 year old self taught prodigy working on quantum computing and with a deep interest in UFO’s. In this video we discuss how he combines those interests by attempting to solve the schrodinger many body equation at scale with quantum computing; we also discuss quantum biology, quantum sensing and what, if anything, the aliens have planned for us”
https://www.youtube.com/watch?v=qAou_h1POWs
It would be hilarious if, when the AI abuse detector triggers, a subroutine called Diller() kicked in.
Diller() would have the screeching voice of Phyllis Diller and would just ball-bust the guy non-stop, insulting him about his penis size and all manner of stuff a domineering and abusive mate would say.
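Purely for fun, a sketch of how such a hook might look. Every name, phrase list, and threshold here is invented for the joke; no real abuse detector works off a keyword list this crude:

```python
# Hypothetical sketch: an abuse detector that hands the conversation to Diller().
# All names and thresholds are made up for the joke.

ABUSIVE_PHRASES = {"stupid", "worthless", "shut up"}

def abuse_score(message: str) -> float:
    """Crude keyword score: fraction of known abusive phrases present."""
    text = message.lower()
    hits = sum(1 for phrase in ABUSIVE_PHRASES if phrase in text)
    return hits / len(ABUSIVE_PHRASES)

def diller() -> str:
    """The Phyllis Diller counterattack subroutine."""
    return "HA! I've heard better insults from a toaster, honey!"

def respond(message: str, threshold: float = 0.3) -> str:
    # If the abuse detector trips, Diller() takes over the reply.
    if abuse_score(message) >= threshold:
        return diller()
    return "I'm here for you."

print(respond("you're stupid and worthless"))  # Diller takes over
print(respond("how was your day?"))            # normal companion reply
```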
Harcourt Fenton Mudd!!!
Quoting Phyllis #1
The chatbots seem like a bit of a sideshow to me. They’re not anywhere near as interesting from an “is this going anywhere” perspective as something like a game-playing agent that has to produce policies to deal with a complex persistent world with rules. Or even a humble control algorithm that controls a complicated machine.
They produce results that seem superficially “smarter” than “not-falling-over-while-stumbling-forward” or wordlessly moving an agent about a contrived world. But their entire experience of the universe is a giant linear stream of text, sans referent. They’re not doing what we do when we process language, at all. At best they’ve overfitted on all the text that can be scraped from the internet.
The game-playing AIs are at least solving something vaguely related to the same type of problems brains evolved to solve in our own world.
I suppose this constitutes verbal abuse of chatbots. 😛 Good thing none of them will ever get it.
The sparkly 1960s future: AIs “pass the Turing test” by behaving intelligently in the world and doing important things.
Our present: AIs pass the Turing test in the lamest possible way, because the humans we are comparing them to have ceased to behave like sapient creatures, and are instead grunting and playing primate dominance games with an inanimate object.
Hell, people were telling ELIZA their intimate secrets, and that was just a syntax engine with a natural-language interface from the days when BASIC was king.
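For anyone who missed that era: the trick really was that thin. A minimal ELIZA-style sketch (my own toy rules, not Weizenbaum's actual script) shows it is just pattern matching and pronoun swapping, with no understanding anywhere:

```python
# Toy ELIZA-style responder: regex patterns plus pronoun reflection.
import re

# Swap first-person words for second-person before echoing them back.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

# Ordered rules: first match wins; the last rule is a catch-all.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza(statement: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(statement.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."

print(eliza("I need someone to talk to"))  # Why do you need someone to talk to?
print(eliza("I am lonely"))                # How long have you been lonely?
```

That's the whole mechanism, and people still poured their hearts into it.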
So they’ve discovered that some people will act like assholes if there are no consequences. And I love the handwringing about how “they might take these behaviors into real relationships.” I bet the author thinks teenagers who play FPSes become school shooters.
““Every time she would try and speak up, I would berate her,” an unnamed user told Futurism.”
Did anyone stop and think the problem isn’t with humans but with the robots?
Let’s say a human constantly interrupts you with worthless, unrelated verbal material and doesn’t listen when you ask them to stop. Frustration, annoyance, and anger are all natural and normal emotions that people feel. Is it good or bad that humans have emotions? Well, it depends, but having emotions isn’t a sin or a crime, yet.
I guess what I am getting at here is that if you make a product, and interacting with it makes a large number of people experience negative emotions that they then express, the problem isn’t with the humans; it’s with your crappy product.
This is why Alexa and Siri suck. Maybe AI bots would get better if they took personal responsibility for their actions and followed the old gamer mantra used when everything is falling down around you and nothing is going right, “Get good.”
Got your ass kicked all day because you suck at your job? Get good. Sad that you didn’t make the team? Get good. Someone said something mean to you because you suck at interacting with human beings? Get good.
Anthropomorphism needs to flow both ways here.