Was it a senseless tragedy, or was it a slice of richly deserved jungle justice when hitchBOT, a hitchhiking “robot” (actually just a vaguely humanoid prop with a computer chip in it), got curb-stomped by some jackass in an Eagles jersey, ending a journey across two continents?
We can probably all agree that an adult who beats up someone else’s robot for no reason is a jerk. Fine. But what about when kids beat up robots? Are they just jerks in training? Or are they onto something?
Tech Insider reports that Japanese researchers are working on programming assistive robots to avoid large groups of young children, because they’ve noticed that little kids have a tendency to push, hit, kick and throw things at robots they find wandering around public spaces:
If there’s one other thing we learned from the study, it’s that young kids may possess frightening moral principles about robots.
… nearly three-quarters of the 28 kids interviewed “perceived the robot as human-like,” yet decided to abuse it anyway. That and 35% of the kids who beat up the robot actually did so “for enjoyment.”
The researchers go on to conclude:
From this finding, we speculate that, although one might consider that human-likeness might help moderating the abuse, humanlikeness is probably not that powerful way to moderate robot abuse. […] [W]e face a question: whether the increase of human-likeness in a robot simply leads to the increase of children’s empathy for it, or favors its abuse from children with a lack of empathy for it
“Frightening moral principles”? “Yet decided to abuse it anyway”? I’m not going to argue that it’s okay for kids to go whacking stuff (although the “abuse” caught on the videos in the Tech Insider post doesn’t strike me as horrific). I’m just curious about what would motivate kids to want to do it. Is it just because they’re bad kids, and do these findings point to some terrifying truth about man’s inhumanity to manlike robots?
The kids weren’t just indiscriminately violent toward robots; they were more violent toward robots who were designed to be more humanlike, and the story implies that this phenomenon demonstrates some moral failing in the kids. But I think it demonstrates that kids are smarter than adults.
Adults tend to speak as if machines may actually be able to achieve humanity once they’ve been programmed to imitate it well enough. We’ve been prodding this idea since Isaac Asimov was in short pants. When adults are confronted with something that looks and sounds and behaves like a human, but which may or may not be human, we may laugh, or shake our heads, or feel ashamed, or meekly submit, asking ourselves, “Well, what does it mean to be human, after all?”
But when it happens to kids, it makes them mad.
I remember being that kid. I didn’t just dislike ventriloquist’s dummies; I was angry at them. I felt that someone was trying to pull the wool over my eyes, and I resented it. The world was very confusing, and I was expected to swallow unreasonable and nonsensical ideas every single day (the world is round? The sun isn’t really moving? Baking chocolate doesn’t taste good, even though it’s chocolate? What the hell, adults?). The last thing I needed was more flimflammery.
The story continues:
[T]he more human a robot looks — and fails to pass out of the “uncanny valley” of robot creepiness — the more likely it may be to attract the tiny fisticuffs of toddlers. If true, the implications could be profound, both in practical terms (protecting robots) as well as ethical ones (human morality).
Practical implications, sure. If an old lady needs a robot to carry her groceries, they should protect the robot. But ethical implications? Also yes — but not the ones that the researchers seem to think.
For kids, “is this a real person or isn’t it?” isn’t just an unsettling intellectual exercise – it’s a matter of life and death. They don’t know what is real and what is not real, what is human and what is not human, and it’s very important that they figure it out. When they can’t, it makes them angry. A robot is not a person, and when adults expect kids to treat it as if it were one, the kids will put the impostor in its place with their small fists and feet. And, God help us, they’re onto something that so many of us are allowing ourselves to ignore.
We stroke our chins over how much autonomy human-shaped robots ought to have … and at the same time, we shrug and speak of “line item” costs when what we have in our hands — literally, in our hands, dripping with real human blood — is demonstrably a human being.
It is a good thing to rebel against bad things, and blurring the line between human and non-human is a very bad thing indeed. Is this human, or is this not? It matters. It matters. If we can decide, by fiat, that a manufactured thing should be treated with the respect that we owe to human beings, then we can decide, by fiat, that a human being can be treated like a sack of parts. This is happening now, today, constantly, horribly. It matters.
And the robot researchers talk about troubling ethical implications. Well, show a fifteen-week fetus to those Japanese children, and ask them if it’s okay to slice it up and sell the pieces. Then we can talk about “ethical implications.” Then we can talk about who has “frightening moral principles.”
Hold the line, kids. It really is a matter of life and death.
image is a screenshot grabbed from the video found here.