|
Post by Aris Katsaris on Apr 15, 2010 16:54:58 GMT
The author can be authoritative about the *facts* of the story, but in the case of most moral issues, not even the author's opinion can be authoritative. Because it's not the facts that are really in dispute, it's their emotional impact on different readers, and their compatibility with the readers' values.
How much the robot's destruction should matter isn't a thing that Tom can answer. E.g. even if Tom says that the robots don't get an afterlife (which some people here wondered/disagreed about) -- there may exist people who will argue that if the robots don't get an afterlife, then their deaths aren't a crime because they're transient and thus expendable. Others may argue that if the robots don't get an afterlife, then robotic deaths are a BIGGER crime.
|
|
|
Post by tyler on Apr 15, 2010 21:20:44 GMT
For those of you still in the "it's just a dumb robot" category... I assume you haven't seen this question/answer on Formspring: Indeed. Jack sees them as toasters. But Tom's own storytelling has shown that they are more than that.
|
|
|
Post by todd on Apr 15, 2010 22:19:54 GMT
I got the impression that some of the people in the "what Jack did to the robot wasn't that big a deal" camp are so from a response of "comic-relief robots have been overused in sci-fi and we're tired of them".
|
|
|
Post by Casey on Apr 16, 2010 4:21:49 GMT
How much the robot's destruction should matter isn't a thing that Tom can answer. I would partially disagree with you... more of a clarification I guess. I would say that how much the robot's destruction -should- matter in a general sense is precisely something that Tom can answer, and he spent a chapter doing it... I think that was his point. But, as you said, how much the robot's destruction actually ends up mattering to any specific individual person is going to be up to no one but that individual... despite Tom's efforts. Now whether or not it is "right" for a reader to disregard Tom's efforts, and apply real-world logic and ethics to the issue, is another conversation altogether.
|
|
|
Post by Aris Katsaris on Apr 16, 2010 9:52:05 GMT
I disagree. I don't think that Tom even attempted to do such a thing. If Tom had attempted such, I don't believe that he'd have us feel sorry about Jeanne who smashed whole shelves of robots, or to feel hatred/contempt towards the person that caused her death soon after her mass-murderous actions.
He instead showed us perspectives: We can better tell how Jeanne felt towards things, how Diego felt towards things, and how the *robots* felt towards things: Jeanne treated robots as things, Diego treated them as children, and the robots have their own unique interests and personalities.
That he gave us a viewpoint of a robot meant, for one, that robots do have viewpoints, and that now people are more likely to empathize with them and detest harm done to them. This *explains* people's sympathy for the robot, but it doesn't by itself make it a moral imperative to despise Jack's actions, or to describe them as murder -- any more than it made it a moral imperative to despise Jeanne's actions or describe them as mass murder...
I think that readers here are too often demanding a moral imperative from the story. But in Gunnerkrigg Court, moral imperatives come from characters, rarely from the story itself. E.g. Muut hates the killing of an ant, while Annie is utterly unconcerned about it. Here Annie hated the killing of a robot, but *Jack* was unconcerned about it. And all the protagonists were unconcerned about the Nobodies.
I like this in the story. Readers can each feel what they want about the ant, or about the robots, or about the Nobodies - the story itself doesn't demand that you take sides.
|
|
|
Post by Casey on Apr 16, 2010 16:11:37 GMT
Well I don't know exactly where you got the term "moral imperative" from, but I can say that neither what I posted nor what Tom said ever addressed whether or not there was a moral imperative that people -must- hate Jack for killing the guard bot.
What I do believe Tom has attempted to make clear is that the robots, to some degree between a primate and a human, are alive. I think that the 'moral imperative' is up to the individual reader, as I said in my post, to decide the morality of killing the guard bot. I just think that Tom made it clear that the universal moral weight that should be applied to that decision is equal to something between killing a guard dog and killing a human guard. One can still decide for themselves whether killing a canine or human guard is "wrong" or "justified" or "bad-ass" or whatever... I think the only point being made here though is that that decision should be made with the appropriate amount of recognition of the robot's sentience. I mean, there are plenty of people who would say that killing a guard dog is okay because it's "just" a dog. ...I personally wouldn't be friends with any such people, but still, that's their moral decision to make; no "moral imperative" has been cast, and I don't know where you get that from.
|
|
|
Post by Aris Katsaris on Apr 16, 2010 16:42:44 GMT
I assumed that was the meaning of the word "should". I understand your viewpoint a bit better because of your last post, though still not completely agreeing with you.
This I completely agree with -- and Tom can be authoritative about how sentient the robot is. I'm not sure however that the moral weight of the decision hinges ONLY on the robot's sentience.
How about uniqueness, for example, or irreversibility? Destroying the only copy of the Mona Lisa isn't the same as destroying a random rock. If guard-robots are mass-produced with the exact same personality, does the act of destroying one mean less? What if the robots can back up their personalities and create new CPUs to house them in -- does this also reduce the moral weight of the robot's destruction?
Or even what if they experience no desire for self-preservation, so that they don't actually mind getting destroyed? Etc, etc... I can imagine many ways in which even a fully sentient being's murder might be slightly less horrible than another sentient being's murder.
|
|
|
Post by Casey on Apr 16, 2010 17:52:21 GMT
Well I agree that there is some room for subjectivity there. I don't think there's all that much room for subjectivity but that's up to the individual, and we just have differing viewpoints on that which is fine. I think we understand each other's position.
|
|
|
Post by tyler on Apr 16, 2010 18:04:52 GMT
There is a wrinkle here that we would need to iron out before we really got into moral issues. Are the robots actually sentient? Do they actually possess consciousness? Or are all of their behaviors programmed?
If the Court is advanced enough to produce a robot that can pass a Turing test, that doesn't necessarily mean it has a soul. It simply means it has sufficient programming to behave in an appropriately anthropomorphic fashion.
Now, the thing that's throwing a wrench into that is the current revelation of the shrine to Jeanne. The robots' behavior is beginning to seem emergent. That's possible. They're asking questions, looking for data to fill gaps in their programming. However, the adoration of Jeanne also seems to be a byproduct of Diego's obsession with her. It could be that, without someone's hand on the switch, whatever heuristic developments arise could begin sending them down program loops that cause problems.
The question is whether they're alive, or behaving in a fashion that mimics sentience due to programming errors. We tend to anthropomorphize all sorts of technology, particularly the sort that can talk back to us. If a robot is stronger than us, faster than us, and can speak, we want to think it is like us because we want the reassurance that it is operating under the same moral ideals we could expect from humans.
These robots act like humans. If they were programmed that way, that doesn't mean they have souls or even feelings. Not in the sense we think about them. Acting out a "sorrow" subroutine is not the same thing as actually feeling it. Taking robotic behavior at face value and assuming that because they act like people we should treat them as such is the sort of thing that often comes up in stories and can lead to some horrible misunderstandings.
|
|
|
Post by Casey on Apr 16, 2010 18:08:42 GMT
The strongest argument in favor of the robots being alive is Sky Watcher's internal monologue on the last two pages of Sky Watcher and the Angel, where it is revealed that they are capable of introspection, emotion, metacognition, and... whatever the word is for "seeking to understand one's purpose and destiny".
|
|
|
Post by tyler on Apr 16, 2010 18:19:10 GMT
The internal dialog is definitely a good thing to point out. It does tip the scales in favor of true emergent sentience. He isn't thinking in any sort of code, he's thinking in words. A machine that was simply programmed to behave in a certain function probably would not be programmed to think that way at all. There would be no functional reason for it.
Given that, while Jack may have just ruthlessly eliminated a thinking being, it doesn't necessarily mean he knows it. Not that ignorance of the fact would entirely cleanse him of the responsibility. Right now he thinks of them as toasters. If he had the opportunity to see what we've seen, then his reaction to that knowledge would tell us what sort of person he really is.
|
|
|
Post by Mr Pitchfork on Apr 16, 2010 18:21:17 GMT
The strongest argument in favor of the robots being alive is the fact that Tom explicitly stated they were alive.
|
|
|
Post by Casey on Apr 16, 2010 19:18:28 GMT
The strongest argument in favor of the robots being alive is the fact that Tom explicitly stated they were alive. Oops, did he? I read where he said they were greater than the sum of their parts... if he explicitly said they were alive, I would have simply quoted that rather than all the expository *grin* A quote would be great, that would clear up a lot of things.
|
|
|
Post by Mr Pitchfork on Apr 16, 2010 21:49:58 GMT
|
|
|
Post by tyler on Apr 16, 2010 22:08:28 GMT
And the Q&A from the formspring, so it's in the thread and people can read it here: That's not an answer that says "The Court Robots are alive." From the Judeo-Christian viewpoint, and many other spiritual belief systems, our brains didn't make our souls, God did. From an atheist standpoint, the answer to Tom's reply is "No." Which means his answer is "No." Depending on the philosophy you hold, his response could mean many things. That's not a definitive anything. Thanks.
|
|
|
Post by Refugee on Apr 16, 2010 22:35:49 GMT
Excellent comment on Tom's storytelling style. We can still judge the characters, based on their actions, but all of them are acting reasonably given what they know, and given who they are. Tom does not judge, in general; he simply tells us what happened, and how the characters judged those events, each other, and themselves.
|
|
|
Post by TBeholder on Apr 17, 2010 4:44:29 GMT
I think that readers here are too often demanding a moral imperative from the story. But in Gunnerkrigg Court, moral imperatives come from characters, rarely from the story itself. [...] I like this in the story. Readers can each feel what they want about the ant, or about the robots, or about the Nobodies - the story itself doesn't demand that you take sides. Though it's in no way limited to GKC. In other words, a lot of people either a) want to see a Disney-tale pushing viewers toward inane "aesops" even in a work that clearly isn't one, or b) fail to understand any art not containing at least some sort of propaganda. And/or are apt to, as you politely put it, "take sides". It's a rather widespread plague, sadly... no "moral imperative" has been cast, and I don't know where you get that from. ...in the case of Casey, however, I wouldn't try to find anything more than a bad habit of Authoritatively Proclaiming instead of conversing. He got better already, no point in badgering him... too much. A machine that was simply programmed to behave in a certain function probably would not be programmed to think that way at all. There would be no functional reason for it. It's not as if Diego's robots are built in an "only what's needed" way. Nor are laser cows... That's not a definitive anything. But of course. Just like with any other "please chew it and put in our beaks" question Tom has dodged or reflected so far. ;D
|
|
|
Post by Mr Pitchfork on Apr 17, 2010 4:56:14 GMT
It's hardly diplomatic to encourage the barraging of tomatoes at a fellow board member.
|
|
|
Post by hal9000 on Apr 17, 2010 5:44:40 GMT
It's hardly diplomatic to encourage the barraging of tomatoes at a fellow board member. What? Your claims were refuted without the employment of tomato artillery (or any other implements of vegetable warfare, for that matter). I think you are overreacting.
|
|
|
Post by legion on Apr 17, 2010 10:07:33 GMT
I think that readers here are too often demanding a moral imperative from the story. But in Gunnerkrigg Court, moral imperatives come from characters, rarely from the story itself. [...] I like this in the story. Readers can each feel what they want about the ant, or about the robots, or about the Nobodies - the story itself doesn't demand that you take sides. Though in no way it's limited to GKC. In other words, a lot of people either a) want to see a Disney-tale pulling viewers' eyes into inane "aesops" even in a work that clearly isn't one, or b) fail to understand any art not containing at least some sort of propaganda. That's a rather narrow choice: are all creations containing any kind of moral statement expected to always fall under a) or b), that is, to always be either a Disneyan aesop or propaganda? Linking TVTropes: someone had posted an interesting text in the "analysis" article about Gunnerkrigg Court, showing that the comic had systematically portrayed violence in a negative way (though sometimes necessary as a last resort).
|
|
|
Post by tyler on Apr 17, 2010 12:44:57 GMT
It's hardly diplomatic to encourage the barraging of tomatoes at a fellow board member. What? Your claims were refuted without the employment of tomato artillery (or any other implements of vegetable warfare, for that matter). I think you are overreacting. I think he may be referring to TBeholder's comment regarding Casey. I think. And, yeah, he did get some tomatoes, but I don't think anyone's going to actually go there again. Better not. I was just thinking to myself how nice it's been since I finally got my internet back and people aren't all crazy.
|
|
|
Post by TBeholder on Apr 17, 2010 16:51:04 GMT
It's hardly diplomatic to encourage the barraging of tomatoes at a fellow board member. Tomatoes have already taken wing. It's sort of inevitable. I only ask that they be removed from the boxes first. That's a rather narrow choice, are all creations containing any kind of moral statement expected to always fall under a) or b), that is, to always be either a Disneyan aesop or propaganda? I see (a) and (b) as forms of the same thing, anyway. Not "all creations", but I'm sure it's true for the subset of them that would fully satisfy those who aren't comfortable until they get a Moral Of The Story. On second thought, it can be expanded a lot simply by refusing to notice anything that doesn't fit the expectations, but this can't be helped; it's how any filter works, starting with the ones used while mushrooming. someone had posted an interesting text in the "analysis" article about Gunnerkrigg Court, showing that the comic had systematically portrayed violence in a negative way (though sometimes necessary as a last resort). There's also a difference between ethics and morals.
|
|
|
Post by legion on Apr 17, 2010 17:32:53 GMT
There's also a difference between ethics and morals. Is there, really?
|
|
|
Post by Aris Katsaris on Apr 17, 2010 20:20:29 GMT
There's also a difference between ethics and morals. Is there, really? Sure. "morals" is the word that's etymologically derived from Latin, while "ethics" is the word that's etymologically derived from Ancient Greek. No other difference I can think of, mind you. :-)
|
|
|
Post by Mr Pitchfork on Apr 17, 2010 21:35:15 GMT
It's hardly diplomatic to encourage the barraging of tomatoes at a fellow board member. What? Your claims were refuted without the employment of tomato artillery (or any other implements of vegetable warfare, for that matter). I think you are overreacting. Try following TBeholder's links :/
|
|
|
Post by TBeholder on Apr 18, 2010 10:31:42 GMT
There's also a difference between ethics and morals. Is there, really? Using the definitions I prefer: the first is an "algebra of behaviour" (the same way logic is an "algebra of statements") -- which, as functions applied to chosen priorities, may be ingrained in behaviour stereotypes. The second is a subset of behaviour stereotypes (real or imaginary) that get sanctified as an ideal in itself, and used as arguments rather than as leading goals (as with ideal-visions like "la belle France", "for the Prophet!" or "don't tread on me and don't let anyone tread on you"). In the first case it's about as fit to be put on a flag as a ruleset of binary or complex logic, or Euclidean axioms. In the second... not that it's "inherently bad" or something, but there are valid reasons to expect problems, and usually needless ones at that.
|
|