|
Post by Aspen d'Grey on Mar 28, 2007 5:00:36 GMT
Hmm, do robots even recognize a god, or is it something that he picked up from being around humans too long? I like how we see through his eyes, with that green-tinted robot glow. Reminds me of Slashdot. =D It would appear that, while being studied, Robot didn't really have an idea of where he was or how much time had passed. Lovely comment, Reynardine. x.x I assume that Tom's comment was referring to Reynardine as well.

Edit: Whoops! 223, not 113. x.x

PS: Remember, the Drunk Duck mirror updates a few hours early. Grab it at www.drunkduck.com/Gunnerkrigg_Court
|
|
|
Post by mrw on Mar 28, 2007 5:06:47 GMT
Aww, Kat is Robot's angelic redeemer. (Hmm, what does that make Annie?)
For some reason, Reynardine really reminds me of a cat on this page. Oh, now I remember why. Because my cat just climbed on the table, looked at what I was working on, and casually dumped it on the floor. He had that exact same expression as Reynardine. Darn evil, I tell ya.
Welcome back, Aspen! Feels like you've been away for a while.
Edit: *slaps forehead* ...duh, you're right, Annie is his mommy. Sorry, momentary lapse of reason...
|
|
|
Post by Aspen d'Grey on Mar 28, 2007 5:09:38 GMT
Annie is Robot's mother, of course. Everyone needs a mother and a guardian angel.
Yeah, he does kind of seem like a cat while he's trying to crawl up there, and he's got those small ears... And what your cat did probably helped. (Can you tell I'm writing this as I read what you wrote?)
As for absence, work has been insane-crazy and stuff. Slight case of burnout to go along with it.
|
|
|
Post by aoeniac on Mar 28, 2007 10:43:00 GMT
Jeeze, what is there NOT to like about Robot?
He's sweet, caring, cheerful, and very good at walking!
|
|
|
Post by todd on Mar 28, 2007 10:52:14 GMT
A very sweet and funny page (I especially liked Reynardine's cynical comment over Robot's response to discovering that he's now a boxful of paper clips).
|
|
|
Post by absentbabinski on Mar 28, 2007 12:45:53 GMT
I'm intrigued by what makes leaving the site so "criminal".
|
|
|
Post by La Poire on Mar 28, 2007 17:35:17 GMT
Robot was dismantled long before that; his criminal behaviour probably dates from before the start of the comic. Eglamore seems to have just handed him over to the robotic authorities, which deemed him a criminal. Remember Robot's question: "Do I have to come back?" It seems he didn't enjoy the Court very much. That in itself is probably criminal for a robot.
|
|
|
Post by Aspen d'Grey on Mar 28, 2007 18:10:04 GMT
I'm not even sure if Eglamore handed him over. The Robots seem to have pretty free rein over Gunnerkrigg Court, so long as they aren't seen by students, so it's possible they picked Robot up themselves.
Also, think of Asimov's Three Laws of Robotics. Maybe he broke one of those? (Then again, there's no telling that these robots follow them.)

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Do you think the GC Robots are bound by these?
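For the code-minded among us, here's roughly how I picture the three laws nesting: a toy Python sketch, entirely my own invention rather than anything from Asimov or the comic.

# Toy sketch: Asimov's Three Laws as a strict priority ordering.
# The Action fields are hypothetical, just to show how the laws nest.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    neglects_human: bool    # would inaction here let a human come to harm?
    ordered_by_human: bool  # was this action ordered by a human?
    harms_self: bool        # would this action damage the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (the First Law has already had its veto).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to Laws 1 and 2.
    return not action.harms_self

# An ordered action that damages the robot is still permitted:
# the Second Law outranks the Third.
print(permitted(Action(False, False, True, True)))  # True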
|
|
|
Post by mrw on Mar 28, 2007 19:39:08 GMT
I suspect Eglamore did hand him over, because of his "it's been disposed of" comment. At least that's what it implies to me.
Asimov's Second Law of Robotics (as stated) is troublesome unless it is expanded upon. By placing all humans above robots, it does not address the issue of conflicting orders given sequentially. For example, if Doorbot was instructed by the adminbot not to admit humans, he would be breaking the Second Law by refusing to admit Annie. If it was a human who had given the instruction not to admit humans, would Doorbot be breaking the Second Law by refusing Annie entry in that case? And if you make the argument that allowing entry would endanger Annie through action/inaction, on whom does the burden of determining what constitutes danger and inevitable harm fall? Doorbot? How would Doorbot know all possible sources of harm within?
To me, Asimov's Three Laws of Robotics, as written, are a logical trap. One only needs to look at the lawsuit-happy nature of America to see why things cannot be simplified that much. "By serving that lady hot coffee, Ronald McRobot allowed her to come to harm and has therefore violated the First Law. Paperclip art thy name."
And how would a "logical" creation like a robot be able to interpret the "spirit" of the law? (Although I personally have faith in our good Robot...)
So in a nutshell, I doubt Gunnerkrigg's robots follow Asimov's laws that closely.
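To put that Doorbot deadlock in concrete terms, here's a toy sketch in Python (every name is my own invention) of why the bare Second Law offers no tiebreaker between two standing orders that both trace back to humans:

# Toy illustration of the Doorbot problem: two standing orders, both
# traceable to humans, that cannot both be satisfied. The bare Second
# Law gives no tiebreaker. All names are hypothetical.

standing_orders = [
    {"source": "adminbot (authorized by a human)", "rule": "admit no humans"},
    {"source": "Annie (a human, at the door)",     "rule": "admit me"},
]

def doorbot_decide(orders):
    # The Second Law as stated: obey orders given by human beings.
    # Both orders qualify, and they conflict; nothing in the law
    # itself says which one wins.
    rules = {order["rule"] for order in orders}
    if "admit no humans" in rules and "admit me" in rules:
        raise RuntimeError("Second Law deadlock: conflicting human orders")

try:
    doorbot_decide(standing_orders)
except RuntimeError as deadlock:
    print(deadlock)  # the law alone cannot decide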
|
|
|
Post by nickgoodway on Mar 28, 2007 20:12:58 GMT
I think Rey might well be jealous of the attention Annie is paying to Robot; look at his expression in panel 5. He could just be showing general cat-like malevolence, but I wonder. And for all her politeness to robotkind, has Annie thanked Rey for his help?
|
|
|
Post by La Poire on Mar 28, 2007 22:27:53 GMT
Only Laws 1 and 3 are really necessary for a functional society with robots (although Law 1 couldn't be extreme, or humans would eventually be kept permanently in stasis). This would solve the whole conflicting-orders thing, and the robots would need to work to survive anyway. I'd like to expand on this some more, but I need to go to bed.
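One quick sketch before bed, though: drop the Second Law and the conflict machinery becomes trivial, because there are no orders left to conflict. A throwaway Python toy (entirely my own, nothing canonical):

# Two-law variant: with no Second Law there are no human orders to
# conflict. The robot only avoids harming humans, then protects itself.

def two_law_permitted(harms_human: bool, neglects_human: bool,
                      harms_self: bool) -> bool:
    if harms_human or neglects_human:  # Law 1, kept non-extreme
        return False
    return not harms_self              # Law 3

print(two_law_permitted(False, False, False))  # True: nothing to veto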
And I'm pretty sure Reynardine is just malevolent. We'll likely see him doing something of that nature pretty soon, I'll wager. I'm sure Annie's thanked him between frames as well.
|
|
|
Post by UbiquitousDragon on Mar 28, 2007 22:56:25 GMT
Don't forget the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm". But I agree, I don't think these robots work by Asimov's Laws, and I'd rather see something new than something derivative of previous sci-fi (/steampunk) works.

I agree Reynardine's probably going to do something horrible, probably try to break Robot's spirit. Or maybe he'll do something useful, like ask him about the 'very nice man'; Reynardine didn't trust Robot or Shadow2's impersonator. And Robot knew something about Reynardine (pages 117 and 118). Erm, I'm rambling now. Sorry, folks.
|
|
|
Post by Aspen d'Grey on Mar 28, 2007 23:23:05 GMT
The 0th law only applied in some of the later books, and only to certain higher-level robots who could grasp a broader concept of 'harm' than standard robots. I don't think it really applies to our robots.
|
|
|
Post by aoeniac on Mar 28, 2007 23:24:52 GMT
Asimov's laws of robotics are much more complex than mrw's post above suggests; they are only summarized in those three lines so as to be easy to understand. As far as I can tell, having read a lot of Asimov, conflicting orders given by humans are dealt with in terms of chronological priority. If somebody gives an order, and then somebody else gives an order which clearly conflicts with the first and cannot be satisfied while the first order is in effect, then the second order will be ignored, barring the involvement of the First Law.

Also, the Second Law only pertains to orders given by humans. The priority of orders given by other robots is not determined by the laws; whether a robot follows the orders of another robot is determined purely by the circumstances, such as whether the order involves the laws, whether the ordering robot has been authorized by humans to give orders, or simply whether the robot being ordered thinks it would be a good idea to comply. If the Admin-Bot had ordered Door-Bot not to admit humans, it would be a pointless order unless it was authorized by a human.

If a human ordered Door-Bot not to admit Annie, and afterwards Annie ordered Door-Bot to let her in, Door-Bot would not let her in, barring any circumstances where not letting her in would potentially cause her or other humans harm. If Annie threatened to harm herself if she was not let in, Door-Bot would let her in, unless letting her in would cause greater harm to another human or humans. If it was within his power to restrain her until help arrived, he would probably do that so as to avoid too much damage to any humans.

As you can imagine, robots can have difficulty assigning quantitative values to humans and human conditions in order to determine which of two choices serves the "greater good". But still, yes, the task of determining what constitutes danger does indeed fall upon Door-Bot. Asimov-style robots are designed to base their actions on what are, to the best of their knowledge and calculations, the best choices for preventing human harm. Robots in Asimov's books have even been known to quote the percentage chance that something harmful might happen, such as when somebody asks if they're in any danger and the robot replies that, to its knowledge, the chance of harm coming to him at this point in time is some extremely small fraction of a percent, representing the odds of an undetected meteor smashing into him before a robot can come to his aid.

Psychological harm is generally not a concern to robots unless it falls within well-known conditions, although there have been books involving telepathic robots, who have a whole new host of problems to deal with because they have to figure out what causes the least harm both physically and mentally. And if Ronald McDonald were a robot, he would not be violating the laws by serving hot coffee that subsequently burned somebody, unless he was there when the coffee was spilled and did not attempt to protect the human from it.
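For anyone who wants that tiebreaker spelled out, here's a rough Python sketch of the chronological-priority reading. It's purely my own illustration; none of these names come from the books.

# Sketch of the chronological-priority reading: the earliest human order
# stands, and a later conflicting order is ignored unless it engages the
# First Law (preventing harm), which trumps everything. Names invented.

def resolve(orders):
    """Return the order the robot ultimately acts on, assuming every
    order in the list conflicts with the ones before it."""
    standing = None
    for order in orders:
        if standing is None:
            standing = order          # the first order stands
        elif order["invokes_first_law"]:
            standing = order          # the First Law overrides priority
        # otherwise: a later conflicting order is simply ignored
    return standing

orders = [
    {"rule": "do not admit Annie",              "invokes_first_law": False},
    {"rule": "let me in",                       "invokes_first_law": False},
    {"rule": "let me in or I will hurt myself", "invokes_first_law": True},
]
print(resolve(orders)["rule"])  # let me in or I will hurt myself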
|
|
|
Post by mrw on Mar 29, 2007 1:41:24 GMT
Cool, thanks for elaborating on that. I kind of suspected there would be more to it than what Aspen posted, especially since Asimov was nothing if not thorough. He was one of my favorite authors, but now that I'm thinking about it, I never read the "I, Robot" series. One of these days I'll have to do something about that.
And man, I would hate to be the one who had to program their behavior.
|
|
|
Post by Count Casimir on Mar 29, 2007 2:12:08 GMT
All you have to do is program one robot to build other robots and program them.
|
|
|
Post by mrw on Mar 29, 2007 2:36:15 GMT
Well, then you would end up with a bunch of clones, which would be kind of boring. What I was really referring to was the programming required to make the robots not only sentient, but able to interpret law with such skill, which is something most humans can't even do.

Just FYI, my limited experience is with real-world robots, not sci-fi robots. Being an electrical engineering major, I tend to see them as "dumb" machines that are only as good as their creator, and I base everything around today's technology. It kind of limits my imagination regarding robots. Given that we humans have yet to create a robot as capable as a housefly (even the best Japanese technology doesn't come close), a robot with a true personality seems as far-fetched to me as a minotaur in the basement.

That being said, I really like our good Robot.
|
|
|
Post by Aspen d'Grey on Mar 29, 2007 2:45:14 GMT
I like to make people think rather than posting all of my ideas. Call 'em hooks, if you will. My intention was to bring up the Laws of Robotics, not to provide the very good, detailed description and analysis that aoeniac was kind enough to give.

That said, those three laws (plus the 0th law) are what Asimov explicitly defined in his books. There's a lot about how the robots act on and interpret them that goes along with it, but those are the things to know: these are very intelligent robots, so the hard-coded 3 (or 4) laws will be interpreted by them very thoroughly. Also remember that only a few robots, very 'late' in the chronology of Asimov's writings, have the ability to grasp the concept of 'harm' well enough to act on or consider the 0th law (no harm to humanity).
|
|
|
Post by aoeniac on Mar 29, 2007 4:50:45 GMT
Somehow, I suspect that Tom's robots aren't Asimov style...
They're too cute and silly!
|
|
|
Post by La Poire on Mar 29, 2007 14:44:29 GMT
Yeah, the laws are more interesting as a thought experiment than anything practical. Truly intelligent robots would have to be self-learning (or whatever the correct term is), which would mean they would have to be raised as well as programmed (the human brain, after all, takes up to 25 years to fully form). It doesn't seem anyone has been doing a very good job in that department at Gunnerkrigg, hence all the childish robots.
|
|
|
Post by Aspen d'Grey on Mar 29, 2007 19:27:02 GMT
Self-learning doesn't mean you have to start from scratch; you could easily give robots a preset amount of experience that would allow them to function well and do a specified job, with any learning capability on top of that.
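Think of it as shipping a pre-trained starting state and letting on-the-job learning accumulate on top. A hand-wavy Python sketch, with every name invented:

# Hand-wavy sketch of the idea: a robot shipped with preset "experience"
# (a factory-installed starting state) that keeps learning on the job.
# Entirely hypothetical, nothing to do with the comic's robots.

class CourtRobot:
    def __init__(self, preset_experience):
        # Factory-installed knowledge: enough to do the assigned job well.
        self.knowledge = dict(preset_experience)

    def learn(self, situation, response):
        # On-the-job learning, layered on top of the preset.
        self.knowledge[situation] = response

robot = CourtRobot({"door is closed": "open it", "human arrives": "greet"})
robot.learn("human cries", "fetch help")  # learned later, not preset
print(robot.knowledge["human cries"])     # fetch help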
|
|