|
Post by speedwell on Jun 7, 2024 12:20:06 GMT
If it works, they should already know.

Probably? Supposing the new Omega device has a basket of rules for making predictions from inputs, modeled on Omega's perceptions and her consciousness of same, then if it has been making accurate predictions for some time in a small closed system it should in theory be able to keep doing so indefinitely.

The problem is that some of the rules aren't really rules. They're shortcuts developed from deep, long-term observation of Omega. They're mostly based on causality... but the GKU doesn't entirely work on causality. As time passes, some of these shortcuts will cause the device to start making wrong predictions. They can modify the rules through continuous verification and the synthesis of new shortcuts, and presumably the new Omega device already does that. If they do that enough, it should keep working for a very long time before something comes up that the AI can't compensate for: either a shortcut that can't be identified as bad because of a blind spot in sampling*, or one that can't be replaced with a new one. Even then, maybe they can brute-force it if they throw enough computing power at the problem.

However, that means only that the device's utility could outlast the closed system. Locality means something to the ether, so travelling an effectively-mythical distance to an otherwise empty planet only presents something of an obstacle. It isn't an inviolate barrier, and even with scrubbing the memories of the people who remain and with (anti)etheric tech isolating the really-real new planet, they're still within the same overall system. The ether is going to find their Story insufferable, and that's assuming they're not deluding themselves from day one.

The only questions are how long until it fails and in what manner. I think a sudden catastrophic Roanoke-colony disappearance is most likely, but a natural disaster or an alien attack is always a possibility. The latter is what I'd most like to see in the comic; I'd really dig seeing the Court leadership desperately trying to figure out why these aliens are attacking, why they always wear porcelain gorgets, why they keep saying "Skibidi," and how they can invent new weapons and ramp up the production of robots to fight them.

*I think over time the errors would be chased into blind spots where they can't be recognized as such, and from there cause problems with other new rules that would also be hard to deal with. This might go undetected because they wouldn't always result in an identifiably wrong prediction; in fact they'd probably only rarely result in something obviously bad. At first.

I feel pretty good about saying that Omega doesn't have a proper idea where her powers actually derive from in the first place. It would be deliciously ironic if they only worked in a context that included the Ether.
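A tiny Python toy of what I mean, just to make the drift argument concrete (everything in it is made up for illustration; it obviously isn't how the actual device works): the "real" closed system is a chaotic map plus an occasional nudge the learned shortcut can't see, the device only knows the causal rule, and the only defense is periodically re-verifying against sampled reality. Check often and the obviously-wrong predictions stay near zero; let the gap between checks grow and the invisible nudges get amplified into exactly the kind of failures I'm talking about.

def true_step(x, t):
    """The real system: a chaotic map plus a tiny un-modeled nudge now and then."""
    nudge = 1e-6 if t % 13 == 0 else 0.0   # stand-in for an influence the shortcuts never see
    return 3.9 * x * (1.0 - x) + nudge

def rule_step(x):
    """The device's shortcut: the causal rule alone, with no idea the nudge exists."""
    return 3.9 * x * (1.0 - x)

def run(steps, verify_every=None):
    """Roll the prediction forward; optionally snap it back to sampled reality every k steps."""
    real = pred = 0.2
    obviously_wrong = 0
    for t in range(steps):
        real = true_step(real, t)
        pred = rule_step(pred)
        if verify_every and (t + 1) % verify_every == 0:
            pred = real                     # continuous verification: re-sync to the sampled truth
        if abs(real - pred) > 0.1:          # count predictions that are visibly off
            obviously_wrong += 1
    return obviously_wrong / steps

for label, k in [("no verification", None),
                 ("verify every 100 steps", 100),
                 ("verify every 5 steps", 5)]:
    print(f"{label:>24}: {run(2000, k):.1%} of predictions obviously wrong")

The nudge is deliberately tiny and rare; the point is that it doesn't need to be big, it only needs to be un-modeled.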
|
|
|
Post by speedwell on Jun 7, 2024 12:23:11 GMT
I cannot be the only one reminded of how Coyote was a being with potential knowledge of everything who wanted it to end. A closed system with a built-in predictive function. What could possibly go wrong? Hey, Speedwell! Long time no see. 👋

Out preparing for and having surgery! Recovering well and getting a little energy back. My surgeon was no Tony, but he did all right 😁 Glad to be back 🩵

As for Coyote, he's a bit like the Christian God, who planned and implemented exactly the world he wanted and knew everything that would happen in it from its first to its last moment, and then had the audacity to get mad at his own creation from time to time and wound up killing himself for a long weekend. SMH.
|
|
|
Post by Hatredman on Jun 7, 2024 16:22:21 GMT
I think a sudden catastrophic Roanoke colony disappearance is most likely but a natural disaster or an alien attack is always a possibility. The latter is what I'd most like to see in the comic; I'd really dig seeing the Court leadership desperately trying to figure out why these aliens are attacking, why they always wear porcelain gorgets, why they keep saying, "Skibidi" and how they can invent new weapons and increase the production of robots to fight them.

Two things come to mind:
- The Super Nintendo game Earthbound
- Mars Attacks
- The ending of Don't Look Up

Three, three things! Nobody expects the Spanish Inquisition!
|
|
|
Post by pyradonis on Jun 8, 2024 11:15:44 GMT
She does have the special speech bubbles that only NP have, and on some panels you can see she has the typical extra lines on her hands/wrists (most obvious here).

Very interesting. Another data point is how the new people and Omega look in the ether. In 2825, you can see the chips in the new people, near where the heart would be. In 2853, Omega doesn't have one. That makes sense, since she never was a robot...

"my body has long since died, but the distortion allowed me to form a new one." –Omega (2947)

This makes me think that she didn't go through the process that the new people underwent to obtain a body. She is also depicted differently from the other NP there. But if she really looked different in the Ether, I would have expected Annie to notice that, so maybe it was just an artistic choice...?
|
|
|
Post by drmemory on Jun 8, 2024 17:17:42 GMT
We don't know for sure that Omega has an NP body. It could be a normal human body, or even something entirely different, and she has just put on an NP sweater to blend in more easily. It probably isn't an official, Kat-created NP body, but I'll be surprised if she didn't use Kat's factory to make it. She wouldn't need any help moving her mind into it so wouldn't need the usual procedure.
Part of why I say this is that we haven't seen her do anything resembling shape-shifting. Another part is that her NP or NP-like body strongly resembles her original form. It just makes sense to me that she would have used Kat's manufacturing capabilities and form-selection software to make that body.
It has to be easier for Omega to hack Kat's factory than for Kat to hack Omega's mind!
|
|
|
Post by drmemory on Jun 8, 2024 17:20:06 GMT
I'm hoping that Omega will be able to keep the body after the distortion shuts down. If she made herself an NP body, as I believe, this should not be a big issue.
Unless the court objects, of course. Which could happen. For example, if they don't agree that her obligation to them has been met...
|
|
|
Post by blahzor on Jun 8, 2024 19:41:55 GMT
I mean, Omega and the NPs are similar enough in ether vision; what makes her stand out is how detailed her ether version is. I guess this is to show, in retrospect, that her entire body is an ether creation and there are no differences.
|
|
|
Post by Hatredman on Jun 14, 2024 18:28:59 GMT
If it works, they should already know.

Probably? Supposing the new Omega device has a basket of rules for making predictions from inputs which is modeled from Omega's perceptions and consciousness of same, if it has been making accurate predictions for some time in a small closed system it should in theory be able to do so indefinitely. The problem is some of the rules aren't really rules. They're shortcuts developed from deep and long-term observation of Omega. They're mostly based on causality... but the GKU doesn't entirely work on causality. As time passes some of these will cause the device to begin to make wrong predictions. They can modify the rules through continuous verification and synthesis of new shortcuts, and presumably the new Omega device already does that, and if they do that enough it should continue for a very long time before something that the AI can't compensate for comes up...

You just described a Large Language Model (LLM). So Omega is ChatGPT's cousin?
|
|
|
Post by imaginaryfriend on Jun 14, 2024 20:28:49 GMT
Probably? Supposing the new Omega device has a basket of rules for making predictions from inputs which is modeled from Omega's perceptions and consciousness of same, if it has been making accurate predictions for some time in a small closed system it should in theory be able to do so indefinitely. The problem is some of the rules aren't really rules. They're shortcuts developed from deep and long-term observation of Omega. They're mostly based on causality... but the GKU doesn't entirely work on causality. As time passes some of these will cause the device to begin to make wrong predictions. They can modify the rules through continuous verification and synthesis of new shortcuts, and presumably the new Omega device already does that, and if they do that enough it should continue for a very long time before something that the AI can't compensate for comes up...

You just described a Large Language Model (LLM). So Omega is ChatGPT's cousin?

The GCU works on a continuum between matter and ether, causality and metaphor, so yeah, sort of, though the ether presents an additional problem that an IRL LLM wouldn't face: changes that appear spontaneous and inexplicable within the system. Sure, people communicate without using devices, but in theory someone could collect that data and pour it in; it's understandable and happens for reasons. The ether has retroactive causality some of the time and does things for reasons that no normal person in "reality" can understand.

If they've got a perfect snapshot of the start of the closed system (and Omega should have been able to verify the Omega device's starting set, assuming they allowed it) and they do a lot of deep, hi-res sampling and verification, then all things being equal they should be able to model the closed system on the really-real new world to such an extent that they only rarely have to change anything in the model. However, the farther they get in time, the farther they get from the perfect snapshot and the more the "rules" used to make predictions degrade. Catching issues early helps, but errors are going to start creeping into wherever the sampling offers shadow and start breeding, and the more the rules change, the harder it is to roll forward and back in the model. It's not impossible, but the rules will get more and more convoluted, and the amount of computational power needed to prove whys and make predictions successfully will increase and increase.

This isn't just about making correct predictions; the predictions have to be useful. Even if you've got a set of rules that's making unfalsifiable predictions, if they're useless it doesn't matter.
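To make the "rules get more convoluted" point concrete, here's a throwaway Python toy (all of it invented, nothing canonical): the device starts with one clean causal rule, and every time verification catches a miss caused by something it can't see, it bolts on a special-case patch instead of finding the real cause. The pile of patches only ever grows, and every future prediction has to drag it along, which is my crude stand-in for the computational cost climbing:

import random

random.seed(7)

def base_rule(state):
    """The clean causal rule the device starts with."""
    return (state * 31 + 7) % 100

def true_next(state):
    """The real system: follows the rule, except for a rare deviation the rule can't explain."""
    return (base_rule(state) + 1) % 100 if random.random() < 0.02 else base_rule(state)

patches = {}   # special-case overrides bolted on whenever verification catches an error
state = 0
for t in range(5000):
    predicted = patches.get(state, base_rule(state))
    actual = true_next(state)
    if predicted != actual:
        patches[state] = actual            # "fix" the rule with yet another special case
    state = actual
    if (t + 1) % 1000 == 0:
        print(f"after {t + 1} steps: {len(patches)} special-case patches in the rule set")

Swap the 2% deviation rate for anything nonzero and you get the same shape, just on a different timescale.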
|
|