Comic Discussion > QUESTIONABLE CONTENT
WCDT strips 3836-3840 (24 to 28 September 2018)
jwhouk:
--- Quote from: Morituri on 30 Sep 2018, 09:09 ---But why a light-rail commuter system has such a need for its own police force, in addition to all of the above, is a bafflement to me. They're essentially serving in the role of private security, except that because BART is owned by a government institution they have to be officially police?
--- End quote ---
Drivers can only do so much to curb unruly (read: drunk/high/mentally incapacitated) passengers. And since much of the BART system is in areas inaccessible to other law enforcement agencies, having security guards on each train makes sense. Valley Metro here in Phoenix has such a system as well, and IIRC there may have been a huge overlap in jurisdictions when a high-speed car chase ran cross-valley from Phoenix into Tempe earlier this year that involved the driver running on the light rail tracks for a good portion.
That would mean Phoenix police, Tempe police, Maricopa County sheriffs, the Arizona Department of Public Safety (because part of the chase was on the Loop 202 freeway), the Valley Metro security department, and even though it didn't reach the campus proper, it could theoretically have included ASU campus police as well.
BenRG:
Poll Results Post
What next for Roko?
1. Roko Basilisk P.I. (with May as her capable and tough receptionist) - 15 (28.8%)
2. SpookyBot makes a personal appearance to tell her about ways she can make a difference - 9 (17.3%)
3. Asking Elliott (stammering and blushing) if The Secret Bakery has any open jobs working with b... b... bread - 7 (13.5%)
4. Bubbles (clued in by May) gives her the 'one good Synthetic' speech - 6 (11.5%)
5. May personally pleads with her to stick with the force because they need some good cops - 5 (9.6%)
=6. A prolonged whodunnit story guest-starring Clinton, Melon and Emily - 4 (7.7%)
=6. Door security at The Horrible Revelation (which morphs into the prolonged whodunnit arc) - 4 (7.7%)
=8. "Basilisk, good job on the fight club case. Here's your gold detective's badge!" - 1 (1.9%)
=8. The Kerouac Option - She quits, buys a bike and rides off to find herself - 1 (1.9%)
x. Other (please specify in a comment)
Well, a fairly even distribution this week, but I find it interesting that a lot of readers would like to see Roko still performing an investigative/enforcer role of some sort, even if it is closer to being a vigilante (maybe acting as SpookyBot's enforcer). Maybe there is still room for a good cop in the cast?
P.S.: I've got nothing right now, so someone else will need to start next week's WCDT. Sorry about that!
Nycticoraci:
OK, I just have to say that I'm feeling a hint of smugness that both of the last two weeks' #1 poll results were ideas I put forward.
_____
Also, in case any of the people who were discussing how AIs learn things are still mulling things over:
I'd posit that the AGI seen here would use machine learning techniques similar to (but more advanced than) those we tend to use for today's ANI.
AGI= Artificial General Intelligence, ANI = Artificial Narrow Intelligence.
It's worth understanding that there is a distinct difference (as has been stated above) between storing a knowledge bank and understanding what to do with that knowledge.
That understanding is what distinguishes an intelligence from a textbook, wikipedia page, or other "file". The challenge is making the AI understand what it's making a decision on, when it's making the decision, what the options are, and which decisions it should be making. Often this boils down to knowing how to locate "useful" data.
There are a few techniques, and they can be either "unsupervised" or "supervised"; in the supervised case, the intelligence is guided toward the correct decisions.
The basic idea, though, is that when trying to teach it how to do something, you give it the situation as many times as possible and let it see the patterns. Depending on the type of situation, you might be asking it to recognise something, to make an A/B decision, or something more complex. In general, you'd either give it the correct solution and let it attempt to figure out Why it's correct, or you'd let it draw its own conclusions and, possibly after it's built up a number of candidate patterns, tell it which ones to disregard (if any).
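To make the supervised/unsupervised distinction concrete, here's a deliberately tiny sketch in Python. All the data and both algorithms (a nearest-centroid classifier and a bare-bones 1-D k-means) are invented for illustration; real ML systems work on far richer features, but the contrast is the same: one method is told the right answers, the other has to find structure on its own.

```python
# Labeled examples: (feature, label). Supervised learning is told the answer.
labeled = [(1.0, "A"), (1.2, "A"), (0.8, "A"), (5.0, "B"), (5.3, "B"), (4.9, "B")]

def supervised_classify(x, data):
    """Nearest-centroid classifier: average each label's features, pick the closest."""
    centroids = {}
    for label in {lbl for _, lbl in data}:
        vals = [f for f, lbl in data if lbl == label]
        centroids[label] = sum(vals) / len(vals)
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def unsupervised_cluster(points, k=2, iters=10):
    """Bare-bones 1-D k-means: finds groups with no labels at all."""
    centers = [min(points), max(points)]  # crude initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return sorted(centers)

print(supervised_classify(1.1, labeled))                 # lands near the "A" cluster
print(unsupervised_cluster([f for f, _ in labeled]))     # two centres emerge, unlabelled
```

The supervised routine exploits the labels directly; the unsupervised one recovers roughly the same two groups but has no idea what to call them — which is exactly why someone usually has to step in afterwards and tell it which patterns matter.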
As an example, for my thesis recently, I was using object recognition software to teach an AI how to locate sharks in aerial view photographs of shallow waters. It not only had to locate them, but distinguish them from other possible objects, like boats, dolphins, people, seals, etc.
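The shark-spotting pipeline described above can be sketched very roughly as "scan candidate regions, classify each, keep confident target hits". In the snippet below, `classify_patch` is a pure stand-in stub with hand-invented features — the real thesis work would use a trained object-recognition model on pixel data — but the surrounding detection loop has the same shape.

```python
# Stand-in for a trained classifier. The feature flags and rules here are
# invented purely to show the pipeline's structure, not real detection logic.
def classify_patch(patch):
    if patch["wake"]:
        return "boat", 0.8
    if patch["dark"] and patch["elongated"]:
        return "shark", 0.9
    return "background", 0.6

def detect(patches, target="shark", threshold=0.7):
    """Scan every candidate region; keep indices of confident target detections."""
    hits = []
    for i, patch in enumerate(patches):
        label, conf = classify_patch(patch)
        if label == target and conf >= threshold:
            hits.append(i)
    return hits

aerial_patches = [
    {"dark": True,  "elongated": True,  "wake": False},  # shark-like shape
    {"dark": True,  "elongated": True,  "wake": True},   # boat leaving a wake
    {"dark": False, "elongated": False, "wake": False},  # open water
]
print(detect(aerial_patches))  # only the first patch survives
```

The hard part in practice is, of course, the classifier itself — distinguishing a shark from a dolphin or a seal is exactly the kind of "which patterns matter" problem the training loop above exists to solve.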
In practice, an AGI would likely be doing similar things, except spread over every situation it encounters, rather than trying to solve 1 task.
Years ago, in some of the dialogue, Jeph described the initial spark of life as having been discovered by chance. Scientists were looking for it, but had no idea what would create it, and still (I assume) don't know exactly what it is about that combination/arrangement of processing patterns in a neural network that creates the sentience. However, unlike an organic brain, a machine brain can be "paused" far more easily without affecting ongoing computation. This would allow every single piece of the extremely complicated network to be mapped and duplicated.
This would mean that every AI (except possibly SpookyBot) begins "life" as a cloned "seed". Variances in learning after that point would then create the unique individuals that we see in the comic. It's likely that some portion of that learning happens before the AI is put into a mobile physical body.

While not an exact analogue, I'd describe it as quite similar to human development. Initially connected to the "life support" of a mother being, the mind develops to handle all autonomous functions (which could presumably be downloaded as pre-learned subroutines), as well as the capacity to Learn a huge amount later. Eventually its awareness and understanding can be developed to an infancy, where the real learning and experience can begin.

I doubt you'd see robots running around as true infants, stumbling over everything, as most of those skills could probably be learnt once on one chassis and then transferred as firmware to all duplicate models. Any knowledge guaranteed to be common could probably be handled in a similar way, but they may choose to learn Everything manually, to maximise unique environmental learning and avoid I, Robot-style clones.
Just a few thoughts from a computer scientist/electronics engineer.
OldGoat:
--- Quote from: sitnspin on 29 Sep 2018, 21:01 ---There are plenty of books whose intent and experience are wholly dependent upon absorbing the text in order. The sequential process of reading them reveals things and elicits particular responses based on the timing of when specific elements are presented. Just downloading the information directly into your brain would fail to provide the experience of the book. Often it is not the information that matters as much as how the information is presented
--- End quote ---
Unquestionably true for us meat noggins, but Roko and company are robo-noggins. As Momo demonstrated, at least some AIs do it the slow, human way for pleasure, but even they no doubt retain the ability to take info in the robot-conventional way (UltraMegaZippityfastUSB, etc) if the situation warrants.
sitnspin:
They can absorb the information, sure, but that was not my point. My point was that simply downloading the data is insufficient. It is the process, in these cases, that's important. For purely factual information downloading would be fine, but for a lot of fiction, it is the process by which the narrative unfolds that provides the intended experience of the book far more than the events themselves.