74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

Ideas and games that are not yet publicly in production. This forum also contains the pre-2012 archives of the Works in Progress forum.
Kuroi_Usagi
Regular
Posts: 40
Joined: Thu Sep 03, 2015 1:12 am
Projects: Princess Otome, Serumeito
Organization: Self
Soundcloud: ragingwhiterabbit

74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#1 Post by Kuroi_Usagi » Sat Jan 27, 2018 6:14 am

Setting
The story is set sometime around twenty to thirty years from now. Social service and civil service androids are a common sight, with three in every five households possessing at least one android.

Our protagonist is Bonnie Tadahashi: Chief Software Engineer and Strategist at the android manufacturing corporation Sovereign Firm, head of its R&D department by day, and beer guzzler by night. She wants to close that gap and take their androids to the next stage, as weaponized agents of the United States military.

In order to achieve that, Bonnie and her team have developed an update to the operating system that would improve their androids for combat. The update enables the capacity to learn and adapt through a process that organizes relevant and redundant data. However, their plans take a turn when androids begin to demonstrate odd behavioral patterns.
[Art by: Hyuei http://bit.ly/2DOJLBU]


The idea

I was mostly inspired by Do Androids Dream of Electric Sheep?, NieR:Automata, Mass Effect, and smart devices such as Google Home, Alexa, and Cleverbot. The story is an attempt to tackle the moral issues of dealing with artificial and virtual intelligence through the lens of people within their field of expertise. I also wanted to follow the thread of the modern perception of AI (i.e., "robots will take over the world"; entertainment has sensationalized them to appear subjectively evil and out to enslave or preserve mankind by any means) and weave it into how my protagonist reacts to such a thing happening.

I have written half the script so far, and wanted to share with you the first 16 pages, which would, if I were to continue with this project, be the demo release.

https://docs.google.com/document/d/1R4s ... sp=sharing

I want to get people's opinions on how I should approach it as I continue writing this script. What pitfalls should I avoid that other existing works dealing with AI have fallen into? Is there any literature you would suggest I read before continuing?
"I've always seen failure as a possibility, but never as an option."
Discord: Bunny of Wrath#3201

arachni42
Veteran
Posts: 341
Joined: Mon Feb 25, 2013 6:33 pm
Organization: no, I'm pretty messy
Location: New York

Re: 74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#2 Post by arachni42 » Sun Jan 28, 2018 9:11 pm

I think your idea has potential. It's a good sign you're paying attention to possible pitfalls in dealing with the subject. I think the greatest difficulty is in keeping it interesting, as it's a popular subject these days.

I skimmed your script and did have some thoughts.

First, this feels like less than half the script. This isn't inherently a problem, but it may take you longer to finish than you anticipate. You've said the real meat of it is supposed to be the moral issues and perception of AI. What you have so far is focused on worldbuilding, establishing the background in which the plot takes place, including the role AI has in the world and how different people view it. This is essential to the story... but it's not the story itself. Yes, there have been signs of unexpected behaviors from androids, but this is pretty much a given in any story dealing with AI. (Moreover, because it's in a story, the unexpected behavior is expected by the reader to be significant.) You're to the point of introducing the plot, but the plot is not too far along yet.

I will mention that the script uses the term AI wrong. Artificial intelligence just means that a program or robot or whatever works in a seemingly intelligent way, but it's not what we call "intelligence" because a given AI only works for a specific purpose. Hence, it's artificial. The concept you're getting at is REAL intelligence. However, I think the thing people are most concerned with is sentience -- regardless of whether it's intelligent, can it feel pain, pleasure, boredom, etc? Consciousness. It's separate from intelligence, though the two are both issues when we talk about making humanlike AIs.

I think the worldbuilding concepts you have are solid. I like the backdrop of the weaponization of AI. But you'd benefit from more research on the state of AI, especially when it comes to a test for intelligence. The purpose won't be to make large changes to the script, but you can think of worldbuilding in a story as like an iceberg. Most of what you learn won't show up in the script, but the tip of it that does will have more depth.

I had a computer science professor who did research on computer creativity. He wrote a computer AI that could write a short story. The result was pretty uninspiring, and his conclusion was that computers are not, and CANNOT be, truly creative. I mean, he had written a program that looked at existing (human) short stories, "learned" what the elements are, and put together something based on the inputs it had.

I also had a sociology professor who pointed out that if that's the standard, humans cannot be "truly creative" either; our outputs may be more complex and more appealing to other humans, but they are very much a culmination of the ideas, stories, and other inputs we've been exposed to throughout life. Personally, I felt that the comp sci professor was not a very good writer himself and probably couldn't write a good story, much less write an AI that writes a good story. I mean, good writing doesn't come easy even for a human.

This was 18 years ago. AIs have gotten far more advanced, but there is still a very big pitfall: garbage in, garbage out. Many modern AIs are based on a neural network model, trained using massive datasets. They learn from the data. However, what they learn may not be what you intended. There's a story about an AI that was designed to learn how to identify camouflaged tanks in a picture. It seemed to work well, but then failed in further tests. Why? Because the pictures it was trained on had tanks taken on cloudy days and non-tank pictures taken on sunny days. The AI hadn't learned how to identify tanks. It had learned how to distinguish cloudy days from sunny days.
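The tank story can be reproduced in miniature. Here's a toy sketch (the numbers are invented for illustration, not from any real dataset): each "photo" is reduced to a single brightness feature, tanks happen to appear only in dark/cloudy training photos, and a lazy brightness-threshold "learner" aces training yet collapses the moment weather and tanks stop being correlated.

```python
import random

random.seed(0)

# Toy "images" reduced to one feature: brightness. In the training set,
# every tank photo was (by accident) taken on a cloudy (dark) day.
def make_example(has_tank, cloudy):
    brightness = random.uniform(0.0, 0.4) if cloudy else random.uniform(0.6, 1.0)
    return brightness, has_tank

train = [make_example(True, cloudy=True) for _ in range(50)] + \
        [make_example(False, cloudy=False) for _ in range(50)]

# A lazy "learner": one brightness threshold picked from the training data.
threshold = sum(b for b, _ in train) / len(train)
predict = lambda brightness: brightness < threshold  # dark => "tank"

# Training accuracy looks perfect...
train_acc = sum(predict(b) == label for b, label in train) / len(train)

# ...but on new photos where weather and tanks are uncorrelated (here,
# flipped), the model turns out to have learned weather, not tanks.
test = [make_example(True, cloudy=False) for _ in range(50)] + \
       [make_example(False, cloudy=True) for _ in range(50)]
test_acc = sum(predict(b) == label for b, label in test) / len(test)

print(train_acc, test_acc)  # perfect on training, 0% on the flipped test set
```

The point of the sketch is that nothing in the training signal distinguishes "learned tanks" from "learned cloud cover"; only data the model wasn't trained on exposes the difference.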

There are similar concerns about AIs like Watson. Watson has advanced enough language processing to win Jeopardy. There are several ways IBM has been marketing this technology. But last fall a scathing critique was released about one of its pursuits: cancer diagnosis and treatment. Human doctors require years of training and don't have the time to read and keep up with every medical paper published. So the idea is that a trained AI could "read" the papers and identify better, evidence-based treatments than human doctors can. Of course, even from the start there is a chicken-and-egg problem. Watson has to be trained by human doctors, so the results are going to be... the same things human doctors come up with. And that's only the surface of the rabbit hole. (Medicine is *incredibly* complicated.) IMO, AI is not ready to tackle medicine. Not even close. (I do believe it will be... in the future. MUCH farther in the future than your story.)

If I were going to try to convince someone that their seemingly smart AI that beat humans at Jeopardy was not really intelligent, I'd use examples like that. Just because something has unexpected behavior doesn't make it intelligent or sentient. That's the kind of thing your characters should hammer home for any customers who make claims about the humanity of their androids.

As far as literature, if you haven't seen the TV series Westworld, you should definitely check it out. (I haven't read the book, but the TV series is much more recent and has a modern view.)
Also, this xkcd: https://what-if.xkcd.com/5/

I hope that helps! :D


Re: 74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#3 Post by Kuroi_Usagi » Mon Jan 29, 2018 3:51 am

arachni42 wrote:
Sun Jan 28, 2018 9:11 pm
I think your idea has potential. It's a good sign you're paying attention to possible pitfalls in dealing with the subject. I think the greatest difficulty is in keeping it interesting, as it's a popular subject these days.

I skimmed your script and did have some thoughts.

First, this feels like less than half the script. This isn't inherently a problem, but it may take you longer to finish than you anticipate. You've said the real meat of it is supposed to be the moral issues and perception of AI. What you have so far is focused on worldbuilding, establishing the background in which the plot takes place, including the role AI has in the world and how different people view it. This is essential to the story... but it's not the story itself. Yes, there have been signs of unexpected behaviors from androids, but this is pretty much a given in any story dealing with AI. (Moreover, because it's in a story, the unexpected behavior is expected by the reader to be significant.) You're to the point of introducing the plot, but the plot is not too far along yet.
Sorry for the confusion! The script is halfway along at 19k words, but the passage in the link is only the first 7k words of it. I didn't want to link the whole thing for reasons.
I will mention that the script uses the term AI wrong. Artificial intelligence just means that a program or robot or whatever works in a seemingly intelligent way, but it's not what we call "intelligence" because a given AI only works for a specific purpose. Hence, it's artificial. The concept you're getting at is REAL intelligence. However, I think the thing people are most concerned with is sentience -- regardless of whether it's intelligent, can it feel pain, pleasure, boredom, etc? Consciousness. It's separate from intelligence, though the two are both issues when we talk about making humanlike AIs.
Thanks for clearing that up, it seems I have been using AI, sentience, and intelligence interchangeably and that is a major mistake on my part!
I also had a sociology professor who pointed out that if that's the standard, humans cannot be "truly creative" either; our outputs may be more complex and more appealing to other humans, but they are very much a culmination of the ideas, stories, and other inputs we've been exposed to throughout life. Personally, I felt that the comp sci professor was not a very good writer himself and probably couldn't write a good story, much less write an AI that writes a good story. I mean, good writing doesn't come easy even for a human.

This was 18 years ago. AIs have gotten far more advanced, but there is still a very big pitfall: garbage in, garbage out. Many modern AIs are based on a neural network model, trained using massive datasets. They learn from the data. However, what they learn may not be what you intended. There's a story about an AI that was designed to learn how to identify camouflaged tanks in a picture. It seemed to work well, but then failed in further tests. Why? Because the pictures it was trained on had tanks taken on cloudy days and non-tank pictures taken on sunny days. The AI hadn't learned how to identify tanks. It had learned how to distinguish cloudy days from sunny days.
The idea of "creativity" is the source of the concept behind the "Derringer Test," which, similar to the AI identifying the tanks, is an assessment of an android's capacity to identify subtext. So when you told me about the comp sci professor's research and an AI writing a short story, I could see I am somewhat on the right track. However, as the socio professor points out, there are other factors involved in how natural intelligence receives data compared to artificial intelligence. I see it like this: an AI can't precisely understand how a human feels about the Uncanny Valley effect.

Thanks for taking the time to write all that you did! And I think I will check out Westworld, seeing as one of my shows is now on break ;_;. I also thought about watching Ex Machina, but I didn't know whether its story leaned too heavily on the evil-robots trope (which I don't want in my project at all) or rightly portrays robotic sentience under the duress of self-preservation.
"I've always seen failure as a possibility, but never as an option."
Discord: Bunny of Wrath#3201


Re: 74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#4 Post by arachni42 » Mon Jan 29, 2018 10:05 pm

Kuroi_Usagi wrote:
Mon Jan 29, 2018 3:51 am
Sorry for the confusion! The script is halfway along at 19k words, but the passage in the link is only the first 7k words of it. I didn't want to link the whole thing for reasons.
Ahh, yup, that makes more sense!
Kuroi_Usagi wrote:
Mon Jan 29, 2018 3:51 am
The idea of "creativity" is the source of the concept behind the "Derringer Test," which, similar to the AI identifying the tanks, is an assessment of an android's capacity to identify subtext. So when you told me about the comp sci professor's research and an AI writing a short story, I could see I am somewhat on the right track. However, as the socio professor points out, there are other factors involved in how natural intelligence receives data compared to artificial intelligence. I see it like this: an AI can't precisely understand how a human feels about the Uncanny Valley effect.
Yeah, I like the "Derringer Test" idea. I think it works well for the dismissiveness the builders have about claims.
Although it occurs to me, too, that a test that actually shows something (like that the android can't come up with new answers) might be different from a test that is convincing to a customer. I think even after the test they'd want an alternate explanation of why it was feeding the birds. That's just a thought, though. :)

Uncanny Valley... I wonder what it would mean for an android to "understand" that. Humans have a lot of dedicated brain hardware going into recognizing faces. (I don't know, does a human with prosopagnosia really understand it, either?) Our experience of the Uncanny Valley effect is very helpful to our survival -- IRL, if something is "off," it's a red flag signaling that it might be a threat.

I've had pets that noticed the difference between me looking at them and me staring at them. (I'll start to stare if I notice a potential health problem.) They'd act self-conscious, and then go try to hide. It surprised me; it was more sophisticated than I expected. But it made sense from an evolutionary point of view. You're probably right that AIs will never experience something in that way -- why would they need to?
Kuroi_Usagi wrote:
Mon Jan 29, 2018 3:51 am
Thanks for taking the time to write all that you did! And, I think I will check out Westworld, seeing as one of my shows is now on break ;_;. I also thought about watching Ex Machina, but I didn't know if its story leaned too much on the evil robots trope (which I don't want in my idea-project at all) or rightly portrays robotic sentience under the duress of self-preservation.
IMO Ex Machina doesn't lean much on the "evil robots" trope; there's room for interpretation. I don't think it would add much to your approach on this project, but you may find it worth watching anyway. It gets a lot of points for atmosphere.

BTW, are you familiar with the OpenWorm project?


Re: 74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#5 Post by Kuroi_Usagi » Wed Jan 31, 2018 11:26 pm

arachni42 wrote:
Mon Jan 29, 2018 10:05 pm
Uncanny Valley... I wonder what it would mean for an android to "understand" that. Humans have a lot of dedicated brain hardware going into recognizing faces. (I don't know, does a human with prosopagnosia really understand it, either?) Our experience of the Uncanny Valley effect is very helpful to our survival -- IRL, if something is "off," it's a red flag signaling that it might be a threat.

I've had pets that noticed the difference between me looking at them and me staring at them. (I'll start to stare if I notice a potential health problem.) They'd act self-conscious, and then go try to hide. It surprised me; it was more sophisticated than I expected. But it made sense from an evolutionary point of view. You're probably right that AIs will never experience something in that way -- why would they need to?
This brings up an interesting point, because eventually in the story I want to show androids interacting with each other. In what I've written so far, their interactions are only indirectly implied. If an android has been fitted to look human, how should another android recognize it as a fellow android and distinguish it from a human, in spite of its limited cognitive functions?
BTW, are you familiar with the OpenWorm project?
I have not. This would be my first time hearing about it.
"I've always seen failure as a possibility, but never as an option."
Discord: Bunny of Wrath#3201


Re: 74901 (Title WIP) [Sci-fi, mystery, thriller, KVN]

#6 Post by arachni42 » Wed Feb 07, 2018 12:57 am

Kuroi_Usagi wrote:
Wed Jan 31, 2018 11:26 pm
This brings up an interesting point, because eventually in the story I want to show androids interacting with each other. In what I've written so far, their interactions are only indirectly implied. If an android has been fitted to look human, how should another android recognize it as a fellow android and distinguish it from a human, in spite of its limited cognitive functions?
That's an interesting question... I hadn't thought about that. I mean, they could communicate using tools humans don't have, such as wireless networking. They could send out signals humans couldn't detect without the equipment for it.

Of course, networking does inherently open vulnerabilities. I mean, human communication opens up "vulnerabilities," too. We can lie to and trick each other, but it's more limited by our lack of telepathy, haha. Androids would have added power (to communicate in ways we can't, and FASTER) and added exposure to being hacked... whether by other androids or by humans. Hmm, that would be a reason not to be open to networking after all, but then they couldn't take advantage of it as a tool. They would need to "know" (detect) whom to trust.

When it comes right down to it, they would face the same issue humans face, that it's advantageous to reveal information about themselves to other entities (who might share their interests and help them), but it's also risky because there are entities who might be out for their own interests and harm them.
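If the story ever needs the mechanics on screen, one real-world pattern for "detecting whom to trust" is a challenge-response handshake over a shared secret: the verifier sends a random nonce, and only an entity holding the key can produce the right answer, without the key ever crossing the wire. A quick sketch (the key name and the idea that Sovereign Firm androids share a factory secret are invented for illustration):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret installed at the factory.
FACTORY_KEY = b"sovereign-firm-shared-secret"

def respond(challenge, key=FACTORY_KEY):
    """Prove knowledge of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(challenge, response, key=FACTORY_KEY):
    """Check a response against what the key holder should have sent."""
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # verifier broadcasts a fresh random nonce
print(verify(challenge, respond(challenge)))            # fellow android
print(verify(challenge, respond(challenge, b"guess")))  # impostor fails
```

Notably, this has exactly the trade-off described above: revealing you can answer identifies you to allies, but a single stolen key compromises every android that shares it.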
Kuroi_Usagi wrote:
Wed Jan 31, 2018 11:26 pm
BTW, are you familiar with the OpenWorm project?
I have not. This would be my first time hearing about it.
Basically it's an open-source computer project to completely model the nervous system of a simple roundworm that only has 302 neurons. They're just focusing on movement. FOR NOW ;)
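For a feel of what "modeling a neuron" means at the smallest scale, here is a minimal leaky integrate-and-fire cell -- the kind of simplified unit such simulations wire together by the hundreds. (This is only an illustrative sketch; OpenWorm's actual cell models are far more detailed.)

```python
# Minimal leaky integrate-and-fire neuron: membrane "voltage" accumulates
# input each time step, constantly leaks away, and emits a spike (then
# resets) whenever it crosses a threshold.
def simulate(input_current, threshold=1.0, leak=0.9, steps=50):
    voltage, spikes = 0.0, []
    for t in range(steps):
        voltage = voltage * leak + input_current  # integrate input, leak charge
        if voltage >= threshold:                  # fire and reset
            spikes.append(t)
            voltage = 0.0
    return spikes

print(simulate(0.3))   # steady input: a regular spike train
print(simulate(0.05))  # too weak: the leak wins and it never fires
```

The interesting part is the qualitative behavior: below a certain input, the leak guarantees silence no matter how long you wait, which is one reason even 302 of these in a network can do nontrivial signal filtering.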
I, Miku (NaNoRenO 2014)
Vignettes (NaNoRenO 2013)