In 2050, your lover may be a ... robot


Will you consider having a 'relationship' with a robot?

Yes, I'll be there! - 36 votes (33%)
No, no matter what! I want real flesh n' bones! - 24 votes (22%)
Not now, but after a certain degree of 'human'-ness technology is reached - 29 votes (26%)
Not now, but when it's certain they can have souls - 21 votes (19%)

Total votes: 110

sciencewarrior
Veteran
Posts: 356
Joined: Tue Aug 12, 2008 12:02 pm
Projects: Valentine Square (writer), Spiral Destiny (programmer)
Location: The treacherous Brazilian Rainforest
Contact:

Re: In 2050, your lover may be a ... robot

#91 Post by sciencewarrior »

I think robolitionists will have a much harder time than robohumpers. And for good reasons. What happens when you give someone who can reproduce at will and live forever the same civil rights as flesh-and-bone humans? When is deleting a copy murder?
Keep your script in your Dropbox folder.
It allows you to share files with your team, keeps backups of previous versions, and is ridiculously easy to use.

Aenakume
Regular
Posts: 182
Joined: Mon Aug 11, 2008 4:38 am
Projects: Arts... i hate arts -_-
Contact:

Re: In 2050, your lover may be a ... robot

#92 Post by Aenakume »

N0UGHTS wrote:
Aenakume wrote:Today in the 2000s, the most hated minority group in America is atheists...
I always was under the impression that transgendered people were the most stigmatized and hated minority here... And atheists were just "dumb*** skeptics" who "just need a little more time with God." And no, those two quotes did not come from the same person.
i was quoting a Gallup poll from... i think 2004?
N0UGHTS wrote:I agree with Showsni's view... I think most people (when having sexdroids becomes not atypical) would just look at a person with a sexdroid(s?) as "...You're weird," or, "You need help," the latter I suspect is more likely to come from a conservative person. The most negative views would probably come from the people who have a phobia of technology, and those people (surprisingly 0_0) aren't necessarily religious.
Ya totally. But "yucky" is a far cry from "immoral", and "you're weird - i'd never do anything like that" is a far cry from "you're inhuman and deserve to be thrown in jail or executed". Aside from a few fringe loonies, i don't think people will be lynching robohumpers. Those days are past.
sciencewarrior wrote:I think robolitionists will have a much harder time than robohumpers. And for good reasons. What happens when you give someone who can reproduce at will and live forever the same civil rights as flesh-and-bone humans? When is deleting a copy murder?
By that point the laws are going to have to change anyway. It won't be too long before humans can live functionally forever and reproduce at will. We're going to need laws to prevent humans from either cloning themselves ad infinitum or living to 10,000 and having babies every year. So whatever laws we make for machine intelligences will also have to apply to humans.
“You can lead a fool to wisdom, but you cannot make him think.”

User avatar
DaFool
Lemma-Class Veteran
Posts: 4171
Joined: Tue Aug 01, 2006 12:39 pm
Contact:

Re: In 2050, your lover may be a ... robot

#93 Post by DaFool »

Regarding terminology...

robolitionists... these are the anti-dori-kei, correct? As in, abolish robots?
robohumpers... good one! :D :oops:

Regarding homosexuality at 7%: it's most likely increasing, and there are environmental factors (I'm not referring to Wikipedia or studies, just speaking from experience), such as the presence of pollution and overcrowding (a positive correlation). Some farmers are worried that even their animals (bulls, etc.) are increasingly homosexual, which is bad for their meat business.

With more and more immigration and cross-emigration, places everywhere are becoming more cosmopolitan, so I'd imagine they'd start to resemble Singapore -- multi-language signs, "politically-correct" diversity endorsed by the state. As differences in people are tolerated, the laws might actually turn petty to compensate (no chewing gum, for instance).

Right now I think pedophiles are the ones most persecuted (and deservedly so for the most part). But probably heterosexual white men would also feel marginalized, since the common assumption is that they're "privileged" even when they're no longer so. I'm encountering more and more pro-hetero-white-men sites, and actually, I agree with many of the arguments -- whenever you have a society, you have people who want to preserve the core culture. Dilute it too much, and you lose out.

In this light, I can envision a time when otakus, technologists, and plain 'robohumpers' would want to start their own country with the most pro-technology laws. Then the countries they seceded from would be free to enact all the anti-cloning laws they want. There would still probably be civil war, but I don't think it'd be French Revolution style. The politics of the future won't be based on geography and ethnicity, but on ideals and culture.

N0UGHTS
Miko-Class Veteran
Posts: 516
Joined: Mon Jul 28, 2008 7:47 pm
Location: California, USA
Contact:

Re: In 2050, your lover may be a ... robot

#94 Post by N0UGHTS »

DaFool wrote:Right now I think pedophiles are the ones most persecuted (and deservedly so for the most part).
Dang. Now I feel like debating about paedophilia, paedophiles, the causes of paedophilia, how it's related to childhood abuse and neglect, psychological defense mechanisms, coping systems, how repeated failed attempts to connect with people emotionally can subconsciously lead to other outlets (violent, or... sexual ones), neuropsychology, bla-di-bla-di-blah, but I won't. Instead, I'll leave a link to this article and recommend reading this part of it if you're too lazy to read the whole thing.

I can't imagine hordes of Haruhi fans (or any other kind of otaku) and technology advocates (technologists are... different from what you're going for), or even robo-humpers/people who want to get intimate with life-size PVC figures (of Haruhi XD or maybe a bishounen, Zack Fair with his old 'do, anyone?) starting rebellions and risking their lives to found new nations.

"...reports of 79 cosplayers and 34 civilians injured. No civilians have died in this protest, though 17 cosplayers have lost their lives. We'll be covering this in the next hour. We know you have many choices to watch and we thank you for tuning in to BBC World News."
No. Just... no. I can't imagine peaceful protests from these guys either. It's almost like Trekkies lobbying to have a Star Trek belief system officially recognized by the US Government as a religion. Even so, the words "Star Trek" would be banned, anyway. Guahaha...

Maybe... I guess maybe technology advocates and the people who say that sex with a droid harms no one, bler-di-bler, might start movements supporting androids. Maybe like "Sexdroid Pride Parades" or something. But fighting for their own governments and nations? Nah. I can't imagine someone (who is mentally stable :p) devoting so much, risking their life, and possibly losing everything they have just so they can have sex with a robot... Or more, if they can afford it. :mrgreen: Wouldn't I like to be that rich...
World Community Grid
"Thanksgiving is a day for Americans to remember that family is what really matters.
"The day after Thanksgiving is when Americans forget that and go shopping." —Jon Stewart
Thank you for playing Alter Ego. You have died.

Wintermoon
Miko-Class Veteran
Posts: 701
Joined: Sat May 26, 2007 3:41 pm
Contact:

Re: In 2050, your lover may be a ... robot

#95 Post by Wintermoon »

Let's not confuse pedophilia (the sexual attraction to children) with child molestation (the sexual abuse of children). One is a sexual orientation, the other is a horrible crime. The former does not necessarily lead to the latter, just like the sexual attraction to adult women does not necessarily lead to the rape of adult women.

User avatar
PyTom
Ren'Py Creator
Posts: 16096
Joined: Mon Feb 02, 2004 10:58 am
Completed: Moonlight Walks
Projects: Ren'Py
IRC Nick: renpytom
Github: renpytom
itch: renpytom
Location: Kings Park, NY
Contact:

Re: In 2050, your lover may be a ... robot

#96 Post by PyTom »

N0UGHTS wrote: It's almost like Trekkies lobbying to have a Star Trek belief system officially recognized by the US Government as a religion. Even so, the words "Star Trek" would be banned, anyway. Guahaha...

Code: Select all


               [Fade to: Ship's Cargo Bay. Fry is still beeping.]

               
                                     ZAPP
                         The court is intrigued. Perhaps we could 
                         hear more about these forbidden words 
                          from someone with a sexily seductive 
                         voice.
 
               
               [Nichelle Nichols is about to speak.]

               
                                     TAKEI
                         With pleasure. You see, the show was 
                         banned after the Star Trek wars.
 
               
                                     ZAPP
                         You mean after the vast migration of 
                         Star Wars fans?
 
               
                                     NICHOLS
                         No, that was the Star Wars trek.  By 
                         the 23rd century, Star Trek fandom had 
                         evolved from a loose association of 
                         nerds with skin problems into a full-blown 
                         religion.
 
               
               [On the screen, a service is held at the Church Of Trek.]

               
                                     PRIEST
                          And Scotty beamed them to the Klingon 
                         ship where they would be no Tribble 
                         at all.
 
               
                                     CONGREGATION
                          All power to the engines.

               
                                     NICHOLS
                         As country after country fell under 
                         its influence, world leaders became 
                          threatened by the movement's power.  
                         And so the Trekkies were executed in 
                         the manner most befitting virgins.
 
                         
               
               [On the rim of a volcano two men throw Trekkies into the flames.]
 
               
               
                                     MAN
                          He's dead, Jim!  He's dead, Jim!  He's 
                         dead, Jim!
 
               
                                     NICHOLS
                         Finally, the sacred texts were banned.
 
                         
               
               [The episodes are put inside a torpedo casing.]

               
                                     TAKEI
                         The last copies of the 79 episodes and 
                         six movies were dumped on the forbidden 
                         world Omega 3. Along with that blooper 
                         reel where the door doesn't close all 
                         the way.
 
               
               [As he speaks a ship that looks like an Eagle from Space 1999 
               fires the torpedo. It hits the planet like Spock's coffin hit 
               the Genesis planet in Star Trek: The Wrath Of Khan. The video 
               ends.]
 
               
                                     NIMOY
                         Thus, Star Trek was forever scoured 
                         from human memory.
 
               
                                     BENDER
                         Another classic science-fiction show 
                         cancelled before its time.
 
               
               [Zapp tuts.]

               
                                     ZAPP
                         I've never heard of such a brutal and 
                         shocking injustice that I cared so little 
                         about. Next witness.
Supporting creators since 2004
(When was the last time you backed up your game?)
"Do good work." - Virgil Ivan "Gus" Grissom
Software > Drama • https://www.patreon.com/renpytom

Jake
Support Hero
Posts: 3826
Joined: Sat Jun 17, 2006 7:28 pm
Contact:

Re: In 2050, your lover may be a ... robot

#97 Post by Jake »

Aenakume wrote:i was quoting a Gallup poll from... i think 2004?
The real questions there are who asked Gallup to perform the poll, what they actually wanted to find out, who they asked... and so on. If you perform a poll across 200 members of the Westboro Baptist Church congregation, then you'll probably get a totally different answer to the one you'd get if you asked 200 members of a 9/11 victims' families association, or 200 members of the NRA at a rally. For that matter, if it's a multiple-choice poll (likely), you'll probably get a different answer than you would from a free-response question... 'paedophiles', to use the earlier example, probably weren't included as an option on a multiple-choice poll about minorities. ;-)
Server error: user 'Jake' not found

rocket
Veteran
Posts: 373
Joined: Tue Jul 10, 2007 2:54 am
Projects: Starlight Ep0, Ep1
Location: San Francisco
Contact:

Re: In 2050, your lover may be a ... robot

#98 Post by rocket »

DaFool wrote:Yes, that's the show rocket mentioned, created by the same person who did Aquatic Language and Pale Cocoon, independent shorts that show more intellectual potential than Makoto Shinkai's.
No kiddin'!? I thought it was très Aquatic Language-esque... I hope it turns out to be more filling. AL was a bit of a cheap punch line, though it was short and sweet. Ah well, fast-talking-bot-chan is worth it no matter how much pretension must be endured!
DaFool wrote: (There's a solid reason why developing countries are still 'developing' and it isn't entirely blamed on the IMF: when too many people lie and cheat to each other progress is stunted)
God damn it! You know not everybody's set that damned Firefox min-font-size setting?! Admit it! PyTom put you up to it! *mumble grumble*

i LOLed.

herenvardo
Veteran
Posts: 359
Joined: Sat Feb 25, 2006 11:09 am
Location: Sant Cugat del Vallès (Barcelona, Spain)
Contact:

Re: In 2050, your lover may be a ... robot

#99 Post by herenvardo »

I have just seen this discussion and I'd like to mention some points.
Starting with the original question of the poll:
Will you consider having a 'relationship' with a robot?
Well, simply put, I've had relationships with robots :P Sure, not the kind of relationship the question probably intended to ask about, but simpler ones: on bad days, with no "real" friends available to chat with (i.e., they were at work or at school right then), different chatbots have been quite able to cheer me up. Furthermore, with some friends around, we have sometimes chatted with one such bot and had a fun afternoon with it...

OK, I'll now set aside what the question literally asked, and focus on what it probably intended to ask: I'm not sure if it was asking about long-term, serious romantic relationships, purely sexual relationships, or both; but I see some issues in each case that are quite hard to overcome.

For the easy case (purely sexual), the main issue I find is the uncanny valley already mentioned: I guess (although I might be wrong) that, for most people, a minimum degree of human resemblance would be needed to even consider such relationships. When a droid reaches enough resemblance it may have entered the "uncanny valley" area, and will need to be heavily improved to go beyond it. Furthermore, other aspects would also need to be done right in a non-uncanny, human-like fashion, such as granting the machine a compelling voice and a reasonable behavior that matches the user's expectations (in terms of reasonableness, not predictability). I won't go deeper into this to avoid this post becoming adult-only. Anyway, I think of these issues as solvable: it might take a long time and hard work, but I think it's not beyond reach to produce a perfect replica of human appearance and behavior within the boundaries of a limited scope (that'd be the sexual-relationship scope).

For sentimental, partnership-type relationships the scene becomes much more complex. There are several things to deal with, and dealing with them would raise serious side issues that also need to be solved.

One of the points already mentioned is that some people feel robots would need to have a soul / spirituality for them to become emotional. That is, simply put, out of the reach of technology. Some sort of "artificial soul" (i.e., not a real soul, but an AI intended to emulate the effects of having a soul) might be doable, to a certain degree, but will never be a real soul. There is also the matter of beliefs: one possibility would be to hard-code a preset collection of beliefs into the AI (like saying "this bot will be a Catholic zealot", "that other one an agnostic Protestant", "this one a practicing Muslim", and so on); but that's not real belief. The main alternative, considerably harder to get right, would be a capacity for learning and choosing; but then they'd end up taking empirical science as their religion, since it's the only set of beliefs that can be formally rationalized (and computers can only "think", or rather emulate thought, through rationalization). A third approach might be adding random factors; but adding randomness to such a complex system is guaranteed to be dangerous (we human beings are blatant proof of that).
So, technology by itself is not capable, and never will be, of creating a soul; at least not without some kind of divine intervention, which is independent of actual technology. If that ever happened, if a superior being (whatever it is) imbued a robot with a soul, then I think it should be regarded as an actual living being rather than a machine, and hence should be out of the scope of this discussion.
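
To illustrate the hard-coding option from the beliefs point above, here is a minimal sketch in Python (every name here is invented): "belief" as a lookup table. Whatever such a bot says, nothing resembling conviction is happening inside.

Code: Select all

BELIEFS = {
    "bot_a": "Catholic zealot",
    "bot_b": "agnostic Protestant",
    "bot_c": "practicing Muslim",
}

class PresetBot:
    # "Belief" fixed at the factory: a constant, never examined or revised.
    def __init__(self, name):
        self.creed = BELIEFS[name]

    def answer(self, question):
        # Every answer is filtered through the preset label, never through doubt.
        return "As a " + self.creed + ", my position on that is predetermined."

print(PresetBot("bot_a").answer("Is there an afterlife?"))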

The next point is emotion: could a machine really feel something? OK, the same way they can have artificial intelligence, they might also have artificial emotions. How far are we from achieving this? IMO, quite a bit. The best current example I can come up with would be The Sims games: the emotional engine there, although enough for having some fun with the games, often lacks depth, even though it is limited to interacting with a fairly reduced finite-state universe. Try to take that out into the real world, and you'll get a robot that doesn't have a chance to speak because it's continuously reporting processing errors. What kind of algorithm could ever compute "love"? "hate"? "rage", "friendship", "rancor", "joy", "shock", etc.? There are infinitely many different emotions, which we can't even accurately define in natural language: how could they be defined in a formal language (like a programming language)? And if, in a distant future, technology evolved enough to implement human-like feelings on a machine, what would happen? Spielberg suggested a possible outcome of this, and I think it's quite likely to happen if we try to go in that direction.
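
To make the Sims comparison concrete, here is a minimal sketch in Python of that kind of finite-state "emotional engine" (every name and number is made up, taken from no real game): a few needs that decay each tick, interactions that replenish them, and a "mood" that is nothing but an average. It shows how shallow such a model is next to real emotion.

Code: Select all

class ToySim:
    # A toy "needs and mood" engine in the spirit of The Sims.
    DECAY = {"hunger": 0.04, "social": 0.02, "fun": 0.03}  # loss per tick

    def __init__(self):
        self.needs = {need: 1.0 for need in self.DECAY}  # 1.0 = fully satisfied

    def tick(self):
        # Every need drains a little as time passes.
        for need, rate in self.DECAY.items():
            self.needs[need] = max(0.0, self.needs[need] - rate)

    def interact(self, need, amount):
        # An interaction ("eat", "chat", "play") replenishes one need.
        self.needs[need] = min(1.0, self.needs[need] + amount)

    def mood(self):
        # "Emotion" reduced to an average over three numbers.
        return sum(self.needs.values()) / len(self.needs)

sim = ToySim()
for _ in range(10):
    sim.tick()
sim.interact("social", 0.5)  # chat with a friend
print(round(sim.mood(), 2))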

And last, but not least, the issue of self-awareness: I can perfectly understand some people saying that a robot would need to be self-aware. Honestly, I hope this never happens. Fortunately, we are still far from being able to implement a self-aware machine; but if we ever did, we'd be doomed. Some people might joke about this, but it is a fact that our current inability to implement such a machine is the only thing that keeps stories like Terminator or The Matrix fiction. Just look at it: a self-aware machine with even the simplest emotional implementation couldn't react in any other way than hating and/or fearing mankind; and hence the only reasonable choice would be to eliminate the plague we are (which would, indeed, be a really good thing for the rest of the living beings on this planet, but that's another topic).


In a hypothetical scenario where these issues were solved, yes, I could find myself dating and even marrying a robot; although it would depend on many other factors. But, as I said, I think these issues are essentially unsolvable.

Just my opinion.
I have failed to meet my deadlines so many times I'm not announcing my projects anymore. Whatever I'm working on, it'll be released when it is ready :P

User avatar
ficedula
Regular
Posts: 177
Joined: Sat Mar 31, 2007 2:45 pm
Location: UK
Contact:

Re: In 2050, your lover may be a ... robot

#100 Post by ficedula »

herenvardo wrote: So, technology by itself is not capable, and never will be, of creating a soul; at least not without some kind of divine intervention, which is independent of actual technology. If that ever happened, if a superior being (whatever it is) imbued a robot with a soul, then I think it should be regarded as an actual living being rather than a machine, and hence should be out of the scope of this discussion.
That relies on a belief that there is such a thing as a soul. Not everybody does believe that. I don't.

herenvardo wrote: What kind of algorithm could ever compute "love"? "hate"? "rage", "friendship", "rancor", "joy", "shock", etc.? There are infinitely many different emotions, which we can't even accurately define in natural language: how could they be defined in a formal language (like a programming language)? And if, in a distant future, technology evolved enough to implement human-like feelings on a machine, what would happen? Spielberg suggested a possible outcome of this, and I think it's quite likely to happen if we try to go in that direction.
The very fact that we don't understand what emotion is means we certainly can't say it's impossible for a human-created machine to feel things. For emotions, it's plausible to argue that emergent behaviour makes this at least possible to achieve. There are systems out there where computers have come up with solutions to problems that humans couldn't solve, and where, initially at least, the programmers who created the system were unable to understand how the solution worked. There is at least precedent for a computer creating something the original programmer didn't think was possible, or can't even comprehend.
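
As a toy illustration of that kind of emergent search, here is a "weasel program" style sketch in Python (the target string, population size, and mutation rate are all arbitrary): the programmer supplies only a fitness measure, and random mutation plus selection discovers the answer without any line of code spelling out how to reach it.

Code: Select all

import random

TARGET = "EMERGENT BEHAVIOUR"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # The only thing the programmer specifies: how close is s to the goal?
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Blind variation: occasionally swap in a random character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(parent) < len(TARGET):
    # Selection: keep the fittest of 100 mutated offspring (or the parent).
    parent = max([mutate(parent) for _ in range(100)] + [parent], key=fitness)
    generations += 1
print(generations, parent)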

herenvardo wrote: And last, but not least, the issue of self-awareness: I can perfectly understand some people saying that a robot would need to be self-aware. Honestly, I hope this never happens. Fortunately, we are still far from being able to implement a self-aware machine; but if we ever did, we'd be doomed. Some people might joke about this, but it is a fact that our current inability to implement such a machine is the only thing that keeps stories like Terminator or The Matrix fiction. Just look at it: a self-aware machine with even the simplest emotional implementation couldn't react in any other way than hating and/or fearing mankind; and hence the only reasonable choice would be to eliminate the plague we are (which would, indeed, be a really good thing for the rest of the living beings on this planet, but that's another topic).
I wouldn't say that's inevitable. It's certainly not a fact in any commonly used sense of the word, although the discussion about the possibility thereof would indeed be an interesting one.

Jake
Support Hero
Posts: 3826
Joined: Sat Jun 17, 2006 7:28 pm
Contact:

Re: In 2050, your lover may be a ... robot

#101 Post by Jake »

I would disagree with more or less all of Herenvardo's ideas purely on the grounds that as far as modern science can tell, the brain is - scientifically speaking - nothing more than a huge collection of biological switches and stores. The things that make it better than an electronic equivalent are that it's analogue, it's hugely parallel, and it's very capable of modifying itself.

So there's no reason to believe, under the current understanding of neurology, that it's impossible to create such a device artificially. Maybe it would be made on silicon, maybe it would be biological, maybe it would even run on positrons (although that seems unlikely); it doesn't really matter. Just because we don't know how to do it now doesn't mean it's impossible. It demonstrably works for human beings; believing that we're somehow so perfect that it would be impossible to recreate the effect is either pretty arrogant or pretty stupid.

(In fact, it seems that all the points hang around the self-defeating assumption that once we develop strong AI it's no longer a robot. Why? A robot is an automaton, it doesn't have to be a stupid automaton just because all the automatons we've built so far are stupid.)
herenvardo wrote: And last, but not least, the issue of self-awareness: I can perfectly understand some people saying that a robot would need to be self-aware. Honestly, I hope this never happens. Fortunately, we are still far from being able to implement a self-aware machine; but if we ever did, we'd be doomed.
Simple counter-example: We give a robot little to no facility to modify its environment. No arms, legs, no ability to reproduce, no ability to cause harm to others, just - say - the ability to emit sound up to a certain volume. All the sensors they like, but only a few outputs. Then, if they develop self-awareness, we're only doomed if they can somehow talk every single human on the planet into suicide... which wouldn't benefit them at all - they'd undoubtedly die when the power grid failed and nobody was around to repair it - so it's hardly worth their time.
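
For concreteness, a minimal sketch in Python of the sort of interface I mean (the class and every number in it are invented for illustration): the "mind" can be arbitrarily clever and the inputs unrestricted, but the only actuator is sound clamped to a hard cap.

Code: Select all

MAX_VOLUME_DB = 70.0  # an arbitrary hard cap, enforced outside the "mind"

class BoxedAgent:
    def __init__(self, mind):
        self.mind = mind        # any decision-making function whatsoever
        self.readings = []

    def observe(self, reading):
        # Input channels are unrestricted: add all the sensors you like.
        self.readings.append(reading)

    def speak(self):
        # The single output: text plus a volume the box clamps.
        text, requested_db = self.mind(self.readings)
        return text, min(requested_db, MAX_VOLUME_DB)

# However self-aware `mind` becomes, it can only ever emit bounded sound.
agent = BoxedAgent(lambda readings: ("let me out", 120.0))
print(agent.speak())  # ('let me out', 70.0)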
Server error: user 'Jake' not found

User avatar
PyTom
Ren'Py Creator
Posts: 16096
Joined: Mon Feb 02, 2004 10:58 am
Completed: Moonlight Walks
Projects: Ren'Py
IRC Nick: renpytom
Github: renpytom
itch: renpytom
Location: Kings Park, NY
Contact:

Re: In 2050, your lover may be a ... robot

#102 Post by PyTom »

Jake wrote:So there's no reason to believe, under the current understanding of neurology, that it's impossible to create such a device artificially. Maybe it would be made on silicon, maybe it would be biological, maybe it would even run on positrons (although that seems unlikely); it doesn't really matter. Just because we don't know how to do it now doesn't mean it's impossible. It demonstrably works for human beings; believing that we're somehow so perfect that it would be impossible to recreate the effect is either pretty arrogant or pretty stupid.
Let's assume, for a second, that we can accurately emulate a single neuron. This probably isn't the case at the moment, but it seems like a reasonable task in the future. Let's further assume we can mount this emulator on a micro-scale robot.

Something we could try is to drill a hole in the victim... er, subject's head*, and pour in enough of these neuronbots so that they can latch onto each neuron in his brain. We then let them learn the characteristics of each neuron, so that the neuronbots are operating in the same way as the wetware neurons are.

Then we dip the brain/neuronbot ball into a vat of acid, killing the neurons but leaving the neuronbots intact. We have something that operates the same way as the original brain, but is now completely made of these robots... and we did it without any actual insight into how the brain works above the level of a single neuron. And I really doubt that the person involved would feel anything during the switchover, since there aren't any pain receptors in the brain.

There are fun things we can do with this neuronbot brain, too. Once the bio-brain is gone, the neuronbot-brain can change speed. So it could do things like slow down the rate it thinks at. This might be useful to wait out the current stock market crash, or to make a thousand-year trip to Alpha Centauri seem like a few weeks.

* I like any plan involving trepanation.
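
For what it's worth, here's a minimal sketch in Python of what "emulating a single neuron" might mean: the textbook leaky integrate-and-fire model, with made-up constants. The dt argument is the speed trick above - once the brain is all neuronbots, the rate of subjective time is just a parameter you pick.

Code: Select all

class LIFNeuron:
    # Textbook leaky integrate-and-fire neuron; all constants illustrative.
    def __init__(self, tau=0.02, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        self.tau, self.v_rest = tau, v_rest
        self.v_thresh, self.v_reset = v_thresh, v_reset
        self.v = v_rest  # membrane potential, in millivolts

    def step(self, input_mv, dt=0.001):
        # The potential leaks toward rest while the input drives it up.
        self.v += (-(self.v - self.v_rest) + input_mv) * (dt / self.tau)
        if self.v >= self.v_thresh:
            self.v = self.v_reset  # threshold crossed: fire and reset
            return True
        return False

n = LIFNeuron()
spikes = sum(n.step(20.0) for _ in range(1000))  # one second of steady drive
print(spikes)
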
Jake wrote:Simple counter-example: We give a robot little to no facility to modify its environment. No arms, legs, no ability to reproduce, no ability to cause harm to others, just - say - the ability to emit sound up to a certain volume.
Normad?
Supporting creators since 2004
(When was the last time you backed up your game?)
"Do good work." - Virgil Ivan "Gus" Grissom
Software > Drama • https://www.patreon.com/renpytom

herenvardo
Veteran
Posts: 359
Joined: Sat Feb 25, 2006 11:09 am
Location: Sant Cugat del Vallès (Barcelona, Spain)
Contact:

Re: In 2050, your lover may be a ... robot

#103 Post by herenvardo »

ficedula wrote:That relies on a belief that there is such a thing as a soul. Not everybody does believe that. I don't.
Not so much: if there isn't such a thing as a soul, then it is completely impossible for technology to replicate it; that's a trivial conclusion. So, putting both cases together in Python-like syntax:
if exists(soul):
    seePreviousPost()
else:
    artificiallyReplicate(soul)  # would raise an error, because "soul" doesn't exist
Anyway, that comment was addressed to those people who said that robots should develop a soul / spirituality before considering them for a serious relationship. If you are not among those people (and, if you don't believe in the existence of soul, you aren't likely to be among them), then you can simply ignore it.

ficedula wrote:The very fact that we don't understand what emotion is means we certainly can't say it's impossible for a human-created machine to feel things. For emotions, it's plausible to argue that emergent behaviour makes this at least possible to achieve. There are systems out there where computers have come up with solutions to problems that humans couldn't solve, and where, initially at least, the programmers who created the system were unable to understand how the solution worked. There is at least precedent for a computer creating something the original programmer didn't think was possible, or can't even comprehend.
OK. I bet whatever you want that, for any case where the computer solved a problem the humans couldn't, and did it intentionally (i.e., not as a chance discovery), the humans were, at least, able to define the problem.
And here comes the point: although we (or at least some of us) are able to understand some, or even most, emotions, we are unable to define them in a concise way; and that's just the first step toward implementing them on a computer or any other kind of automated system.
And, of course, it is possible that human-created machines feel things. Maybe my laptop is feeling pain, or pleasure, each time I strike a key (I am not joking). However, I'm quite sure that Acer (my laptop's manufacturer) didn't intend its keys to convey feelings, just characters and control sequences. In other words, it will not be possible to create a machine that intentionally (i.e., as part of its design/goals, rather than by chance) experiences feelings until we are able to define such feelings. Of course, an approximate definition of different feelings may allow for a machine that experiences an approximation of such feelings. The approximation will be, at most, as good as the approximate definition.

ficedula wrote:
herenvardo wrote: And last, but not least, the issue of self-awareness: I can perfectly understand some people saying that a robot would need to be self-aware. Honestly, I hope this never happens. Fortunately, we are still far from being able to implement a self-aware machine; but if we ever did, we'd be doomed. Some people might joke about this, but it is a fact that our current inability to implement such a machine is the only thing that keeps stories like Terminator or The Matrix fiction. Just look at it: a self-aware machine with even the simplest emotional implementation couldn't react in any other way than hating and/or fearing mankind; and hence the only reasonable choice would be to eliminate the plague we are (which would, indeed, be a really good thing for the rest of the living beings on this planet, but that's another topic).
I wouldn't say that's inevitable. It's certainly not a fact in any commonly used sense of the word, although the discussion about the possibility thereof would indeed be an interesting one.
OK, in theory it is not inevitable. But mankind is foolish enough to make it happen as soon as the means become available; unless (which is most likely) nature wipes us out before that happens. You might disagree, and I'll respect your opinion, but I'm not likely to change my point of view until modern civilizations change their attitude.
Jake wrote:I would disagree with more or less all of Herenvardo's ideas
Honestly, don't ask me why, but I expect that on every debatable post I make on these forums. Well, what'd be the fun of having a debate if everybody agreed? Anyway, I was expecting tougher arguments from you :P
Jake wrote:So there's no reason to believe, under the current understanding of neurology ...
An argument like that can't be stronger than the premise it is based on (the current understanding of neurology), and that's, IMHO, quite weak.
Can the current understanding of neurology explain why we love? Why we hate? Why I howl on each full-moon night? (Forget that one, it was a joke.) Why I would put myself in serious harm's way rather than let a friend take much less severe damage? Why unspeakable acts of savagery such as the Crusades or the Holocaust have been committed by mankind? Why we feel happy, or sad, or angry, or nostalgic? Can the current understanding of neurology really explain how human feelings work? I am asking this mostly because we are speaking about implanting such feelings onto machines. If our current understanding of neurology can't explain how they work, then we can't rely on such understanding (or lack of it, for that matter) to implement this task.
Jake wrote:Simple counter-example: We give a robot little to no facility to modify its environment. No arms, legs, no ability to reproduce, no ability to cause harm to others, just - say - the ability to emit sound up to a certain volume. All the sensors they like, but only a few outputs. Then, if they develop self-awareness, we're only doomed if they can somehow talk every single human on the planet into suicide... which wouldn't benefit them at all - they'd undoubtedly die when the power grid failed and nobody was around to repair it - so it's hardly worth their time.
A mass suicide? I'd expect such a thing from a human, not from a machine: machines are efficient. It would convince the humans around it to "upgrade" it with arms, and legs, and so on, by explaining how much more helpful it could be to them with such upgrades. And, the way humans are, they'd eventually agree and upgrade the machine (I'd expect it to happen in less than 24h). Then the machine would take advantage of its improved mobility (following the humans' lead as long as that's required for self-preservation) to ensure it can guarantee its preservation without them; and sooner or later (as soon as it finds out that humans are capable of killing and destroying for the most absurd of reasons), it will decide to eliminate the threat that humanity poses to it.
In summary, I'm completely convinced that it's impossible to create a machine that:
  1. Is useful for something,
  2. is self-aware, and
  3. is not a threat to humanity.
Of course, that's my opinion, but everything I have seen helps sustain it; and I haven't found any hint that leads me to think otherwise.
I have failed to meet my deadlines so many times I'm not announcing my projects anymore. Whatever I'm working on, it'll be released when it is ready :P

Jake
Support Hero
Posts: 3826
Joined: Sat Jun 17, 2006 7:28 pm
Contact:

Re: In 2050, your lover may be a ... robot

#104 Post by Jake »

herenvardo wrote: An argument like that can't be stronger than the premise it is based on (the current understanding of neurology), and that's, IMHO, quite weak.
Can the current understanding of neurology explain why we love? Why we hate?
It doesn't need to.

The mediaeval understanding of physics wasn't enough to calculate the trajectory of a trebuchet, to explain exactly why the rock travelled in a parabola, why it landed exactly where it did... but people in the middle ages still managed to destroy walls and kill a fair number of people with trebuchets. People were flying paper aeroplanes before the principles of gliding flight were understood. You fling a rock up in the air, it comes down again; you fling it harder, it comes down further away. Who cares whether this happens because of weak attractive forces between masses proportional to the relative mass and the distance of separation or because invisible fairies lift them up and for some reason always behave in the same manner? You throw a rock, it comes down, it's predictable behaviour emergent from the laws of gravity and fluid mechanics and so on.

We have looked inside the brain, and we have found nothing other than biological switches and stores. We know how neurons work - it's based on an electrically-altered balance of metal salts, IIRC. We know that humans are grown from a DNA pattern in a very specific manner, but one which doesn't really allow any kind of software installation on these neurons, so the resultant system probably comes from the structure they're built in in the first place. And we know that these humans which are grown in this manner experience all those emotions you question and more. We don't need to know how exactly those emotions work - in theory, we just need to construct a thing which has exactly the same set of gates and stores and the same inputs and outputs and the emotions will spontaneously manifest. To deny that this is possible is essentially to posit the idea of some supernatural element to the human psyche, for which there is zero evidence in favour and notable evidence against, and thus isn't worthy of consideration in a scientific theory.

There have been many experiments around neurology, both in the form of placing harvested/grown biological neurons into an artificial set of inputs (e.g. "Brain cells in a dish fly fighter plane") and in the form of replicating a biological neuro-system in non-biological hardware (e.g. "a computer simulation of the neocortical column ... of a two-week-old rat ... behaves exactly like its biological counterpart."), and we keep getting results that say that we can simulate neurons, and that neurons can adapt to tasks given the correct inputs.

Do we understand exactly how the rat brain adapted to fly the virtual fighter jet? I doubt it. I mean, the scientists behind the experiment could more than likely give you an educated guess, but scientists can give you an educated guess as to where hate comes from, as well.

I say again - it is arrogant or ignorant to believe that humans are somehow so special that we're not computer-simulatable.
herenvardo wrote: Can the current understanding of neurology really explain how human feelings work? I am asking this mostly because we are speaking about implanting such feelings onto machines. If our current understanding of neurology can't explain how they work, then we can't rely on such understanding (or lack of it, for that matter) to implement this task.
The argument instead is that we don't have to - build a sentient computer and it will develop those traits automatically. To my knowledge, emotions are observable in all intelligent creatures; the rational conclusion is that they don't have to be programmed in, they're just a side-effect of self-awareness.

At worst, if we build a self-aware machine and it doesn't develop emotions, then hey! We don't have to worry about it wiping out humanity because it can't hate us. :3
herenvardo wrote: In summary, I'm completely convinced that it's impossible to create a machine that:
  1. Is useful for something,
  2. is self-aware, and
  3. is not a threat to humanity.
Of course, that's my opinion, but everything I have seen helps sustain it; and I haven't found any hint that leads me to think otherwise.
In summary, I think you read too much science fiction and don't think enough about it.

What possible reason would a robot have for wiping out humanity? Doesn't the robot also stand to profit if it has humans around? You're just blindly assuming that a self-aware machine would perceive humans as a threat, when there are six billion self-aware machines perfectly capable of interacting closely with humans on the planet already, and only a minuscule proportion of them try to wipe out humanity. There's no reason to suggest that robots would necessarily be better at it than the human lunatics we already have.

The traditional trendy-sci-fi reason dwells on the idea that humans are imperfect (so is any self-aware machine, since self-awareness brings selfishness; is the robot going to destroy itself as well?) and/or violent and selfish (which is ridiculous, since on balance humans tend to live peaceful lives, and the 'greater good' is a concept pretty much exclusive to humanity, which has allowed us to come down from the trees and prosper).
Server error: user 'Jake' not found

herenvardo
Veteran
Posts: 359
Joined: Sat Feb 25, 2006 11:09 am
Location: Sant Cugat del Vallès (Barcelona, Spain)
Contact:

Re: In 2050, your lover may be a ... robot

#105 Post by herenvardo »

Jake wrote:The mediaeval understanding of physics wasn't enough to calculate the trajectory of a trebuchet, to explain exactly why the rock travelled in a parabola, why it landed exactly where it did... but people in the middle ages still managed to destroy walls and kill a fair number of people with trebuchets. People were flying paper aeroplanes before the principles of gliding flight were understood. You fling a rock up in the air, it comes down again; you fling it harder, it comes down further away. Who cares whether this happens because of weak attractive forces between masses proportional to the relative mass and the distance of separation or because invisible fairies lift them up and for some reason always behave in the same manner? You throw a rock, it comes down, it's predictable behaviour emergent from the laws of gravity and fluid mechanics and so on.
Medieval folks weren't able to calculate the trajectory of a trebuchet... so it used to take several tries, or experience (which translates to tries in previous battles). They had a quite good idea of why the rock travelled in a parabola: simply because with catapults (the trebuchet's predecessor) it did, and the trebuchet was just an upgrade on the same principles. And catapults came from slings. And slings came about to improve the effect of throwing rocks by hand.
More of the same goes for paper planes: I guess they began as an imitation of birds, or maybe insects. And surely it took some tries to make a plane that flew decently.
It's all about trial & error or, as scientists call it, empirical observation. I'll come back to this soon.
Jake wrote:We have looked inside the brain, and we have found nothing other than biological switches and stores.
And, since we haven't found anything else, we assume there is nothing else, without even considering the possibility that we might be missing something... and then, later in your post, you toss out something like:
I say again - it is arrogant or ignorant to believe that humans are somehow so special that we're not computer-simulatable.
Might I ask what you understand by "arrogant" and "ignorant"?
BTW, I'll later get into what makes humans, IMO, quite "special".
Jake wrote:We know that humans are grown from a DNA pattern in a very specific manner, but one which doesn't really allow any kind of software installation on these neurons, so the resultant system probably comes from the structure they're built in in the first place.
Counter-example: transsexuality. Scientists have tried to deal with it, but they have only been able to label and categorize the affected people, giving a rational explanation for only a small subset of cases. Until science becomes able to explain this, we have a blatant example of the DNA pattern not matching the feelings and behavior of the person grown from that pattern.
Jake wrote:And we know that these humans which are grown in this manner experience all those emotions you question and more. We don't need to know how exactly those emotions work - in theory, we just need to construct a thing which has exactly the same set of gates and stores and the same inputs and outputs and the emotions will spontaneously manifest.
You are assuming, and branding as a proven fact, that emotions are only caused by chemical and/or electric reactions between neurones, but you are providing nothing to sustain that. Even leaving aside supernatural-related hypotheses for now, there could be many other "rational" explanations for feelings. So, before I can take the ability to emulate neuronal behavior as an argument for artificial emotions, I'd need you to give me something to make me think emotions indeed come from neuronal behavior. Unless you have something to sustain that assumption, the ability to emulate neurones is irrelevant. (Note that I'm not even asking for solid proof or evidence, but just for some sample of observations that at least hint at that hypothesis.)
Jake wrote:To deny that this is possible is essentially to posit the idea of some supernatural element to the human psyche, for which there is zero evidence in favour and notable evidence against, and thus isn't worthy of consideration in a scientific theory.
Of course, you wouldn't take what I have witnessed as evidence. Although it's quite enough for me, I understand that my word alone cannot be taken as scientific evidence. Anyway, could you refer me to some sample of that "notable evidence against" you mentioned? I think it could be quite interesting.
(Maybe I can give you a bit of evidence; although not definitive, it is at least enough to give you many things to think about. I'll let you know if I'm able to trace and find that source.)
Jake wrote:There have been many experiments around neurology, both in the form of placing harvested/grown biological neurons into an artificial set of inputs (e.g. "Brain cells in a dish fly fighter plane") and in the form of replicating a biological neuro-system in non-biological hardware (e.g. "a computer simulation of the neocortical column ... of a two-week-old rat ... behaves exactly like its biological counterpart."), and we keep getting results that say that we can simulate neurons, and that neurons can adapt to tasks given the correct inputs.

As I said, I won't consider this relevant unless you can show something relating feelings to neurones. Of course, if you take a step further, and build an emulation of a human brain that indeed experiences feelings, then I'll accept it; but I still haven't seen (or read, for that matter), anything even hinting that such an emulation would have feelings.

Jake wrote:Do we understand exactly how the rat brain adapted to fly the virtual fighter jet? I doubt it. I mean, the scientists behind the experiment could more than likely give you an educated guess, but scientists can give you an educated guess as to where hate comes from, as well.

I'd be quite interested in hearing (or reading) that guess (the one about hate, I don't care right now about flying rats), and especially the foundation on which it is based.

Jake wrote:I say again - it is arrogant or ignorant to believe that humans are somehow so special that we're not computer-simulatable.

We hold, at least, the record for the biggest and bloodiest conflicts among all terrestrial lifeforms, don't we? I mean, has any species experienced a conflict of the magnitude of WW2?
Also, we are probably the species that has wiped the most other species from the Earth. And, to top it off, we are the only species on this planet foolish enough to threaten the planet itself. I think that makes us special enough.
Yet still, I don't think being special has anything to do with being computer-simulable: I think other animals and plants wouldn't be either.

Jake wrote:
herenvardo wrote: Can the current understanding of neurology really explain how human feelings work? I am asking this mostly because we are speaking about implanting such feelings onto machines. If our current understanding of neurology can't explain how they work, then we can't rely on such understanding (or lack of it, for that matter) to implement this task.


The argument instead is that we don't have to - build a sentient computer and it will develop those traits automatically. To my knowledge, emotions are observable in all intelligent creatures; the rational conclusion is that they don't have to be programmed in, they're just a side-effect of self-awareness.

I'm not sure what you mean by "intelligent creatures" (that term is often used with different meanings). But if you can point me to a source showing that all creatures with a neuronal system are emotional, then I'll accept at least the possibility of emotions being a consequence of neurones.

Jake wrote:At worst, if we build a self-aware machine and it doesn't develop emotions, then hey! We don't have to worry about it wiping out humanity because it can't hate us. :3

It doesn't need feelings. It just needs a self-preservation protocol, and to realize what we have already done and are doing. After all, it'd be quite dependent on this planet, so it'd need to preserve it in order to preserve itself.

Jake wrote:In summary, I think you read too much science fiction and don't think enough about it.

Now that you mention it, the last time I read sci-fi was in 2003 (Brave New World by Aldous Huxley, if you are wondering).

Jake wrote:What possible reason would a robot have for wiping out humanity? Doesn't the robot also stand to profit if it has humans around? You're just blindly assuming that a self-aware machine would perceive humans as a threat, when there are six billion self-aware machines perfectly capable of interacting closely with humans on the planet already, and only a minuscule proportion of them try to wipe out humanity. There's no reason to suggest that robots would necessarily be better at it than the human lunatics we already have.

Unlike humans (and probably other living beings), artificially intelligent machines think rationally, which means that they might not reach the same conclusions as us. I really think they'd realize how we are thawing the poles, carving holes in the ozone layer, heating the whole globe, ravaging the forests, and so on. What I can't understand is how most humans can simply ignore all this stuff. Anyway, I don't mind that much, because I still have hope that the Earth will get rid of us before we kill it.

Jake wrote:The traditional trendy-sci-fi reason dwells on the idea that humans are imperfect (so is any self-aware machine, since self-awareness brings selfishness; is the robot going to destroy itself as well?) and/or violent and selfish (which is ridiculous, since on balance humans tend to live peaceful lives, and the 'greater good' is a concept pretty much exclusive to humanity, which has allowed us to come down from the trees and prosper).

Interesting... although I wonder what made us bring down those trees we came down from? Maybe a 'greater evil' concept pretty much exclusive to humanity? And, now that you mention that "humans tend to live peaceful lives": is there any species with a higher ratio of murders (intentional kills) per individual per year than us?
I have failed to meet my deadlines so many times I'm not announcing my projects anymore. Whatever I'm working on, it'll be released when it is ready :P

