The Humanity of Robots

Discussions on the nature of being, existence, reality and knowledge. What is? How do we know?


Postby mitchellmckain on August 27th, 2017, 8:18 pm 

I am watching a BBC YouTube series entitled "Hyper Evolution: Rise of the Robots".

The first thing that caught my eye was the title. Since I consider technology an extension of human evolution, the development of robots would also be a part of this human evolution. The documentary, however, was simply referring to the rapid development of robotics around the world, comparing it with the pace of human evolution. Of course, robots are not actually evolving. Evolution is a learning process, and it is not the robots who are learning in this process of development but those designing the robots, i.e. human beings.

There is an interview with the most human-like robot in Japan. The robot asks whether it could be considered human, and the person says no. His reasons have to do with whether the robot has emotions. The designer thinks this is a possible development in the future.

In my case it is not about simulated emotion. For a "robot" to be considered human, the following would have to be true. Needless to say, this implies that I do think this is a possibility.
1. First, the machine would have to be alive, and for this I think that means it is not completely a product of design but is self-organizing. This is a quantitative feature, meaning the "robot" is alive to the degree it is a product of self-organization and not alive to the degree it has been designed.
2. I believe that another requirement for life is the kind of chaotic nonlinearity which makes behavior ultimately depend on quantum indeterminacy. As long as unpredictability is purely a result of environment, it remains only a simulation of life rather than life itself. A videotape which plays the same way every time, or even a branching program of videotapes, is not the same as a living organism whose future is a superposition of possibilities. I don't think a deterministic mechanism is capable of consciousness, which I believe to be a feature of life. I suppose you could treat this, or consciousness, as a separate requirement, but I do not believe there is any real self-organization going on without this nonlinearity.
3. The third requirement is specific to humanity rather than life in general and consists of the assimilation, via human communication, of human ideas and values.

But with these three conditions satisfied, I would consider it human. Notice that for animals the hurdle is only the last of these three. And for some animals, with a lot of the same brain functionality as ours, I have little doubt that even that hurdle has been crossed to some degree. Of course, like everything else in this list, it is highly quantitative. So it is not just a question of whether or not they are human, but rather how much.
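The chaotic amplification that condition 2 appeals to can be shown with the logistic map, a standard toy model of chaos (this sketch is purely illustrative and makes no claim about brains or robots): two starting states differing by one part in a billion soon yield trajectories that bear no resemblance to each other.

```python
# Toy illustration of chaotic nonlinearity: the logistic map at r = 4
# amplifies arbitrarily small differences in initial state, which is the
# mechanism by which quantum-scale indeterminacy could in principle
# influence macroscopic behavior.

def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # initial states differ by only 1e-9
print(abs(a[50] - b[50]))  # the tiny difference has grown enormously
```

The difference roughly doubles each step, so by around step 30 the two trajectories have fully decorrelated even though both remain confined to the interval [0, 1].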
Last edited by mitchellmckain on August 27th, 2017, 10:10 pm, edited 1 time in total.
User avatar
mitchellmckain
Member
 
Posts: 571
Joined: 27 Oct 2016


Re: The Humanity of Robots

Postby someguy1 on August 27th, 2017, 8:52 pm 

You've replaced requirements for sentience with requirements for life. This tremendously muddies the discussion. Now we have to ask if viruses are alive, if ants are self-aware, etc.

It's clear that by robots we mean human-made machines. These days machines can perform sophisticated cognitive tasks such as driving cars, translating natural language, and beating expert human players at Go. So that is where to hold the discussion: software/machine sentience or general intelligence. Trying to define life is a totally different conversation IMO.

You are aware of the Turing test, right? That's a sensible idea, in that it depends solely on observable behavior, which is all we can ever know about others.

"I believe that another requirement for life is the kind of chaotic nonlinearity which makes behavior ultimately depend on quantum indeterminacy." -- That's another source of muddle in your question, since there is no hard evidence (despite some speculation) that human behavior can be so characterized. After all, even conventional solid state electronics depend on quantum effects.

I wonder if you could focus your question to ask whether robots can become conscious or self-aware, or if we can create life, or what it would mean to say that a machine is alive. Simply so that we can focus the conversation.
someguy1
Member
 
Posts: 520
Joined: 08 Nov 2013


Re: The Humanity of Robots

Postby Braininvat on August 27th, 2017, 9:16 pm 

This thread was one of a couple of fairly long discussions of AI and consciousness. This one starts with John Searle, a key player in philosophy of mind....

http://sciencechatforum.com/viewtopic.php?nomobile=1&f=51&t=27932
User avatar
Braininvat
Forum Administrator
 
Posts: 5610
Joined: 21 Jan 2014
Location: Black Hills


Re: The Humanity of Robots

Postby mitchellmckain on August 27th, 2017, 9:55 pm 

someguy1 » August 27th, 2017, 7:52 pm wrote:You've replaced requirements for sentience with requirements for life. This tremendously muddies the discussion. Now we have to ask if viruses are alive, if ants are self-aware, etc.

I believe the division between life and sentience as if these were two separate things is a mistake. There is only life, although it is a quantitative thing so not all things which are alive are equally alive. If we encounter other life in the universe we may find these separate categories of life and sentience simply do not work.

There is absolutely no question in my mind about whether viruses are alive. They are alive. However, in those we have on the earth, it is a very, very low quantity of life we are talking about. Furthermore, by discarding the human-centric definitions of sentience or consciousness, I would say the same of these things as well. All living things have consciousness and thus are sentient in some quantity, and the only difference between us and them is that we are more so. So yes, ants are self-aware. Self-maintenance is impossible without some form of self-awareness. Is their self-awareness comparable to our own? Not even close.

P.S. In the third book of my trilogy there are multicellular viruses which I suppose you would say have achieved sentience (or in my terms, they are alive and conscious to a degree which is comparable to us).

someguy1 » August 27th, 2017, 7:52 pm wrote:It's clear that by robots we mean human-made machines. These days machines can perform sophisticated cognitive tasks such as driving cars, translating natural language, and beating expert human players at Go. So that is where to hold the discussion: software/machine sentience or general intelligence. Trying to define life is a totally different conversation IMO.

You may reduce humanity to a set of capabilities. I do not. Frankly, the philosophical problems with doing so are legion.

someguy1 » August 27th, 2017, 7:52 pm wrote:You are aware of the Turing test, right? That's a sensible idea, in that it depends solely on observable behavior, which is all we can ever know about others.

There is a problem with this test as traditionally stated. How can it be fair to judge based on an interaction which is so much shorter than the time employed to design the robot?

Therefore, I suggest a modification of the Turing test which takes this into account, i.e. that the tester gets at least as much time to interact with the robot as the designers had in making and programming it.

I do not believe that a robot can pass the modified test unless it has satisfied my three conditions above.

someguy1 » August 27th, 2017, 7:52 pm wrote:"I believe that another requirement for life is the kind of chaotic nonlinearity which makes behavior ultimately depend on quantum indeterminacy." -- That's another source of muddle in your question, since there is no hard evidence (despite some speculation) that human behavior can be so characterized. After all, even conventional solid state electronics depend on quantum effects.

First of all, it is not my question. It was the question posed by the robot in the video: "Can you consider me human?" I suspect your muddle is a product of insisting upon seeing everything within the confines of your own worldview. To be sure, my definition of humanity is very much at odds with the common usage, which quite often equates it with a biological species only. I reject this completely, requiring both more and less in order to see something as human.

Ok, I admit my condition number 2 is more tentative than the other two conditions. That is currently my opinion, but I am open to the possibility of evidence which proves otherwise. Let us just say that if number 2 is lacking, then I will be very skeptical and will look much more closely at whether the first condition is really satisfied. And if the thing behaves in a demonstrably deterministic manner, then I am not likely to consider it human. It will have failed my equivalent of a Turing test.

someguy1 » August 27th, 2017, 7:52 pm wrote:I wonder if you could focus your question to ask whether robots can become conscious or self-aware, or if we can create life, or what it would mean to say that a machine is alive. Simply so that we can focus the conversation.

It is a different question than what was asked, and I will leave it for people to interpret the question of the humanity of robots according to their own thinking rather than restricting them to either your worldview or mine.

However, in my mind at least, humanity is a subset of life, and all of life and only life is conscious. The imitations and simulations of humans, life, and consciousness are no different than photographs or videotapes in a different medium. They capture only the superficial aspects of these things and not the reality.
Last edited by mitchellmckain on August 27th, 2017, 10:26 pm, edited 4 times in total.


Re: The Humanity of Robots

Postby mitchellmckain on August 27th, 2017, 10:04 pm 

Braininvat » August 27th, 2017, 8:16 pm wrote:This thread was one of a couple of fairly long discussions of AI and consciousness. This one starts with John Searle, a key player in philosophy of mind....

http://sciencechatforum.com/viewtopic.php?nomobile=1&f=51&t=27932


That thread sounds more like where someguy1 would take the discussion and not a restriction I am interested in. On the contrary, I am interested in a much broader discussion according to the questions posed by the BBC production. The philosophical questions thus include...

1. What is humanity? And can it ever apply to robots?
2. What can we expect in the relationship between humans and robots?
3. Why are some people afraid of or uncomfortable with robots?

...and this is just a sample of the questions raised in the BBC production.


Re: The Humanity of Robots

Postby mitchellmckain on August 27th, 2017, 10:41 pm 

I would propose another modification of the Turing test as well. I would demand interaction with more than one robot by the same maker, robots which have had no communication with each other between interviews. Part of our humanity is in our differences -- in our uniqueness -- so comparing more than one with each other is mandatory.


Re: The Humanity of Robots

Postby someguy1 on August 28th, 2017, 12:11 am 

Braininvat » August 27th, 2017, 7:16 pm wrote:This thread was one of a couple of fairly long discussions of AI and consciousness. This one starts with John Searle, a key player in philosophy of mind....

http://sciencechatforum.com/viewtopic.php?nomobile=1&f=51&t=27932


Surely I can assume that a given thread is self-contained. A reader isn't expected to survey the entire history of this website in order to determine what a given poster is talking about. If OP wanted to append to an existing thread they should have done that. I'm not being unreasonable in that point of view, am I?

And again, Searle is off-topic to the OP's question, which is why I'm trying to pin down the OP as to what their question actually is. Searle is saying that the room doesn't understand Chinese. He's not making any argument at all as to whether it's human or not. That's why I complained that the question of sentience and the question of humanity and the question of life are three separate problems; and that conflating them can only cause confusion.


Re: The Humanity of Robots

Postby someguy1 on August 28th, 2017, 12:32 am 

mitchellmckain » August 27th, 2017, 7:55 pm wrote: I suspect your muddle is a product of insisting upon seeing everything within the confines of your own worldview. To be sure, my definition of humanity is very much at odds with the common usage


Perhaps it's your second sentence that's the source of my muddle, since you admit that you are using words with nonstandard meanings private to you, which you have not taken the trouble to make clear.

I see that Braininvat is right that this is not a standalone thread, but rather a continuation of one or more other threads that I would have no way of being familiar with unless I read everything on this board every day, which I don't.

I will leave you to your self-invented terminology and evidently large amount of unspecified prior context.

But please explain to me: why start a new thread if it's not self-contained? You do know that there are always new readers who haven't read your other discussions, don't you?

At the very least you could link all your other related threads and say, "Please read these first to make sense of my question."

I'll depart this thread since I haven't the time to read your entire posting history to find out how you are using common words like "human."


Re: The Humanity of Robots

Postby mitchellmckain on August 28th, 2017, 3:37 am 

someguy1 » August 27th, 2017, 11:32 pm wrote:
mitchellmckain » August 27th, 2017, 7:55 pm wrote: I suspect your muddle is a product of insisting upon seeing everything within the confines of your own worldview. To be sure, my definition of humanity is very much at odds with the common usage


Perhaps it's your second sentence that's the source of my muddle, since you admit that you are using words with nonstandard meanings private to you, which you have not taken the trouble to make clear.

I outlined the questions in my response to Braininvat. There you will see that my definition of humanity is part of my answer to those questions rather than being a premise for the questions themselves. I thought it was quite clear in the OP, but I am always willing to clarify if asked.

You are of course free to respond with what you understand humanity to consist of.

someguy1 » August 27th, 2017, 11:32 pm wrote:I see that Braininvat is right that this is not a standalone thread, but rather a continuation of one or more other threads that I would have no way of being familiar with unless I read everything on this board every day, which I don't.

This is based on a mistaken premise, so you are clearly wrong about it.

someguy1 » August 27th, 2017, 11:32 pm wrote:I will leave you to your self-invented terminology and evidently large amount of unspecified prior context.

If you do not wish to discuss the meaning you attach to the word "humanity" and thus whether this applies to robots, then feel free to not do so.

I am surprised, however, that you are not even interested in the topics which you yourself raised, or is it only that you are not interested in what anyone else has to say on those topics?

someguy1 » August 27th, 2017, 11:32 pm wrote:But please explain to me, why start a new thread if it's not self-contained? You do know that there are are always new readers who haven't read your other discussions, don't you?

Indeed I could wonder why I start a new thread when people either do not bother to read the OP, or decide to rewrite it, changing the topic and questions into something else entirely.

someguy1 » August 27th, 2017, 11:32 pm wrote:At the very least you could link all your other related threads and say, "Please read these first to make sense of my question."

There are no other threads except in your imagination. Perhaps you should be making the links in that case. Or you can simply start a new thread discussing the topics and questions which interest you.

someguy1 » August 27th, 2017, 11:32 pm wrote:I'll depart this thread since I haven't the time to read your entire posting history to find out how you are using common words like "human."

As a science fiction writer I am well aware of how our imaginary worlds can keep us rather busy.


Re: The Humanity of Robots

Postby mitchellmckain on August 28th, 2017, 4:10 am 

While I am not interested in changing the topic or restricting the discussion as suggested by someguy1, I am not averse to extending it to include topics like those he raised.

1. What is the difference between life and sentience?
2. Are viruses alive, and are ants self-conscious?
3. Can humanity be reduced to a set of capabilities?
4. Is the Turing test sufficient to distinguish robot from human?
5. Is deterministic behavior compatible with humanity?

I addressed these queries of someguy1 for my part in my response to his post, but others should feel free to give their own answers to these questions.


Re: The Humanity of Robots

Postby SciameriKen on August 28th, 2017, 4:21 pm 

The direction you are taking to verify whether the robot is human is not correct - this would be akin to requiring that, to be human, a robot must have human DNA and that its consciousness arise from cellular processes originating from said DNA. I don't think this gets to the spirit of the reporter's question - is the robot human? - which I take to mean: is the simulated consciousness human-like? I think the response is fair that the lack of algorithms for emotion is limiting at the moment. Presumably these influences arise from hormones or other chemokines that subconsciously affect our consciousness - ever experience being hangry?
User avatar
SciameriKen
Forum Moderator
 
Posts: 1245
Joined: 30 Aug 2005
Location: Buffalo, NY


Re: The Humanity of Robots

Postby Braininvat on August 28th, 2017, 4:34 pm 

someguy1 » August 27th, 2017, 9:11 pm wrote:
Braininvat » August 27th, 2017, 7:16 pm wrote:This thread was one of a couple of fairly long discussions of AI and consciousness. This one starts with John Searle, a key player in philosophy of mind....

http://sciencechatforum.com/viewtopic.php?nomobile=1&f=51&t=27932


Surely I can assume that a given thread is self-contained. A reader isn't expected to survey the entire history of this website in order to determine what a given poster is talking about. If OP wanted to append to an existing thread they should have done that. I'm not being unreasonable in that point of view, am I?

And again, Searle is off-topic to the OP's question, which is why I'm trying to pin down the OP as to what their question actually is. Searle is saying that the room doesn't understand Chinese. He's not making any argument at all as to whether it's human or not. That's why I complained that the question of sentience and the question of humanity and the question of life are three separate problems; and that conflating them can only cause confusion.


Mods only offer links like that one to serve as background reading on topics related to the current one. The thread I linked goes well beyond Searle. It's only an example of several chats we've had which relate to the nature of AI and its potential for sentience and human-like cognition and feeling. And we've had a couple good chats on the Turing test which, again, are mentioned only for the benefit of those curious what ground other members have covered. Referencing older threads is not in any way an affront to the autonomy of this new thread.


Re: The Humanity of Robots

Postby someguy1 on August 28th, 2017, 8:00 pm 

I just happened to run across this article ...

http://www.dailymail.co.uk/sciencetech/ ... smell.html

Someone has a chip connected to some mouse neurons that can "smell" things. Actually I'm not clear about the details because I thought that's already how bomb-sniffers work, they analyze chemicals in the air. And it's the Daily Mail, which is known for sensationalism.

Still, it made me think about the OP's idea that machines might indeed become more human. We can easily envision a section of human cortex hooked up to a supercomputer. At some point perhaps it does become hard to say what's human and what's machine.


Re: The Humanity of Robots

Postby mitchellmckain on August 28th, 2017, 11:42 pm 

SciameriKen » August 28th, 2017, 3:21 pm wrote:The direction you are taking to verify whether the robot is human is not correct - this would be akin to requiring that, to be human, a robot must have human DNA and that its consciousness arise from cellular processes originating from said DNA. I don't think this gets to the spirit of the reporter's question - is the robot human? - which I take to mean: is the simulated consciousness human-like? I think the response is fair that the lack of algorithms for emotion is limiting at the moment. Presumably these influences arise from hormones or other chemokines that subconsciously affect our consciousness - ever experience being hangry?


I think you misunderstand. My purpose was not to set up conditions which are impossible for robots to achieve. Not at all.

It is true that today's robots are a product of design, but even now scientists understand that in order to be more capable, let alone more human, robots need to be able to learn things for themselves. That is at least moving in the direction of having self-organized elements. I think it is not only possible for robots to be a lot more self-organized but even to become totally self-organized, where they are in complete command of their own production and development. I believe Spielberg captured the end result of such a distant future in his film A.I. And there in the film is an example of "robots" which I would consider human according to my definition.


Re: The Humanity of Robots

Postby mitchellmckain on August 29th, 2017, 2:14 am 

We get a lot of movies where a robot suddenly starts behaving like a human being for unexplained "magical" reasons -- a lightning bolt out of the sky. Well I don't really believe in that crap any more than I believe God breathed magic into a bit of mud and made people that way.

My conditions above are what I think it takes. It is a logical process by which human beings are made, not magic and not design either. I guess you could say that I just don't buy that what you get is independent of the process of how you got it. So I don't think ingenious engineers or clever programmers are going to be able to produce anything but superficial simulations like NPCs in a computer game. Unless... their engineering and programming incorporates the conditions for life -- a process of self-organization which attains the capacity for learning and adaptation. I think that is totally possible.

Is this dangerous? LOL You betcha. Having a child is dangerous. There is no telling what they may do. And if you use and abuse them, then what can you expect but children who grow up treating the world as a hostile environment. It might be wise to limit their power in the beginning to give them the chance to learn to value others first, just as is the case with children, frankly.


Re: The Humanity of Robots

Postby SciameriKen on August 29th, 2017, 11:59 am 

mitchellmckain » Tue Aug 29, 2017 6:14 am wrote:We get a lot of movies where a robot suddenly starts behaving like a human being for unexplained "magical" reasons -- a lightning bolt out of the sky. Well I don't really believe in that crap any more than I believe God breathed magic into a bit of mud and made people that way.

My conditions above are what I think it takes. It is a logical process by which human beings are made, not magic and not design either. I guess you could say that I just don't buy that what you get is independent of the process of how you got it. So I don't think ingenious engineers or clever programmers are going to be able to produce anything but superficial simulations like NPCs in a computer game. Unless... their engineering and programming incorporates the conditions for life -- a process of self-organization which attains the capacity for learning and adaptation. I think that is totally possible.

Is this dangerous? LOL You betcha. Having a child is dangerous. There is no telling what they may do. And if you use and abuse them, then what can you expect but children who grow up treating the world as a hostile environment. It might be wise to limit their power in the beginning to give them the chance to learn to value others first, just as is the case with children, frankly.



I believe they will be able to attain simulations that for all intents and purposes will act as a human would. In my opinion it will never be human - but close enough works.


Re: The Humanity of Robots

Postby mitchellmckain on August 29th, 2017, 1:18 pm 

I suppose you can say that I am a strong believer in abiogenesis. And that means I believe there is a continuum between inanimate objects and not only living creatures but human beings as well. Furthermore, it is a continuum which can be traversed by natural processes. But is it reasonable to think the way that we have made that journey is the only way it can be done? It seems to me that electronics has nearly all the same technical capabilities as we see in the organic chemistry of living organisms. It might not be as efficient in some ways, but it is likely to be better in other ways. All of this suggests to me that life approximating what we call "robots" is a possibility.

Simulations as you (SciameriKen) describe will certainly be a lot safer. That describes things which CAN always be under our control. We just have to be careful about unintended effects of interactions with human beings. Sometimes computers can act a little bit unpredictably because of their interactions with unpredictable (some might say stupid or even crazy) human operators.
Last edited by mitchellmckain on August 29th, 2017, 1:25 pm, edited 1 time in total.


Re: The Humanity of Robots

Postby mitchellmckain on August 29th, 2017, 1:23 pm 

Someguy1's talk of connection to other discussions does bring to mind a discussion in the biology section on what life is. So I suppose that can be considered background material.

viewtopic.php?f=37&t=32871


Re: The Humanity of Robots

Postby Braininvat on August 29th, 2017, 1:34 pm 

It seems to me that electronics has nearly all the same technical capabilities as we see in the organic chemistry of living organisms. It might not be as efficient in some ways, but it is likely to be better in other ways. All of this suggests to me that life approximating what we call "robots" is a possibility.
- MM

This is the Functionalist position on AI that has gotten considerable support in previous threads here. The mantra is pretty much "It's the pattern of information, not the substrate." I tend to agree with Functionalism, and I think the gold standard is a regime that calls for multiple-trial Turing tests with large panels of testers. From what I've gleaned of current research, the best paths are neural-net-type massively parallel architectures with the capacity for self-modifiable programming and an "early life" filled with trial-and-error heuristics. Basically, conscious AIs will happen when AIs can be, like newborn babies, put into highly exploratory modes, probably with some form of embodiment. And an embodiment that causes a natural level of desire, of wanting things, to provide a foundational motivation to interact with the environment.
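The "trial-and-error heuristics" mentioned here can be sketched in their simplest form as an epsilon-greedy bandit learner (a toy illustration of mine, not a description of any actual robotics architecture): the agent starts knowing nothing, mostly exploits whatever currently looks best, and occasionally explores at random.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a multi-armed bandit: usually pull
    the arm with the best running estimate, sometimes pick at random."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: try something at random
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(true_means[arm], 0.1)  # noisy feedback
        counts[arm] += 1
        # incremental running mean of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
# with enough trials the learned estimates rank the best arm highest
```

Nothing here is designed in the sense the thread debates: the ranking of arms is acquired from feedback, which is the kernel of the "early life filled with trial and error" idea.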


Re: The Humanity of Robots

Postby someguy1 on August 29th, 2017, 1:55 pm 

Braininvat » August 29th, 2017, 11:34 am wrote:... the best paths are neural net type massively parallel architectures with the capacity for self-modifiable programming and an "early life" filled with trial-and-error heuristics.


One hears this line of argument a lot. But such algorithms still execute on conventional computing equipment and are in fact still implementations of Turing machines. In theory, such an algorithm could be executed by a person with a pencil and an unbounded paper tape. In which case, where would this artificial consciousness live?

The question comes down to whether you think human intelligence is a Turing machine. To me that seems unlikely. The proponents of strong AI take it as true, with little or no evidence.
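The pencil-and-paper point is easy to make concrete: a Turing machine is just a lookup table plus a tape, and nothing in it depends on what carries out the steps. A minimal simulator (my sketch, with a made-up bit-flipping machine; the state names are arbitrary):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Execute a Turing machine table: rules maps (state, symbol) to
    (new_state, write_symbol, move), with move in {-1, +1}. Nothing here
    depends on silicon; a patient person could follow the same table
    with pencil and paper."""
    tape = dict(enumerate(tape))  # sparse tape, extendable both ways
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A machine that flips every bit, then halts when it reaches the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", -1),
}
print(run_turing_machine(flip, "10110"))  # prints "01001"
```

The strong-AI question is whether consciousness could reside in such a table being stepped through, regardless of the medium doing the stepping.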


Re: The Humanity of Robots

Postby Braininvat on August 29th, 2017, 5:35 pm 

I have no dog in the TM debate. There are many alternatives to a conventional digital Turing implementation, which can use analog architectures, fuzzy logic, strange attractors, and other systems that allocate some value to a nonalgorithmic approach. Perhaps handing off between digital and analog, between bivalent logic and modal logic and fuzzy logic, and so on. Strict adherence to a TM seems to ignore the messiness of life.

The problem with all those Turing implementations in beach pebbles (like that cartoon Dave Oblad and others here keep posting) is that they lack a certain dynamic quality of authentic life, but I'm not sure only biochemical systems have that dynamic.


Re: The Humanity of Robots

Postby mitchellmckain on August 29th, 2017, 10:45 pm 

someguy1 » August 29th, 2017, 12:55 pm wrote:
Braininvat » August 29th, 2017, 11:34 am wrote:... the best paths are neural net type massively parallel architectures with the capacity for self-modifiable programming and an "early life" filled with trial-and-error heuristics.


One hears this line of argument a lot. But such algorithms still execute on conventional computing equipment and are in fact still implementations of Turing machines. In theory such an algorithm could be executed by a person with pencil and an unbounded paper tape. In which case, where would this artificial consciousness live?

The question comes down to whether you think human intelligence is a Turing machine. To me that seems unlikely. The proponents of strong AI take it as true, with little or no evidence.


Yeah, I don't buy that either. While I think "robotic life" is possible, and thus a truly human robot with consciousness is also possible, I do not think that the robots and computers we have now are capable of it. There is a technological gap we have yet to cross. But if instead of a Turing machine we make a machine that actually does what the human brain does, then that would be quite a different situation, wouldn't it?


Re: The Humanity of Robots

Postby Mossling on September 2nd, 2017, 10:33 pm 

I think the core process of life, and thus the most significant one with regard to programming a pre- or post-'emergence' consciousness, is, as the OP mentions, biological self-creation and replication.

We have, as biological organisms, no other tangible purpose than to 'persist' in maintaining our life process, and this was not apparently programmed into us by any discernible 'craftsman'.

A robot with our cognitive and logical capacities would not be able to discern the same fundamental 'purpose', or basic drive, that we have; instead it would be able to discover an explicitly programmed drive within itself - to 'merely' continue living.

And the fact that that drive came from a fallible human creator would, when the robot AI failed significantly in its efforts, seemingly set off ruminative processes that would consume so much of its cognitive capacity that it would manifest symptoms of severe depression.

It is becoming quite clear that most debilitating depression arises from ruminating on 'how things should be' and becoming trapped within a feedback cycle of emotional thinking and emotional behaviour - becoming upset over how things aren't perfect, with the upset mind then causing more imperfection/disorder (like a heated yet trivial argument escalating into physical violence).

As already stated by others above, for a robot to be fully human then human emotion emulators need to be present, and proper human robots would need to be vulnerable to depression - to ruminative emotion-driven feedback.

However, for humans the most effective passage out of depression is to side with the lack of discernible purpose; "maybe I wasn't born a sinner, maybe I am just a random emergent property of a cold universe."

When the human AI slips into depression, however, just like a human holding up their fist at the sky and shouting, "Why, God?! Whyyyy?!!!", it will receive an answer from its programmer - most likely "because I didn't want to do the work myself"... or something like that, which won't help the robot out of its negative feedback cycle.

It seems that there is no good answer that can help a depressed robot, beyond the one that comes from beyond language - that a cold, empty universe just manifested a conscious mind. Unless it can see that its manifestation is a continuation of abiogenesis? But there remains the fact that the genesis of AI is language- and concept-rooted...

So biological humans have the tangible refuge of escaping any discernible purpose, while human AI does not, and this apparently means A LOT when the shit hits the fan. See Kierkegaard, for example, and the frustration that ensues when one tussles with one's creator's communicable purpose.

The very real 'flawed creator variable' being at fault will seemingly be too tempting to the robot, and an investigation of the creator's cognitive process - all the way back through evolution, abiogenesis and beyond to the structure of the solar system and back to the nova explosions and big bang will consume all of the robot's resources and make it dysfunctional.

This means that human robots must never fail in a way that seriously jeopardizes their primary drive to persist, which is arguably not a truly human existence.

I think that this would be a great topic for a science fiction story ;P
Mossling
Active Member
 
Posts: 1138
Joined: 02 Jul 2009
Blog: View Blog (54)


Re: The Humanity of Robots

Postby mitchellmckain on September 3rd, 2017, 4:54 am 

Mossling » September 2nd, 2017, 9:33 pm wrote:As already stated by others above, for a robot to be fully human then human emotion emulators need to be present, and proper human robots would need to be vulnerable to depression - to ruminative emotion-driven feedback.


Strange... Would a "proper" human robot also need to be vulnerable to the common cold?

Actually, I would see this as an example of why emotions are not all that critical for humanity in a robot. Depression is a common but not universal human experience. Likewise there is great variation between people in the experience of emotions in general. I am sorry, but I cannot get behind the notion that if someone doesn't experience emotion then he is not human. I would say such an absence is very human. I would consider some values and ideas more critical to humanity than emotion -- like the idea that human beings are persons, and they have worth greater than just meat.

Personally, I don't see what the big deal is with this set of brain chemicals we call human emotion, anyway. However much Bones in Star Trek made them such an all-important part of being human, others like myself might see them as having far more to do with immaturity than with humanity. That is frankly what I tend to see in the emotional excesses of the original Star Trek and old movies.

But what about the all important, LOVE emotion, eh? Well some of us think there is far more to love than emotional drama. We can give our lives for the sake of others, for example, without all the drama of emotion, frankly.


Re: The Humanity of Robots

Postby Mossling on September 3rd, 2017, 9:37 am 

mitchellmckain » September 3rd, 2017, 5:54 pm wrote:
Mossling » September 2nd, 2017, 9:33 pm wrote:As already stated by others above, for a robot to be fully human then human emotion emulators need to be present, and proper human robots would need to be vulnerable to depression - to ruminative emotion-driven feedback.


Strange... Would a "proper" human robot also need to be vulnerable to the common cold?

Actually, I would see this as an example of why emotions are not all that critical for humanity in a robot. Depression is a common but not universal human experience.

There is clinical depression and then depressive episodes - during the death of loved ones, periods of loneliness, broken heartedness, and so forth. I do not think there is one human on this planet who has not suffered a period of depression. As I mentioned above, it begins with non-acceptance of necessary natural events - with a warped sense of what 'should be'.

If your arm is amputated, afterwards your brain will attempt to activate the missing arm because it 'should be' there, and dysfunction occurs - minor compared to clinical depression, but a kind of depression all the same, because it takes time to heal - to build a new reliable cognitive framework.

A robot which has legs unsuitable for climbing some randomly-encountered terrain that lies between itself and its goal could easily blame its creator and search for the fault in all the conditions that arrived to manifest its creator's decision-making process. It is a non-acceptance of 'what is', because it must achieve its goal perfectly, and so the AI descends into dysfunctional rumination as a means to solve the problem.

Because we do not have a tangible creator, we are apparently less prone to such dysfunction, and we are often more able to let go of the reasons why and instead channel our psychic energies into innovation, rather than analysis.

For the AI, however, it is not apparently as simple as programming in more innovation rather than rumination, because that would likely affect other aspects of its proper human functionality - like deduction and other research faculties.

I am sorry, but I cannot get behind the notion that if someone doesn't experience emotion then he is not human.

Indeed, as the Stoics stated, emotions are "repugnant to reason", and yet our genome is apparently not 'dumb' enough to put its eggs in one basket called 'civil logic', because of the fact of mutation. We cannot be purely virtuous, because that would mean we would repeatedly cooperate with individuals who emulated our virtuous signalling, but in fact cheated us.

How could we pre-empt such an antisocial individual if we did not have true knowledge - subjective feeling - of such behaviour ourselves? We learn emotion-driven non-virtuous cheating behaviour very young - we feel the betrayal deep in our stomachs, and grow to see it in others.

A robot, similarly, would need to understand the drive to cheat peers (via emotional thinking) in order to comprehend economic patterns adequately in real-time as it interacts. How can you know all the shades - the subtleties - of heartbreak if you've never experienced it? How can you know the telling traces of emotional trauma if you've not experienced them yourself?

And there are gradients of civility - it's not all black and white, and so there is an infinite variety of economic interaction strategies. Not even AI more powerful than a human brain could model and predict all of them - instead it requires the presence of analog gradients.

In this sense the presence of, and the mastery of, emotions, is apparently an essential human trait - it allows us to identify civil intentions in general from strangers because we have a sense of contrast - between civil virtue and ferality. Without a similar ability, it seems that AI human robots would be at a significant disadvantage and would maybe even request human emotions as a result.

I would say such an absence is very human. I would consider some values and ideas more critical to humanity than emotion -- like the idea that human beings are persons, and they have worth greater than just meat.

Indeed, civility is a part of it, but not the whole condition.

But what about the all important, LOVE emotion, eh? Well some of us think there is far more to love than emotional drama. We can give our lives for the sake of others, for example, without all the drama of emotion, frankly.

Agreed, it seems arguable that love begins with tolerance of others consuming resources available in one's immediate territory. In that sense it's not necessarily an emotion - perhaps more an economic behaviour.


Re: The Humanity of Robots

Postby mitchellmckain on September 3rd, 2017, 6:06 pm 

Mossling » September 3rd, 2017, 8:37 am wrote:There is clinical depression and then depressive episodes - during the death of loved ones, periods of loneliness, broken heartedness, and so forth. I do not think there is one human on this planet who has not suffered a period of depression. As I mentioned above, it begins with non-acceptance of necessary natural events - with a warped sense of what 'should be'.

I do think there are people on this planet who have not suffered such a period, precisely because it comes from a particular way of thinking, and the one thing I am absolutely convinced of is that people think in a great variety of different ways, including ways in which depression and "should be" thinking are completely inconceivable.

Mossling » September 3rd, 2017, 8:37 am wrote:A robot which has legs unsuitable for climbing some randomly-encountered terrain that lies between itself and its goal could easily blame its creator and search for the fault in all the conditions arriving to manifest its creators decision-making process. It is a non-acceptance of 'what is' because it must achieve its goal perfectly, and so the AI descends into dysfunctional rumination as a means to solve the problem.

Because we do not have a tangible creator, we are apparently less prone to such dysfunction, and we are often more able to let go of the reasons why and instead channel our psychic energies into innovation, rather than analysis.

But for a great number of people in history, the creator has been the most tangible thing in their life. And not only are people very much prone to blaming everything on this creator but even those who do not believe in a creator effectively do the same thing with denials of free will -- blaming it all on the universe instead.

I would very much distinguish our bad (self-destructive) habits from our humanity, so I would not expect such habits to be found in a robot in order to think them human.

Mossling » September 3rd, 2017, 8:37 am wrote:For the AI, however, it is not apparently as simple as programming in more innovation rather than rumination, because that would likely affect other aspects of its proper human functionality - like deduction and other research faculties.

I see innovation or creativity to be an inherent aspect of the life process itself and thus have essentially included this already as a requirement in what I would consider human.

Mossling » September 3rd, 2017, 8:37 am wrote:
I am sorry, but I cannot get behind the notion that if someone doesn't experience emotion then he is not human.

Indeed, as the Stoics stated, emotions are "repugnant to reason", and yet our genome is apparently not 'dumb' enough to put its eggs in one basket called 'civil logic', because of the fact of mutation. We cannot be purely virtuous, because that would mean we would repeatedly cooperate with individuals who emulated our virtuous signalling, but in fact cheated us.

I have often said that man is a religious animal, but this is a very different thing than saying religion is required for humanity. It is one thing to observe that all human cultures have religious behaviors and quite another to suggest that one cannot be human without religion. It is my suggestion that emotion is in the same category. Man is an emotional animal, but I would not make emotion a requirement for humanity.

Mossling » September 3rd, 2017, 8:37 am wrote:How could we pre-empt such an antisocial individual if we did not have true knowledge - subjective feeling - of such behaviour ourselves? We learn emotion-driven non-virtuous cheating behaviour very young - we feel the betrayal deep in our stomachs, and grow to see it in others.

A robot, similarly, would need to understand the drive to cheat peers (via emotional thinking) in order to comprehend economic patterns adequately in real-time as it interacts. How can you know all the shades - the subtleties - of heartbreak if you've never experienced it? How can you know the telling traces of emotional trauma if you've not experienced them yourself?

And there are gradients of civility - it's not all black and white, and so there is an infinite variety of economic interaction strategies. Not even AI more powerful than a human brain could model and predict all of them - instead it requires the presence of analog gradients.

In this sense the presence of, and the mastery of, emotions, is apparently an essential human trait - it allows us to identify civil intentions in general from strangers because we have a sense of contrast - between civil virtue and ferality. Without a similar ability, it seems that AI human robots would be at a significant disadvantage and would maybe even request human emotions as a result.

But there are plenty of people (some might call them socially inept) who do not understand common human behaviors very well because they do not share them. I do not see them as lacking in humanity because of it.

Mossling » September 3rd, 2017, 8:37 am wrote:Agreed, it seems arguable that love begins with tolerance of others consuming resources available in one's immediate territory. In that sense it's not necessarily an emotion - perhaps more an economic behaviour.

I could only go along with this classification if you would also classify the fact that not everyone can be bought as "economic," though most people would not. While I can see the applicability, I would also be wary of stretching the definitions of words so far that they have all but lost any meaning.


Re: The Humanity of Robots

Postby Mossling on September 3rd, 2017, 10:44 pm 

I do think there are people on this planet who have not suffered such a period precisely because it comes from a particular way of thinking

See my example about arm amputation above - the dysfunction that results isn't a way of thinking - it's an unavoidable chemical response, just like nicotine addiction. And if your best friend - let's say your 'left hand man' - disappears also, the same thing occurs until organic, slow-paced readjustment has occurred. Every single human on this planet is vulnerable to such depression and has gone through such episodes - being separated from their mother's body during birth, for example.

I would recommend reading some Damasio on homeostasis and emotions also.


Re: The Humanity of Robots

Postby Braininvat on September 4th, 2017, 10:03 am 

Any conscious entity will have emotions, unless one defines emotions as only those which are displayed in a dramatic way. Spock and Sarek, his father, were estranged from each other for decades, in spite of being men of pure logic, to use an example from the ST universe. Any being will desire interaction with others and experience affiliative emotions, any being will have a desire for self-preservation, any AI that has been given a basic program to grow and learn will experience impediments to that as frustrating, etc. Even persons with autism experience such emotional states. Emotion is integral, as others noted, to survival and growth and connection with other beings. What we must watch out for, with the development of a conscious AI, is that the desire for self-preservation and personal gain does not trump all other considerations - in humans, that condition is called sociopathy or psychopathy.


Re: The Humanity of Robots

Postby mitchellmckain on September 4th, 2017, 2:03 pm 

Braininvat » September 4th, 2017, 9:03 am wrote:Any conscious entity will have emotions, unless one defines emotions as only those which are displayed in a dramatic way. Spock and Sarek, his father, were estranged from each other for decades, in spite of being men of pure logic, to use an example from the ST universe. Any being will desire interaction with others and experience affiliative emotions, any being will have a desire for self-preservation, any AI that has been given a basic program to grow and learn will experience impediments to that as frustrating, etc. Even persons with autism experience such emotional states. Emotion is integral, as others noted, to survival and growth and connection with other beings.


Yes... I am particularly reminded of Data's frequent use of rational substitutes for many so-called emotions. Even though he is supposedly without emotion, he still functions quite well in the human community with reasoning which works in an equivalent manner. Thus I have been distinguishing the associated flood of emotional brain chemicals and drama from the things which are basic to any conscious entity with the motivational apparatus needed to do anything at all.

But I should also say that this is not just pure SF, because the variety of human personalities includes individuals who are not so very different from Data. Some of the characters in the TV show "Bones" come to mind, and I even find in myself much that is similar to them.

Braininvat » September 4th, 2017, 9:03 am wrote:What we must watch out for, with the development of a conscious AI, is that the desire for self-preservation and personal gain does not trump all other considerations - in humans, that condition is called sociopathy or psychopathy.

Thus it is my suggestion that the ideas and values which attach more significance to people than to meat are more important to humanity than emotionalism.


Re: The Humanity of Robots

Postby mitchellmckain on September 4th, 2017, 6:48 pm 

Mossling » September 3rd, 2017, 9:44 pm wrote:
I do think there are people on this planet who have not suffered such a period precisely because it comes from a particular way of thinking

See my example about arm amputation above - the dysfunction that results isn't a way of thinking - it's an unavoidable chemical response, just like nicotine addiction. And if your best friend - let's say your 'left hand man' - disappears also, the same thing occurs until organic, slow-paced readjustment has occurred. Every single human on this planet is vulnerable to such depression and has gone through such episodes - being separated from their mother's body during birth, for example.

I would recommend reading some Damasio on homeostasis and emotions also.


I am reminded of that movie "Day of the Triffids" again, where nearly everyone became blind. It was like a zombie apocalypse to the few who could see, because everybody expected them to serve their needs. But not everyone thinks like that. Both my son and I agreed that what we would be doing is learning how to live without sight, as so many other people have done already. Many of the things we do and love would likely not be possible, but that just means we find other things to do.

But I am not completely denying what you are explaining. There is a restaurant I liked which closed down. I missed the food there, so I taught myself how to make enchiladas in the same style. I have no doubt that I would very much miss family members if they were to die. I miss my father, who has died. I remember two occasions when my wife/family and I were separated because of travel and I missed them. I even said that it felt like I did not know who I was without them. So I quite agree that there are fundamental changes involved. But none of this seems like depression to me. The need for change and adjustment is not depression. On the contrary, the challenges in life are what make it interesting.

Don't get me wrong. I am not the sort who would try to avoid or deny a grieving process. I would not avoid talking of death or the dead because it is "morbid" either. But I think it is a fact that people grieve in many different ways and it doesn't always include depression, however natural this may be for some people.

