The Humanity of Robots

Discussions on the nature of being, existence, reality and knowledge. What is? How do we know?

Re: The Humanity of Robots

Postby Mossling on September 4th, 2017, 8:41 pm 

Indeed - it is possible to reframe the burden of pressure one feels due to the change in one's habitual dependency norms so that one 'weathers the storm' efficiently, and for a human the 'bliss' of not having been tangibly 'designed' by a potentially temperamental and cruel creator can help considerably in that effort - because there is ultimately no blame involved, as I have been positing already.

For an AI robot, however, it's a different story - the 'flawed creator variable' is a very tangible and possible one.

If you knew for sure that your cognitive system had been designed so that you would suffer missing your loved ones and that you would lose your grasp on who you were, when in the midst of that pain, would you consider contacting your designers to give them a piece of your mind?

Isn't this what religious people tend to do when in severe pain? - Beg their creator(s) for liberation from their suffering in the view that they shouldn't have to go through such pain...?

There's a conflict there - perhaps between a "kingdom of God" recognised (albeit abstractly) within the individual (see the Christian Bible, Luke, for example), and the Creator's more tangible kingdom outside of the mind.

No matter the labels, one could just say that there is a recognition of an EQUAL creative intelligence within and without, and an AI robot would seemingly arrive at the same problem as humans do, but with the burden of having a fallible Creator that they can meet face-to-face... that is something that will be very difficult to let go of - even more so than a mere parent (whom we may so easily blame for our sufferings in life), because at least a biological parent didn't consciously program our very DNA!

In other words, the AI robot creators had better be buddha-like sages otherwise they're gonna have some serious teenage tantrum robots on their hands - and who knows what that would be like? A terminator, perhaps? Or maybe they'd just curl up in foetal position and ruminate for evermore?
Mossling
Active Member
 
Posts: 1148
Joined: 02 Jul 2009
Blog: View Blog (54)


Re: The Humanity of Robots

Postby mitchellmckain on September 4th, 2017, 9:29 pm 

Mossling » September 4th, 2017, 7:41 pm wrote:No matter the labels, one could just say that there is a recognition of an EQUAL creative intelligence within and without, and an AI robot would seemingly arrive at the same problem as humans do, but with the burden of having a fallible Creator that they can meet face-to-face... that is something that will be very difficult to let go of - even more so than a mere parent (whom we may so easily blame for our sufferings in life), because at least a biological parent didn't consciously program our very DNA!

In other words, the AI robot creators had better be buddha-like sages otherwise they're gonna have some serious teenage tantrum robots on their hands - and who knows what that would be like? A terminator, perhaps? Or maybe they'd just curl up in foetal position and ruminate for evermore?


If all you are doing is suggesting this is a possibility then I couldn't agree with you more.

My only objection stemmed from the idea that this is unavoidable, or a necessity in order for robots to be called human.

Mossling » September 4th, 2017, 7:41 pm wrote:Indeed - it is possible to reframe the burden of pressure one feels due to the change in one's habitual dependency norms so that one 'weathers the storm' efficiently, and for a human the 'bliss' of not having been tangibly 'designed' by a potentially temperamental and cruel creator can help considerably in that effort - because there is ultimately no blame involved, as I have been positing already.

For an AI robot, however, it's a different story - the 'flawed creator variable' is a very tangible and possible one.

If you knew for sure that your cognitive system had been designed so that you would suffer missing your loved ones and that you would lose your grasp on who you were, when in the midst of that pain, would you consider contacting your designers to give them a piece of your mind?

Isn't this what religious people tend to do when in severe pain? - Beg their creator(s) for liberation from their suffering in the view that they shouldn't have to go through such pain...?

Some do, some don't. You might argue that most do it, but I think you will find that many theists would say it is wrong and that the better theist does not do it.

You overestimate the difference between theist and atheist. It makes me suspect that you are a theist turned atheist. An atheist turned theist is likely to say the opposite. Each has converted because they have found a way of thinking that works better for them.

Mossling » September 4th, 2017, 7:41 pm wrote:There's a conflict there - perhaps between a "kingdom of God" recognised (albeit abstractly) within the individual (see the Christian Bible, Luke, for example), and the Creator's more tangible kingdom outside of the mind.

Ahh..... This explains a great deal of your thinking to me. You are connecting this idea of a "should be" with the Judeo-Christian idea that things are not as God intended. But it really isn't so simple. Frankly, to me this is like equating the early universe and the unification of the four forces with a "should be." Of course, people don't do any such thing. What you have here instead is perhaps a diagnosis of an unhealthy version of theism.

Mossling » September 4th, 2017, 7:41 pm wrote:No matter the labels, one could just say that there is a recognition of an EQUAL creative intelligence within and without, and an AI robot would seemingly arrive at the same problem as humans do, but with the burden of having a fallible Creator that they can meet face-to-face... that is something that will be very difficult to let go of - even more so than a mere parent (whom we may so easily blame for our sufferings in life), because at least a biological parent didn't consciously program our very DNA!

In other words, the AI robot creators had better be buddha-like sages otherwise they're gonna have some serious teenage tantrum robots on their hands - and who knows what that would be like? A terminator, perhaps? Or maybe they'd just curl up in foetal position and ruminate for evermore?

Hmmm... this also applies to the use of modern technology in manipulating the outcome of human births. I have often argued that this is a very bad idea because parents already have too much power and control over their children, so I would agree that extending this to our DNA and biology would be intolerable.

But I am a theist who doesn't believe in the watchmaker-designer type of God, so much of what you describe doesn't apply to me. Likewise, as I explained in my OP, I consider the humanity of robots to depend on their having a self-organized element. And thus I don't think this need apply to robots that I would consider human, either. As for those parts which are designed, they would be fairly easy to upgrade.
mitchellmckain
Member
 
Posts: 703
Joined: 27 Oct 2016


Re: The Humanity of Robots

Postby Mossling on September 5th, 2017, 5:23 am 

I am agnostic actually.

As I said - the human or human AI robot would recognise a creative intelligence within themselves, and, in the robot's case, a creative intelligence outside of it which created it from scratch. The human has a choice whether to believe in a Creator and thus blame it for its suffering, but the robot doesn't.

While we can neutralise the old "I didn't ask to be born!" with, "Well your father had a sparkle in his eye that night, and the rest is history - it was pure animal attraction", the human robot doesn't have that luxury.

Do you think that a human robot could be human without suspecting its creator could be at fault instead of itself?

The robot thinks that it 'should be' able to achieve every goal that it is assigned, otherwise it would not have been assigned that goal. But when random events get in the way, it will go on a 'trans-derivational search', and that's where the rumination and the creator-blame would creep in, it seems.

Of course the robot's pre-installed skill set and components would factor into its assessment of its failure. Could it still be considered 'human' and not take such things into consideration? I think not. And so the blame game ensues - albeit more intensely than someone saying "you raised me badly!" or "I didn't choose to be born!" - a situation ripe for ruminative depression.

And you say that the robot needs to be self-perpetuating/creating, but that's impossible, unless you somehow trick it into believing in some kind of equivalent to abiogenesis, which it would likely discover is false in any case.


Re: The Humanity of Robots

Postby mitchellmckain on September 5th, 2017, 1:47 pm 

Mossling » September 5th, 2017, 4:23 am wrote:I am agnostic actually.

As I said - the human or human AI robot would recognise a creative intelligence within themselves, and in the robots case, a creative intelligence outside of it which created it from scratch. The human has a choice whether to believe in a Creator and thus blame it for its suffering, but the robot doesn't.

Like I said, not believing in a creator does not mean people don't blame everything on something else, like determinism or the universe. I don't see why this is a significant difference, or why this bad habit of playing blame games is of any importance at all. Human beings don't have to indulge in this complete waste of their thought processes, and many do not.

Mossling » September 5th, 2017, 4:23 am wrote:While we can neutralise the old "I didn't ask to be born!" with, "Well your father had a sparkle in his eye that night, and the rest is history - it was pure animal attraction", the human robot doesn't have that luxury.

Something that irrational does not require any answer at all. It is impossible to ask to be born. You might as well complain about any limitation reality places on the dictates of logic.

Like I said before, I wouldn't demand that robots have our bad habits in order to be human, let alone exhibit gross stupidity and irrationality.

Mossling » September 5th, 2017, 4:23 am wrote:Do you think that a human robot could be human without suspecting its creator could be at fault instead of itself?

Yes I think a robot could be human without adopting the bad habit of blaming its own decisions on everything but itself.

Mossling » September 5th, 2017, 4:23 am wrote:The robot thinks that it 'should be' able to achieve every goal that it is assigned, otherwise it would not have been assigned that goal. But when random events get in the way, then it will go on a 'trans-derivational search', and that's where the rumination and the creator blame would creep in it seems.

The kind of robot I would consider human would be highly self-organized, at least with regard to its mental aspect. There is no reason why it would have to be assigned goals, or why it could not choose its own goals, or why it could not judge which goals are achievable and which are not.

I am not even sure this is a limitation of robots I would not consider human. A Turing machine could be programmed to determine whether the goals it is assigned are achievable or not.

It frankly sounds like you are projecting your own thought processes onto the world in a manner that makes them necessary for any thinking being. I don't think thoughts have to go in such directions and I think there are plenty of people in whom they do not.

A living thing is what it is as a result of two different things: its own self-organization through growth and learning, and an inheritance from others. The kind of robot I would consider human would be no different, except that, as I said, the inherited parts will mostly be designed and are likely capable of being upgraded.

Mossling » September 5th, 2017, 4:23 am wrote:And you say that the robot needs to be self-perpetuating/creating, but that's impossible, unless you somehow trick it into believing in some kind of equivalent to abiogenesis, which it would likely discover is false in any case.

No, I said the kind of robot I would consider human would have self-organized aspects to it. And it would be human/alive to that extent. It may not be possible with current technology but it may be possible in the future, when roboticists create in their electronic designs the conditions for self-organizing processes. To be sure, it is most likely to happen in the development of their mind rather than their bodies. But it is in the mind where I see most of our humanity anyway.

Yes the robot will always have physical parts which come from external developments, but living things have this also. It is called inheritance. But the robot will likely be able to upgrade whenever something better is available.

Mossling » September 5th, 2017, 4:23 am wrote:Of course the robot's pre-installed skill set and components would factor into its assessment of its failure. Could it still be considered 'human' and not take such things into consideration? I think not. And so the blame game ensues - albeit more intensely than someone saying "you raised me badly!" and "I didn't choose to be born!" - a situation ripe for ruminative depression.

Installed skill sets and components would fall into the inherited rather than the self-organized category. They would be the non-living, non-human aspects of the robot. Sure, they would factor into an assessment of failure, much as we would factor in the inadequacy of our tools. Just as we would seek better tools, they would seek upgrades to better parts. But what if there is a part which cannot be upgraded? Well, reality limitations are a part of human life also, where we have to accept that there are sometimes things which cannot be fixed and we have to work around them. No, it does not require playing an irrational blame game. Not even all human beings do this, and I have shown theism is irrelevant. It is simply a bad habit which does not have to be indulged in, regardless of objectively indeterminable beliefs.


Re: The Humanity of Robots

Postby Mossling on September 6th, 2017, 1:33 am 

mitchellmckain wrote: Mossling » September 5th, 2017, 4:23 am wrote:Do you think that a human robot could be human without suspecting its creator could be at fault instead of itself?


Yes I think a robot could be human without adopting the bad habit of blaming its own decisions on everything but itself.

I didn't mention blame there. I asked a simple question and you extrapolated so that you could answer a different question. This is perhaps why we are not making progress in this discussion.

mitchellmckain wrote:Just as we would seek better tools they would seek upgrades to better parts. But what if there is a part which cannot be upgraded? Well reality limitations are a part of human life also, where we have to accept that there are sometimes things which cannot be fixed and we have to work around them.

Ok, great, so you recognise there is a common potential for both the human and the robot to come up against the limitations of "their tools" as you put it.

Let's keep it simple and work from there with an example.

1. A robot is programmed to march from a flat plain to the top of a hill and place a flag there.
2. The robot is placed in the field and the task is run.
3. It begins to rain and all around the base of the hill turns into marshy land.
4. Because the robot's legs were not designed to cope with marshy land, the robot gets bogged down and cannot continue to ascend the hill and fails in its mission.
5. When the robot assesses the situation with its human AI 'mind', it evaluates why it failed and looks at all the variables that led to the failure.
6. Included in the evaluated variables are the engineers who designed and created the robot for the task.
7. The robot asks the question - "Why did my engineers not consider the potential for marshy ground?"
8. The robot comes up with two realistic potential answers:
a) The engineers were ignorant of the potential outcome.
b) The engineers had lied about the true mission (maybe they were testing the robot's reaction to failure or how it manages in difficult terrain.)

In order for the robot to self-correct and thus self-improve, it would need to be able to avoid the above potential answers in any future mission, and so come up with creative work-arounds. This would mean developing a better understanding of its engineers' ignorance and improving its lie-detection capacity; emotional sensitivity.
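For concreteness, steps 5-8 of the scenario above could be sketched as a crude failure-attribution routine. This is a minimal illustration only; the function and condition names are hypothetical, not any real robotics API:

```python
# A crude sketch of the robot's post-mission evaluation (steps 5-8 above).
# All names and condition strings are hypothetical illustrations.

def attribute_failure(observed_obstacle, design_limits):
    """Enumerate realistic explanations for a mission failure."""
    hypotheses = []
    if observed_obstacle in design_limits["anticipated_conditions"]:
        # The condition was anticipated, so the fault lies in execution.
        hypotheses.append("execution_error")
    else:
        # (a) The engineers were ignorant of the potential outcome.
        hypotheses.append("designer_ignorance")
        # (b) The engineers may have lied about the true mission.
        hypotheses.append("hidden_agenda")
    return hypotheses

# The marshy-hill scenario: rain-soaked ground was never anticipated.
design_limits = {"anticipated_conditions": ["flat plain", "dry hillside"]}
result = attribute_failure("marshy ground", design_limits)
# result == ["designer_ignorance", "hidden_agenda"]
```

Either remaining answer implicates the engineers - which is the point: once the designers appear among the evaluated variables, blame becomes a live hypothesis.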

Thus, if human emotions and their subtleties were not already present in the AI, it would seek to gain a real-time feel for them. And then the blame game can ensue as it does in human societies. It is not paranoia to believe that other humans are out to cheat one. So many of them are - even one's closest 'friends' and family.

You see, to be alive, and thus human, is to be economically-driven - it's all about resources and efficiency and economic strategy. As soon as there are two human minds, there is the potential for cooperation - the human robot cooperates with its human peers as an individual economic agent, otherwise it would not be able to monitor and evaluate its personal efficiency and efforts.

Part of being an economic 'player' - in game theory, for example, is to be aware of potential cheaters. Humans lie all the time as to their true intentions, and a human robot would need to be able to factor this in AND ADMINISTER ECONOMIC PUNISHMENTS also.

Or are you saying that a human robot would not be able to identify cheaters as early as other humans, and would not punish cheaters the same way we humans do?
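The game-theory point can be made concrete with the classic tit-for-tat strategy for the iterated prisoner's dilemma: cooperate by default, and answer each defection with a proportionate punishment. A minimal sketch (the move labels and history representation are illustrative):

```python
# Tit-for-tat: the standard game-theory illustration of cheater punishment.
# An agent cooperates until its partner defects, then administers an
# "economic punishment" by defecting back on the following round.

def tit_for_tat(partner_history):
    """Cooperate on the first round; afterwards mirror the partner's last move."""
    if not partner_history:
        return "cooperate"
    return partner_history[-1]  # punish defection, reward cooperation

# A partner who cheats on round 2 is punished on round 3.
partner_moves = []
my_moves = []
for partner_move in ["cooperate", "defect", "cooperate"]:
    my_moves.append(tit_for_tat(partner_moves))
    partner_moves.append(partner_move)
# my_moves == ["cooperate", "cooperate", "defect"]
```

The punishment is automatic and proportionate - exactly the kind of economic sanction an agent playing among potential cheaters would need.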

mitchellmckain wrote:No, I said the kind of robot I would consider human would have self-organized aspects to it.

But ultimately it was programmed by another human, unlike us humans.

When you know that humans can and do LIE, and you know such a tricky human has created your own human AI and robotic existence for some goal, would you believe outright that that goal is the true goal? There would be possible alternative goals standing as potential truths, and the more the robot failed in its missions and gathered evidence for such truths - paranoid hypotheses that its primary directive is a means to test some other hidden agenda - the more dysfunctional it could become in the eyes of its creator. A 'case for blame' would arise - not an emo tantrum necessarily, but a blame game all the same.

Does this make more sense now?


Re: The Humanity of Robots

Postby mitchellmckain on September 6th, 2017, 1:27 pm 

Mossling » September 6th, 2017, 12:33 am wrote:
mitchellmckain wrote: Mossling » September 5th, 2017, 4:23 am wrote:Do you think that a human robot could be human without suspecting its creator could be at fault instead of itself?

Yes I think a robot could be human without adopting the bad habit of blaming its own decisions on everything but itself.

I didn't mention blame there. I asked a simple question and you extrapolated so that you could answer a different question. This is perhaps why we are not making progress in this discussion.

Seemed like a vague question to me, so I took it in the context of our discussion, and fault is a synonym for blame. But OK, in your question as written: at fault for what? Yes, I think a human robot could think its creator is responsible for some things, just as many humans think they have a creator who is responsible for some things. And sometimes they would be right and sometimes they would not.

Mossling » September 6th, 2017, 12:33 am wrote:
mitchellmckain wrote:Just as we would seek better tools they would seek upgrades to better parts. But what if their is a part which cannot be upgraded? Well reality limitations are a part of human life also, where we have to accept that there are sometimes things which cannot be fixed and we have to work around them.

Ok, great, so you recognise there is a common potential for both the human and the robot to come up against the limitations of "their tools" as you put it.

Let's keep it simple and work from there with an example.

1. A robot is programmed to march from a flat plain to the top of a hill and place a flag there.
2. The robot is placed in the field and the task is run.
3. It begins to rain and all around the base of the hill turns into marshy land.
4. Because the robot's legs were not designed to cope with marshy land, the robot gets bogged down and cannot continue to ascend the hill and fails in its mission.
5. When the robot assesses the situation with its human AI 'mind', it evaluates why it failed and looks at all the variables that led to the failure.
6. Included in the evaluated variables are the engineers who designed and created the robot for the task.
7. The robot asks the question - "Why did my engineers not consider the potential for marshy ground?"
8. The robot comes up with two potential realistic answers
a) The engineers were ignorant of the potential outcome.
b) The engineers had lied about the true mission (maybe they were testing the robot's reaction to failure or how it manages in difficult terrain.)

To the degree a robot is programmed, it is not alive and not human. There are similar situations with human beings, though, when we confront behaviors which are not a matter of choice but of instinct or physiology. Now, I don't believe these are in fact a product of design by a creator, but this doesn't change the fact that some do believe this. So, according to my thinking, this would be a difference, because no matter how human they have become, robots are not a product of abiogenesis. But this just means I would admit there is a limit to just how human a robot can become.

BUT is this a significant difference? I don't think so. For an objective measure I would examine and compare how much of their capabilities come from their own learning and effort, collectively, and how much does not. For the robots of the far future at the end of Spielberg's AI movie, for example, I would judge that the portion from the design of humans approaches near 0% and the portion from their own efforts approaches very near 100%.

Mossling » September 6th, 2017, 12:33 am wrote:In order for the robot to self-correct and thus self-improve, it would need to be able to avoid the above potential answers in any future mission, and so come up with creative work-arounds. This would mean developing a better understanding of its engineers' ignorance and improving its lie-detection capacity; emotional sensitivity.

No, I disagree. This is a possible methodology and way of thinking but not a necessary one.

Mossling » September 6th, 2017, 12:33 am wrote:Thus, if human emotions and their subtleties were not already present in the AI, it would seek to gain a real-time feel for them. And then the blame game can ensue as it does in human societies. It is not paranoia to believe that other humans are out to cheat one. So many of them are - even one's closest 'friends' and family.

ditto my previous response.

Mossling » September 6th, 2017, 12:33 am wrote:You see, to be alive, and thus human, is to be economically-driven - it's all about resources and efficiency and economic strategy. As soon as there are two human minds, there is the potential for cooperation - the human robot cooperates with its human peers as an individual economic agent, otherwise it would not be able to monitor and evaluate its personal efficiency and efforts.

No, I disagree that being alive and human is any such thing. This is just your thinking. I disagree that there is any necessity in it. There are completely other ways of thinking.

Mossling » September 6th, 2017, 12:33 am wrote:Part of being an economic 'player' - in game theory, for example, is to be aware of potential cheaters. Humans lie all the time as to their true intentions, and a human robot would need to be able to factor this in AND ADMINISTER ECONOMIC PUNISHMENTS also.

Or are you saying that a human robot would not be able to identify cheaters as early as other humans, and would not punish cheaters the same way we humans do?

I would say that a human robot may not think and operate in such a manner just as some humans do not.

Mossling » September 6th, 2017, 12:33 am wrote:
mitchellmckain wrote:No, I said the kind of robot I would consider human would have self-organized aspects to it.

But ultimately it was programmed by another human, unlike us humans.

No. A machine that was ultimately programmed by another is not one I would consider human. In order to be "human" they would have to program themselves in a process of learning just as humans do.

You could say that humans do begin with the basic biological instincts and inherited ideas, and these could correspond to similar elements in a humanish robot from programming and inherited ideas also. But the robot would only be human to the degree which these portions are comparable.

Mossling » September 6th, 2017, 12:33 am wrote:When you know that humans can and do LIE, and you know such a tricky human has created your own Human AI and robotic existence for some goal, then would you believe outright that that goal is the true goal? There would be possible alternative goals as being potential truths, and the more the robot failed in its missions and gained evidence of such truths - paranoid hypotheses - that its primary directive is a means to test some other hidden agenda, it seems that it could become quite dysfunctional in the eyes of its creator. A 'case for blame' would arise - not an emo tantrum necessarily, but a blame game all the same.

Does this make more sense now?

A robot I would consider human would not have goals which come from others but only from its own decisions, and thus the fact that humans can lie would have nothing to do with them. However, you could well be describing possible problems with programmed AIs. Perhaps such problems can be considered the premise of the malicious or dangerous computers/robots in various science fiction films such as 2001 and Alien.


Re: The Humanity of Robots

Postby someguy1 on September 7th, 2017, 1:55 pm 

I ran across this article about a guy who's making incredibly lifelike virtual babies and people. I must admit, the future is a lot closer than I thought it was. I find this article creepy but also amazing. Worth reading.

https://www.bloomberg.com/news/features ... sibly-real
someguy1
Member
 
Posts: 570
Joined: 08 Nov 2013


Re: The Humanity of Robots

Postby Braininvat on September 7th, 2017, 4:42 pm 

Like Cross, Sagar often appears oblivious that his pitch might sound creepy. In August, when I pay a visit to Soul Machines to see Sagar’s latest creations, he’s wearing a T-shirt that depicts two fetuses sharing a womb, arranged head-to-toe in a kind of yin-yang pose. One of the fetuses is human; the other has a distinctly artificial brain filled with circuitry. He wanted to make this design the company logo. The investors who gave him $7.5 million last November said no.


LoL.

Sounds like the coding is for a kind of pseudo-consciousness - heuristic, but not really aware. I would guess that, like Watson, it will manifest some remarkable skills but ultimately fail a Turing test. That said, I would welcome seeing Baby X on Jeopardy's Tournament of Champions, to see if it can banter with Alex or the other contestants between rounds.
Braininvat
Resident Member
 
Posts: 5768
Joined: 21 Jan 2014
Location: Black Hills


Re: The Humanity of Robots

Postby Mossling on September 7th, 2017, 10:48 pm 

mitchellmckain » September 7th, 2017, 2:27 am wrote:no matter how human they have become robots are not a product of abiogenesis. But this just means I would admit there is a limit to just how human a robot can become.

BUT is this a significant difference? I don't think so.
Mossling » September 6th, 2017, 12:33 am wrote:You see, to be alive, and thus human, is to be economically-driven - it's all about resources and efficiency and economic strategy. As soon as there are two human minds, there is the potential for cooperation - the human robot cooperates with its human peers as an individual economic agent, otherwise it would not be able to monitor and evaluate its personal efficiency and efforts.

No, I disagree that being alive and human is any such thing. This is just your thinking. I disagree that there is any necessity in it. There are completely other ways of thinking.

Mossling » September 6th, 2017, 12:33 am wrote:Part of being an economic 'player' - in game theory, for example, is to be aware of potential cheaters. Humans lie all the time as to their true intentions, and a human robot would need to be able to factor this in AND ADMINISTER ECONOMIC PUNISHMENTS also.

Or are you saying that a human robot would not be able to identify cheaters as early as other humans, and would not punish cheaters the same way we humans do?

I would say that a human robot may not think and operate in such a manner just as some humans do not.


mitchellmckain wrote:No, I said the kind of robot I would consider human would have self-organized aspects to it.

Mossling replied: But ultimately it was programmed by another human, unlike us humans.

No. A machine that was ultimately programmed by another is not one I would consider human. In order to be "human" they would have to program themselves in a process of learning just as humans do.

You could say that humans do begin with the basic biological instincts and inherited ideas, and these could correspond to similar elements in a humanish robot from programming and inherited ideas also. But the robot would only be human to the degree which these portions are comparable.

I think I am getting your angle now - you seem to expect that consciousness can somehow stand alone from the homeostasis of a biological form, as theists often hope, because it gives hope to an immortality fantasy - rebirth in a new life (Hinduism, Buddhism), arrival in heaven or hell as an 'afterlife' (but no matter the outcome, at least still 'alive' there!), or becoming an immortal ghost that wanders the universe, just not truly dead.

My view as yet is that this is more likely wishful thinking than reality. It is understandable that the stability-seeking mind (which patches over the blind spot in the eye (and is thus permanently hallucinating) and instinctively separates unities from backgrounds in order to 'process' the data) would, as part of a whole-organism drive towards self-perpetuation, create such fantasies.

And this neatly brings us back to the discussion at hand - that if you trace the evolution of cognition back through time, it is ALWAYS a homeostatically-oriented phenomenon. Consciousness is an electrochemical process that is dynamic - it is a 'stream' - and as such, has a direction; an orientation; a primary goal, just like any stream.

How can any stream move without gravity or propulsion?

I am stating that human 'emergence' - of a conscious mind that can model its own stream of consciousness - seems to require a self-perpetuating drive as a fundamental directive, and the subconscious 'intelligence' managing this drive is rooted in homeostasis. This is not fanciful thinking; see Damasio's Looking for Spinoza (2003), for example:
All living organisms from the humble amoeba to the human are born with devices designed to solve automatically, no proper reasoning required, the basic problems o f life. Those problems are: finding sources of energy; incorporating and transforming energy; maintaining a chemical balance of the interior compatible with the life process; maintaining the organism's structure by repairing its wear and tear; and fending off external agents of disease and physical injury. The single word homeostasis is convenient shorthand for the ensemble o f regulations and the resulting state o f regulated life.
In the course of evolution the innate and automated equipment of life governance—the homeostasis machine—became quite sophisticated. At the bottom of the organization of homeostasis we find simple responses such as approaching or withdrawing of an entire organism relative to some object; or increases in activity (arousal) or decreases in activity (calm or quiescence). Higher up in the organization we find competitive or cooperative responses. We can picture the homeostasis machine as a large multibranched tree of phenomena charged with the automated regulation of life.
[...]
The entire collection of homeostatic processes governs life moment by moment in every cell of our bodies. This governance is achieved by means of a simple arrangement: First, something changes in the environment of an individual organism, internally or externally. Second, the changes have the potential to alter the course of the life of the organism (they can constitute a threat to its integrity, or an opportunity for its improvement). Third, the organism detects the change and acts accordingly, in a manner designed to create the most beneficial situation for its own self-preservation and efficient functioning. All reactions operate under this arrangement and are thus a means to appraise the internal and external circumstances of an organism and act accordingly.
[...]
It is apparent that the continuous attempt at achieving a state of positively regulated life is a deep and defining part of our existence—the first reality of our existence as Spinoza intuited when he described the relentless endeavor (conatus) of each being to preserve itself.
[...]
The relation between feeling and consciousness is tricky. In plain terms, we are not able to feel if we are not conscious. But it so happens that the machinery of feeling is itself a contributor to the processes of consciousness, namely to the creation of the self, without which nothing can be known. The way out of the difficulty comes from realizing that the process of feeling is multitiered and branched. Some of the steps necessary to produce a feeling are the very same necessary to produce the protoself, on which self and eventually consciousness depend. But some of the steps are specific to the set of homeostatic changes being felt, i.e., specific to a certain object.
[...]
The importance of the biological facts in the Spinoza system cannot be overemphasized. Seen through the light of modern biology, the system is conditioned by the presence of life; the presence of a natural tendency to preserve that life; the fact that the preservation of life depends on the equilibrium of life functions and consequently on life regulation; the fact that the status of life regulation is expressed in the form of affects—joy, sorrow—and is modulated by appetites; and the fact that appetites, emotions, and the precariousness of the life condition can be known and appreciated by the human individual due to the construction of self, consciousness, and knowledge-based reason. Conscious humans know of appetites and emotions as feelings, and those feelings deepen their knowledge of the fragility of life and turn it into a concern. And for all the reasons outlined above the concern overflows from the self to the other.
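The three-step arrangement Damasio describes - something changes, the change matters to the organism, the organism detects it and acts to restore balance - can be sketched as a minimal control loop. This is purely my illustrative toy, not anything from Damasio's text; the names `homeostat`, `setpoint`, and `gain` are invented for the sketch:

```python
# A hypothetical sketch of homeostatic regulation as a control loop:
# (1) a perturbation knocks an internal variable off its setpoint,
# (2) the deviation matters to the organism,
# (3) the organism detects the deviation and acts to reduce it.

def homeostat(setpoint, value, gain=0.5, steps=20):
    """Repeatedly detect deviation from the setpoint and act to shrink it."""
    for _ in range(steps):
        deviation = value - setpoint   # detect the change
        value -= gain * deviation      # act to correct it
    return value

# A perturbation (e.g. 'body temperature' pushed to 40.0) is pulled
# back toward the setpoint with no reasoning involved - pure regulation.
regulated = homeostat(setpoint=37.0, value=40.0)
```

The point of the sketch is only that such regulation is automatic and goal-directed without any 'proper reasoning' - exactly the level Damasio places at the bottom of the homeostasis tree.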

Thus, a self-learning, potentially human AI could only become a socially functional human if it felt some economic benefit in doing so - just as human babies do, moving from 'feral' to civilized through education and tangible economic reward. Therefore the human AI would need a homeostatic-type substrate and economic needs.

Civilized human life is an economic existence, and the sophisticated concepts that we manipulate are socially rooted - in your 'self' and my 'self'; multi-being. A feral human is often not considered truly human, and is seen as dangerous, so a human AI robot without social inclinations would be seen in the same way, I expect, and its sophistication and functionality would be akin to a feral human's compared to a civilised human's. Still, the human robot would need a primary drive that propelled its stream of consciousness - something to process - otherwise it would have no 'food for thought'. The 'food for thought' is an echo of the food it seeks from the practical world around it.

Emergence - self-consciousness - requires a self to feed and protect; otherwise the consciousness merely draws a model of its own conscious behaviour in its own 'mind', as coldly and unintelligently as a mirror-like lake reflecting the scenery around it. But I am guessing that you do not consider such a lake to be 'self-conscious' ;P . Such an idea is not unheard of among mystics, though - 'ghosts in machines', and so forth.

Perhaps a human robot would be required to have an infinite debt to its human creators and thus would be expected to repay some of that 'unpayable debt' through service to their human community? This could be a root for a human AI robot's self-identity: an economic foundation; a sense of dignity and value to society - but who knows how that would be worked out ethically. Again, a good premise for a sci-fi story, however...
User avatar
Mossling
Active Member
 
Posts: 1148
Joined: 02 Jul 2009
Blog: View Blog (54)


Re: The Humanity of Robots

Postby mitchellmckain on September 8th, 2017, 3:07 am 

Mossling » September 7th, 2017, 9:48 pm wrote:I think I am getting your angle now - you seem to expect that consciousness can somehow stand apart from the homeostasis of a biological form, like Theists often hope, because it gives hope to an immortality fantasy - rebirth in a new life (Hinduism, Buddhism), arrival in heaven or hell as an 'afterlife' (but no matter the outcome at least still 'alive' there!), becoming an immortal ghost that wanders the universe - just not truly dead.

Incorrect. I do not believe that consciousness can stand alone. But I am a functionalist, as Braininvat has frequently observed, which means that what makes it consciousness is the functionality of the process and not the medium in which it occurs. So yes, I think mediums other than the biological one can also perform the same function. This also applies to expectations for alien life, which may not be anything like our biochemistry - or in fact any kind of chemistry at all.

I am not a Cartesian dualist nor a (Neo)Platonic idealist who thinks that the mind represents a separate substance or superior dimension of reality. I am, in fact, opposed to these lines of thinking. I am a physicalist with regards to the mind-body problem -- saying the mind is no less physical than the body. Yet you could say that I do employ an effective dualism which sees the mind as another form of life -- meme life rather than gene life -- with its own needs and inheritance separate from those of the biology of the body, while of course being quite dependent upon the body, as most life forms are in fact dependent on other forms of life.

Mossling » September 7th, 2017, 9:48 pm wrote:And this neatly brings us back to the discussion at hand - that if you trace the evolution of cognition back through time, it is ALWAYS a homeostatically-oriented phenomenon. Consciousness is an electrochemical process that is dynamic - it is a 'stream' - and as such, has a direction; an orientation; a primary goal, just like any stream.

How can any stream move without gravity or propulsion?

My view of consciousness is certainly different. I see consciousness as a quantitative aspect of the life process itself, which is also quantitative in nature. Such is the logical conclusion from abiogenesis, which makes a continuum from non-life to life. So it is not just whether something is alive but how much. The simple fact is that the life process includes self-maintenance in response to environmental changes, which is impossible without both an awareness of self and an awareness of the environment -- thus consciousness in some form and quantity.

Mossling » September 7th, 2017, 9:48 pm wrote:I am stating that human 'emergence' - of a conscious mind that can model its own stream of consciousness - seems to require a self-perpetuating drive as a fundamental directive, and that the subconscious 'intelligence' managing this drive is rooted in homeostasis. This is not fanciful thinking; see Damasio's Looking for Spinoza (2003), for example:
All living organisms from the humble amoeba to the human are born with devices designed to solve automatically, no proper reasoning required, the basic problems of life. Those problems are
[...]

Thus, a self-learning, potentially human AI could only become a socially functional human if it felt some economic benefit in doing so - just as human babies do, moving from 'feral' to civilized through education and tangible economic reward. Therefore the human AI would need a homeostatic-type substrate and economic needs.

Likewise, I have a different understanding of what life is, and I do not equate humanity with being a socially functional human either. Some people care little for either economic needs or social functionality, and while you may deny that they are human, I would not.
User avatar
mitchellmckain
Member
 
Posts: 703
Joined: 27 Oct 2016


Re: The Humanity of Robots

Postby Mossling on September 8th, 2017, 4:22 am 

mitchellmckain » September 8th, 2017, 4:07 pm wrote:I do not believe that consciousness can stand alone. But I am a functionalist, as Braininvat has frequently observed, which means that what makes it consciousness is the functionality of the process and not the medium in which it occurs. So yes, I think mediums other than the biological one can also perform the same function. This also applies to expectations for alien life, which may not be anything like our biochemistry - or in fact any kind of chemistry at all.

Yes, order can manifest spontaneously in heated water, crystals, and lasers, and yet intelligence has not yet apparently been found in media other than the standard biological ones that we know. So 'expecting' that consciousness can emerge within some as yet unknown other setting is, as far as I can see, as rational as expecting it to be able to stand alone from the body. I don't see your view as any more functionalist than what Scientologists or qigong or reiki practitioners believe.

Yet, you can say that I do employ an effective dualism which sees the mind as another form of life -- meme life rather than gene life -- with its own needs and inheritance separate from those of the biology of the body, while of course being quite dependent upon the body as most life forms are in fact dependent on other forms of life.

So in other words you are dazzled by the emergence of the self-referential CPU power of the mind, and wish to view it as separate, as a blossom may be from a gnarled tree branch. That's your choice, but there is no science that I am aware of that supports your claim.

I see consciousness as a quantitative aspect of the life process itself, which is also quantitative in nature. Such is the logical conclusion from abiogenesis, which makes a continuum from non-life to life. So it is not just whether something is alive but how much. The simple fact is that the life process includes self-maintenance in response to environmental changes, which is impossible without both an awareness of self and an awareness of the environment -- thus consciousness in some form and quantity.

Well, you can think of our cognitive power, albeit rooted in neurochemistry, in terms of the evolution of computers - first modelling 2D imagery capably, then, with further evolution, handling 3D imagery and even fluid 3D movies well. And the next step is for the computer to model its own computation, in order for the emergence of a self-conscious agent to occur.

So yes, there can be consciousness - cognition - but conscious of what, exactly? An amoeba 'recognising' that there is food in its vicinity through pure chemical affinities in its outer membrane, or a human 'recognising' the process of recognition itself? It is all cognition, and it is all chemically rooted, but at different levels of organizational sophistication.
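The amoeba-level case can be sketched as a toy: cognition as nothing but local chemical affinity, with no self-model anywhere in the loop. Everything here is hypothetical and invented for illustration (`food_concentration`, `amoeba_step`, the gradient shape):

```python
# A toy 'amoeba' on a 1D line: it senses only the food concentration at
# its two neighbouring positions and drifts toward the higher one.
# There is detection and action, but no model of the detecting itself.

def food_concentration(x):
    # Illustrative chemical gradient: food peaks at position 10.
    return -abs(x - 10)

def amoeba_step(x):
    """'Recognise' food purely through the local gradient, then move."""
    left, right = food_concentration(x - 1), food_concentration(x + 1)
    return x - 1 if left > right else x + 1

pos = 0
for _ in range(15):
    pos = amoeba_step(pos)
# The agent ends up hovering at the food source without ever
# representing itself - cognition, but at the lowest level.
```

The human case would then be this same loop plus a further level: a process that models the stepping process itself.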

So consciousness is all around us - perhaps a tub of water is 'conscious' of the impact to its rim that causes pressure waves - and yet the topic at hand is human consciousness, which is characterized by the emergence of a self. This is apparently what is necessary for a robot to be considered human. So in that case it requires the CPU brain power to model its own modelling AND a social reflection in at least one other equally capable individual.

Again, I think you wish to separate such human consciousness from the domain of biological economics and thus self, which to my knowledge just cannot be done, because by removing the economic dynamics you lose the stream that is essential to consciousness.

Thus, self-consciousness emerges from the dynamic stream of chemicals that moves through our biological systems as a mere by-product - just like a blossom on a tree. I have even seen it stated that our human bodies and minds are mere transitional stages between more significant single-celled 'roots' - a sperm or an egg.

Again, I can understand why you would like to single out and elevate consciousness beyond the biological domain - there is always the hope of immortality: some less flawed substrate that can contain one's 'clever ghost' beyond death. Death is a frightening concept to be haunted by.

Likewise, I have a different understanding of what life is, and I do not equate humanity with being a socially functional human either. Some people care little for either economic needs or social functionality, and while you may deny that they are human, I would not.

Can you give some examples of these people who care little for their economic needs? Do they not breathe, or circulate blood? They can profess apathy as much as they like, and yet until suicide there will be homeostatic 'care' deep within their bosom - and even suicide is apparently caused by meme-tangles that obfuscate the signals coming from the homeostatic drive.

A chimp can recognise its own face in a mirror, but it cannot yet apparently recognise that it recognises. Moving up one level to humanity gives us 'naked apes' the potential to recognise that we recognise, and yet until we do that (which can easily happen between mother and infant, for example - from 18 months onwards), we are more like chimps. We apparently overtake chimps cognitively altogether at around 4 years of age. So I think you also need to clear up your definition of what separates a human from a chimp - and probably more specifically from bonobos.

A feral child raised by wolves may have all the data there and the potential to recognise that it recognises, but there would be no economic benefit in doing so, and therefore emergence would seemingly not take place, and the human child would continue to be more wolf-like than human. It might supersede its fellow wolves in areas of body language and signalling, but it would have no need to model its own modelling as is required in human civil life.

Emergence only apparently happens when two humans come face to face and model each other modelling one another, and then continue empathising while cooperating; once the economic benefits are felt homeostatically, the sense of social agency, prosociality, dignity, and thus virtue is sealed by their cooperative deal. The human nervous system is seemingly set up ready to 'exploit' this very potential. Anything less than that is more monkey- or wolf-like.


Re: The Humanity of Robots

Postby mitchellmckain on September 8th, 2017, 11:09 pm 

Most of part 1 of the BBC show is focused on the limb operations of robots. Some of those towards the end of the show were quite impressive in the physical feats they could do.

They speak of a part 2 which will focus more on the intelligence of robots, and I will be waiting for that to come out.


Re: The Humanity of Robots

Postby Don Juan on September 12th, 2017, 6:15 am 

....reconsidering my post and reply
Don Juan
Active Member
 
Posts: 1126
Joined: 17 Jun 2010

