
Question for non dualists only

Postby dogsbestfriend on January 21st, 2016, 10:08 am 

I am trying to figure out how non-dualists think, so please answer only if you are a non-dualist. Please include what you believe yourself to be, and feel free to expand on your answer as much as possible. Thank you so much.

Do consciousness and intelligence force one to care about one's future? Suppose there is a being/robot that is as conscious and intelligent as the average human and also feels great pain when physically beaten. Would you have to assume that he would avoid situations that would result in his suffering great pain if the pain came after a time delay of, say, five minutes?
dogsbestfriend
 


Re: Question for non dualists only

Postby Serpent on January 21st, 2016, 11:18 am 

I believe myself to be a reasonably well-grounded independent; I have no religious or philosophical affiliations.

Do consciousness and intelligence force one to care about one's future?

Of course. Consciousness is awareness of self and of the division between self and environment. The purpose of intelligence is to enhance survival capability by making plans.

If the robot were as intelligent as a goldfish, it would avoid pain-causing situations after a time delay of 30 seconds. If he were as intelligent as a dog, he'd avoid painful situations after a time delay of six months. If he were as intelligent as a man, but free of the major human mass hysterias, he would avoid them forever.
Serpent
Resident Member
 
Posts: 4219
Joined: 24 Dec 2011


Re: Question for non dualists only

Postby Ormond on January 21st, 2016, 11:36 am 

Consciousness is awareness of self and of the division between self and environment.


A non-dualist might say that consciousness is the illusion of a division between self and environment.
Ormond
 


Re: Question for non dualists only

Postby dogsbestfriend on January 21st, 2016, 12:55 pm 

I believe myself to be a reasonably well-grounded independent; I have no religious or philosophical affiliations.


So as a non-dualist you believe that consciousness is nothing more than the processes of the physical brain?
dogsbestfriend
 


Re: Question for non dualists only

Postby TheVat on January 21st, 2016, 1:06 pm 

The term in the field of philosophy that probably fits better is "physicalist," which is, yes, a monist view of reality. But there are several varieties of physicalism, or scientific materialism as it's also called, so it would be wrong to speak of "how non-dualists think," as if that were one way of thinking about the relationship between mind and brain. It should also be noted that there is another form of monism that is not physicalist but rather Idealist, a family of views that ranges from Berkeley's Idealism to various forms of panpsychism to mathematical Idealism (in which all reality is composed fundamentally of information). A trip to the online Stanford Encyclopedia of Philosophy may be helpful in getting a better sense of all the alternatives to dualism.

You also might find some good threads on this topic in our Metaphysics/Epistemology section, where this has been discussed extensively (exhaustively, some might say....). Putting "consciousness" into our search engine may be helpful. This is a vast topic, and it will demand some discipline and study time from you, if you hope to post coherent questions.


E.g.

viewtopic.php?f=51&t=27932
TheVat
Forum Administrator
 
Posts: 7701
Joined: 21 Jan 2014
Location: Black Hills


Re: Question for non dualists only

Postby dogsbestfriend on January 21st, 2016, 1:50 pm 

I think my question is quite coherent. As a dualist, I want to hear how different people with different views would answer my question. As you can see, I also ask what they believe.
dogsbestfriend
 


Re: Question for non dualists only

Postby TheVat on January 21st, 2016, 2:00 pm 

The custom here is to first post YOUR position, fully explaining and clarifying its basis and citing whatever sources you have used, and then let others respond to that. Just casting an extremely vague net isn't as useful here, especially given that "non-dualism" could cover a wide range of philosophies held by scientists, philosophers, and followers of monistic religions, as I already explained. Write something so that we can understand exactly what you mean by identifying as a dualist, and what your agenda is here.

(one reason many regulars here are a bit leery of broad "survey" postings is that they often turn out to be students using us to do their homework, or sometimes people trolling in order to zap us repeatedly with their religious dogmas. I'm NOT saying you fall into those categories; this is just to emphasize the importance of presenting a position that it's clear you have spent some time developing....)
TheVat
Forum Administrator
 
Posts: 7701
Joined: 21 Jan 2014
Location: Black Hills


Re: Question for non dualists only

Postby Natural ChemE on January 21st, 2016, 3:25 pm 

dogsbestfriend » January 21st, 2016, 9:08 am wrote:Do consciousness and intelligence force one to care about one's future? Suppose there is a being/robot that is as conscious and intelligent as the average human and also feels great pain when physically beaten. Would you have to assume that he would avoid situations that would result in his suffering great pain if the pain came after a time delay of, say, five minutes?

No, sentient beings aren't required to care about stuff like avoiding physical pain. In fact, many humans seek it out.

That said, we're likely to design our AIs to fear physical pain anyway, for two reasons:
  1. We want our robots to try to avoid injury.
  2. Fearing pain would motivate the intelligence to learn about the world around it.
Natural ChemE
Forum Moderator
 
Posts: 2731
Joined: 28 Dec 2009


Re: Question for non dualists only

Postby dogsbestfriend on January 21st, 2016, 3:40 pm 

Thanks Natural ChemE
dogsbestfriend
 


Re: Question for non dualists only

Postby Serpent on January 21st, 2016, 3:55 pm 

dogsbestfriend » January 21st, 2016, 11:55 am wrote:
I believe myself to be a reasonably well-grounded independent; I have no religious or philosophical affiliations.


So as a non-dualist you believe that consciousness is nothing more than the processes of the physical brain?


Obviously - in the case of entities that possess this specialized ganglion. I define consciousness more broadly, though, and would rather call it a process of biological entities.
Serpent
Resident Member
 
Posts: 4219
Joined: 24 Dec 2011


Re: Question for non dualists only

Postby dogsbestfriend on January 21st, 2016, 4:18 pm 

Thank you Serpent.
dogsbestfriend
 


Re: Question for non dualists only

Postby TheVat on January 21st, 2016, 4:51 pm 

What sort of dualism do you believe in, DBF?
TheVat
Forum Administrator
 
Posts: 7701
Joined: 21 Jan 2014
Location: Black Hills


Re: Question for non dualists only

Postby Serpent on January 21st, 2016, 5:54 pm 

dogsbestfriend » January 21st, 2016, 12:50 pm wrote:I think my question is quite coherent. As a dualist, I want to hear how different people with different views would answer my question. As you can see, I also ask what they believe.

Actually, you could phrase that last part a little better. I interpreted it as: How long would an intelligence remember a negative experience well enough to avoid repeating it? But I'm not at all sure that's what you meant.
Serpent
Resident Member
 
Posts: 4219
Joined: 24 Dec 2011


Re: Question for non dualists only

Postby dogsbestfriend on January 22nd, 2016, 3:42 am 

My question starts with 'Do consciousness and intelligence force one to care about one's future?'
Actually, I suspect that Natural ChemE didn't understand my question either, because if someone likes physical pain, it could be because it gives them some kind of pleasure.

I am an open-minded dualist.
dogsbestfriend
 


Re: Question for non dualists only

Postby Serpent on January 22nd, 2016, 10:42 am 

We got the first half. To be persnickety, the answer is really: No, consciousness doesn't "force" some other part of the creature into caring; it simply cares.


This is the ambiguous bit:
Would you have to assume that he would avoid situations that would result in his suffering great pain if the pain came after a time delay of, say, five minutes?
Delay between what and which?
Serpent
Resident Member
 
Posts: 4219
Joined: 24 Dec 2011


Re: Question for non dualists only

Postby dogsbestfriend on January 22nd, 2016, 12:44 pm 

Delay between what and which?


between the act of avoiding a situation that would result in his suffering great pain, and the pain itself.

For example, would he go into a burning building knowing he will get burned before he has a chance to leave?

A person wouldn't do it, but I am trying to figure out whether that is a rational or an emotional decision.
dogsbestfriend
 


Re: Question for non dualists only

Postby TheVat on January 22nd, 2016, 12:59 pm 

Human motivation is more complex. A person might well face getting burned in a building to rescue someone they care about. When you have a being capable of developing moral principles, especially principles that concern the welfare of a group to which they belong, it is quite possible for them to endure aversive situations to satisfy their sense of right action. A person might also do so for more selfish goals, e.g. get burned rescuing an expensive work of art that they own. This seems to be part of the behavior of an intelligent being that can conceive of past and future. But how does this really bear on the metaphysical question of dualism v. monism? Any thoughts much appreciated.

-- Paul
TheVat
Forum Administrator
 
Posts: 7701
Joined: 21 Jan 2014
Location: Black Hills


Re: Question for non dualists only

Postby mtbturtle on January 22nd, 2016, 1:13 pm 

dogsbestfriend » Fri Jan 22, 2016 11:44 am wrote:
Delay between what and which?


between the act of avoiding a situation that would result in his suffering great pain, and the pain itself.

For example, would he go into a burning building knowing he will get burned before he has a chance to leave?

A person wouldn't do it, but I am trying to figure out whether that is a rational or an emotional decision.


Could be both.
mtbturtle
Resident Member
 
Posts: 9554
Joined: 16 Dec 2005


Re: Question for non dualists only

Postby Serpent on January 22nd, 2016, 8:03 pm 

dogsbestfriend » January 22nd, 2016, 11:44 am wrote:
Delay between what and which?


between the act of avoiding a situation that would result in his suffering great pain, and the pain itself.

For example, would he go into a burning building knowing he will get burned before he has a chance to leave?

A person wouldn't do it, but I am trying to figure out whether that is a rational or an emotional decision.


What has the length of time got to do with motivation? If you know that, in a given circumstance, you'll be hurt, it doesn't matter whether it will happen in three minutes, three hours, three days or three weeks from now: you'll make plans to avoid that situation. Like planning never again to put my finger in a live light socket, so I flip the breaker before making electrical repairs. That only requires a few minutes' forethought. I also plan never to get caught in the crossfire of a gang war. That requires considerably longer-range planning: what profession I study, where I live, who my friends are, which route I take to work.

No human (barring mental illness) goes into a burning building, knowing they won't get out. They take the risk of being trapped or overcome by smoke, if the person they hope to rescue is important enough. They might well go in, knowing they'll be injured, as long as there is a chance of getting their spouse or child out alive.

Intelligence and emotional connectedness are somewhat related: more complex species form more binding social ties. But you have to distinguish between risk and certainty, and realize that intelligent creatures can also calculate trade-off values. Will losing a kidney be more painful than losing a brother?

We're not dual. We are layered, faceted and nuanced.
Serpent
Resident Member
 
Posts: 4219
Joined: 24 Dec 2011


Re: Question for non dualists only

Postby dogsbestfriend on January 24th, 2016, 2:03 am 

From my understanding of artificial intelligence, I don't see how one goes from understanding everything to caring about one's survival, unless one were preprogrammed to do so. As David Hume put it, you can't get an ought from an is.
dogsbestfriend
 


Re: Question for non dualists only

Postby Natural ChemE on January 24th, 2016, 2:36 am 

dogsbestfriend » January 24th, 2016, 1:03 am wrote:From my understanding of artificial intelligence, I don't see how one goes from understanding everything to caring about one's survival, unless one were preprogrammed to do so. As David Hume put it, you can't get an ought from an is.

Human-like artificial intelligences are largely modeled after humans (duh, right?). Humans don't have large databases of pre-programmed knowledge, and humans don't seek knowledge for its own sake.

The theory is that we basically make robots with lots of sensors to give them physical senses like our five senses (vision, hearing, taste, touch, and smell). This isn't too hard beyond the need for hardware; for example, even cheap webcams can be used for their eyes and pressure sensors can be used for touch. There are lots of other options too; the main point here is that we can build a body with all the senses humans have, plus more, and all better (since we can easily give the robot cameras that are far more sensitive than human eyes).

Next, we program the robot to be able to construct, validate, and improve models. This has already been demonstrated; the statistical side of it is what Wikipedia covers under regression analysis. It's actually easier than you might think at first; you basically just have the software try to:
  1. identify stuff it sees through its sensors;
  2. test correlations to see what works;
  3. accept correlations that work while forgetting correlations that fail;
  4. extend correlations to build increasingly comprehensive models of the world around it.
This requires lots and lots of computational power, but other than that it's actually quite simple to do (see the sketch below).
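
To make steps 1 through 4 a bit more concrete, here's a minimal sketch in Python. It's purely illustrative (the sensor readings and candidate models are made up): the software proposes a few simple candidate correlations between two sensor readings, scores each against its observations, keeps the ones that predict well, and forgets the rest.

Code: Select all
import random

# Toy observations: each is a dict of sensor readings.
# Made-up data; a real robot would stream these from its sensors.
observations = [{"x": x, "y": 2 * x + random.gauss(0, 0.1)} for x in range(50)]

def candidate_models():
    """Propose simple candidate correlations of the form y = a*x + b."""
    for a in (0.5, 1.0, 2.0, 3.0):
        for b in (-1.0, 0.0, 1.0):
            yield (a, b)

def error(model, data):
    """Mean squared prediction error of a candidate model on the data."""
    a, b = model
    return sum((obs["y"] - (a * obs["x"] + b)) ** 2 for obs in data) / len(data)

# Keep correlations that work; forget correlations that fail.
kept = [m for m in candidate_models() if error(m, observations) < 1.0]
best = min(candidate_models(), key=lambda m: error(m, observations))

print("models kept:", kept)
print("best model: y = %.1f*x + %.1f" % best)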

Finally, you've got to have it want something. So you tell it to avoid pain, keep its batteries charged, etc. This is called an optimization problem, and we solve them all the time. You basically tell the robot: "using what you believe, as regressed from your observations, choose your next actions".

For example, say you tell the robot to avoid physical pain and death. The robot can then predict what it believes will happen in the future based on the models it has built up over time; if it sees that the room it's in has been set on fire, it'll predict that the fire will spread (since that's what fires do) and that the fire will hurt or kill it. Then it'll think up various courses of action, such as leaving the room, and select the one that seems most optimal (through optimization).
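
And here's an equally crude sketch of that "predict, then optimize" step for the burning-room example. Again, this is just my own toy illustration (the world model is hand-coded here, where a real agent would have regressed it from observations): the robot predicts the outcome of each candidate action and picks the one whose predicted outcome costs it the least, with "cost" standing in for the pain and dead-battery states we told it to avoid.

Code: Select all
# Candidate actions the robot could take in a burning room (toy example).
ACTIONS = ["stay", "leave_room", "search_for_valuables"]

def predict(state, action):
    """Toy world model: predict the next state given an action.
    Hand-coded here; a real agent would have learned this."""
    nxt = dict(state)
    if action == "stay":
        nxt["burned"] = state["room_on_fire"]
    elif action == "leave_room":
        nxt["burned"] = False
        nxt["battery"] = state["battery"] - 5
    elif action == "search_for_valuables":
        nxt["burned"] = state["room_on_fire"]
        nxt["battery"] = state["battery"] - 20
    return nxt

def cost(state):
    """What we told the robot to care about: avoid pain, keep charged."""
    return (100 if state["burned"] else 0) + max(0, 50 - state["battery"])

def choose_action(state):
    """Pick the action whose predicted outcome has the lowest cost."""
    return min(ACTIONS, key=lambda a: cost(predict(state, a)))

state = {"room_on_fire": True, "burned": False, "battery": 80}
print(choose_action(state))  # prints "leave_room"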

Like humans, the robot won't necessarily consider all possible choices, so it may miss something. Like humans, the robot is limited to what it knows and can reason about, so it can make mistakes. And like humans, robots will have a fairly fuzzy understanding of the world around them, because they're always trying to make sense of the observations from their sensors.

References if you'd like to learn more:
  1. Intelligent agent, Wikipedia.
  2. Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig (the most popular textbook on this topic).
  3. The free e-Course on edX, based on Berkeley's CS188: Introduction to Artificial Intelligence course.
Natural ChemE
Forum Moderator
 
Posts: 2731
Joined: 28 Dec 2009
Serpent liked this post


Re: Question for non dualists only

Postby Dave_C on January 24th, 2016, 6:26 pm 

There's an interesting concept with some applicability to this discussion called the "symbol grounding problem" (SGP). Harnad (1990) describes the problem by asking, "How can the semantic interpretation of a formal symbol system be made intrinsic to the system rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?" It sounds like the question posed in the OP is about what motivates a phenomenally conscious agent and what might similarly motivate an artificial agent (AA).

Per Harnad:
A symbol system is:
1. a set of arbitrary "physical tokens" (scratches on paper, holes on a tape, events in a digital computer, etc.) that are
2. manipulated on the basis of "explicit rules" that are
3. likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based
4. purely on the shape of the symbol tokens (not their "meaning"), i.e., it is purely syntactic, and consists of
5. "rulefully combining" and recombining symbol tokens. There are
6. primitive atomic symbol tokens and
7. composite symbol-token strings. The entire system and all its parts -- the atomic tokens, the composite tokens, the syntactic manipulations both actual and possible and the rules -- are all
8. "semantically interpretable": the syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs).


The point, I think, is: how could any kind of symbol system create meaning and not just more symbols? And why should meaning emerge at all from the manipulation of those symbols, given that those symbol manipulations will proceed from one physical state to the next regardless of what meaning might emerge from those interactions?*
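
To make the worry concrete, here is a toy symbol system in Harnad's sense, sketched in Python (my own illustration, not anything from the papers). The rules rewrite tokens purely by their shape, and the program runs through exactly the same physical state transitions whether or not we, the observers, assign the tokens any meaning.

Code: Select all
# A toy symbol system: arbitrary tokens, rewritten purely by shape.
# Any "meaning" (say, A = "fire", B = "danger", C = "flee") lives only
# in our heads; the rewriting below never consults it.
rules = {
    ("A", "B"): ("B",),      # where A is followed by B, drop the A
    ("B", "C"): ("C", "C"),  # where B is followed by C, double the C
}

def step(tokens):
    """Apply the first matching rule, left to right, by token shape only."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in rules:
            return tokens[:i] + list(rules[pair]) + tokens[i + 2:]
    return tokens  # no rule applies; the system halts

tokens = ["A", "B", "C"]
for _ in range(4):
    print(tokens)
    tokens = step(tokens)

Whatever interpretation we hang on A, B and C, the token manipulations above are fixed by shape alone, which is the sense in which the system is "parasitic on the meanings in our heads."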

Taddeo (2005) points out that all of the approaches to solving the SGP seek to ground symbols through sensorimotor capacities, but then states that none of these can offer a valid solution. To Natural ChemE's point then, and in response to dogsbestfriend, an AA doesn't need to be "preprogrammed" if it can develop representations of an environment and make reliable predictions through this interaction between sensors and the artificially intelligent (AI) computation. What Natural ChemE's post discusses is this method of having the AA interact with the environment such that the AA can 'learn', in some sense, how to act as if those actions were meaningful.

However, Taddeo claims that this method of interaction still can't provide meaning, because the solution to the SGP cannot rely in any way on the programmer. One has to program the agent to avoid pain or keep its batteries charged; the program has to be weighted in some way by the programmer, so "pre-installed" dispositions are not allowed for 'grounding' meaning. It should also be noted that what we perceive as meaning in an AA doesn't require additional, unobservable (i.e., subjective) phenomena to supervene on the AA. What the AA does and how it does it can be understood strictly from observing the physical changes in state of all the AA's parts.

So to dogsbestfriend's point about whether or not an AA can "[care] about one's survival - unless [the AA] were preprogrammed to do so", this question seems to ask how any symbol manipulation system or computation can generate some sort of phenomenal experience such that the experience can have meaning, as opposed to the AA simply acting as if it were caring.

*There's a related problem worth considering that might help in understanding the SGP. If some 'meaning' (i.e., phenomenal experience) arises from the interaction of the symbol system, then how is it that THAT particular meaning/experience arises and not some other? The symbol system is not affected by what can't be objectively observed, so any subjective experience could arise and the symbol system would be unaffected. This presumes, of course, that the experience we can't observe is epiphenomenal. So if an experience of meaning is not physically observable, it doesn't matter what experience it is; it could be anything, yet it would be unable to influence any physical state. So how can an AA even have meaning, since meaning can't influence the AA?

Harnad, S. (1990). The Symbol Grounding Problem. Physica D 42.
http://www.cs.ox.ac.uk/activities/ieg/e ... roblem.pdf

Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence.
http://www.philosophyofinformation.net/ ... pcrfyr.pdf
Dave_C
Member
 
Posts: 364
Joined: 08 Jun 2014
Location: Allentown


