AI thread

Postby hyksos on February 24th, 2016, 11:38 pm 

I considered the various disciplines of science, and then rated them in terms of their relative success. Criteria for success include correspondence with measurement, the sheer number of derived technologies, predictive capacity, and coherence through history.

1. Computer Science

No scientific discipline has repeatedly produced disruptive technology with such rapid speed. Everything computer scientists say comes true 5 years later with a device on the store shelf. Trying to identify a "Golden Age" of computer science is nigh impossible, because the golden age seems to always be right now. If you doubt my selection of Computer Science as king, remember you are reading this on a computer.

(The rest of my list: 2. Genetics. 3. Electrical engineering. 4. Mathematics. 5. Chemistry. 6. Classical physics. 7. Quantum mechanics / field theory. 8. Biology and evolution. 9. Human medicine and psychiatry. 10. Cosmology. 11. Artificial intelligence.)

The irony: computer science is the most successful science, while Artificial Intelligence (arguably a sub-discipline of CS) is the least successful. Artificial Intelligence has been only marginally more successful than alchemy. Even its practitioners cannot agree on which aspect is going to reform and redeem AI research. Some say that raw software still holds secrets, and those people will remind you that jets do not fly by flapping their wings. Others are not convinced, and feel the only avenue to success is to recreate a mammalian brain in fine detail.

Several critics suggest that AI research has made no progress at all in the 60 years since the Dartmouth Conference, because the discipline has "turned away from its roots". Less drastic is the widely agreed history that AI experienced a "winter" in the 1980s.

I used the search function on sciencechatforum and found no threads dedicated to this topic. Now we have a thread. Let's see where this goes.

Your thoughts?
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014


Re: Artificial Intelligence thread

Postby Natural ChemE on February 25th, 2016, 12:28 am 

hyksos,

You're talking about the AI winter: the period in which AI research is perceived to have stagnated.

The AI winter was caused by classical computers being too limited to implement practical general intelligence. It's generally understood that AI will take off at some future point, sometimes called the technological singularity, when our computers become sufficiently powerful.

Our computers have been advancing exponentially, e.g., as described by Moore's law. In satire:
    [image: satirical SMBC comic on exponential growth]
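
A rough back-of-the-envelope sketch of the doubling trend (assuming the classic two-year doubling period and the ~2,300-transistor Intel 4004 as a 1971 baseline; this is an idealization, not a data fit):

    # Idealized Moore's-law projection: transistor counts double every ~2 years.
    # Baseline and doubling period are simplifying assumptions, not measured data.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, f"{transistors(year):,.0f}")
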
Historically, organizations like the Machine Intelligence Research Institute have worked toward the technological singularity, though now companies like Google, Facebook, etc. seem to have major initiatives.

Sometimes the technological singularity has been regarded as a pseudo-scientific topic on the crankier side of futurology. Some folks lose their objectivity in hoping for it to happen within their lifetimes so that they can enjoy its potential benefits, e.g. clinical immortality. Also from SMBC:
    [image: SMBC comic]
However, the general topic of AI isn't just about strong AI. Many non-general AIs have already been commercialized to great success. For example, Google's search and data-mining algorithms can be regarded as AI. Also, self-driving cars are basically just AI in car bodies.
Natural ChemE
Forum Moderator
 
Posts: 2754
Joined: 28 Dec 2009


Re: Artificial Intelligence thread

Postby SciameriKen on February 25th, 2016, 12:31 am 

The topic being that AI has made no progress in 60 years? Tell that to Amazon's product-sorting software, to the blitz stock-trading programs, to ambulance-coordination software, etc. Somewhere along the line, somebody figured out that we don't need robots to act like people; we just need them to do people things.
User avatar
SciameriKen
Forum Moderator
 
Posts: 1332
Joined: 30 Aug 2005
Location: Buffalo, NY


Re: Artificial Intelligence thread

Postby Inchworm on February 25th, 2016, 12:36 pm 

What makes us different from robots might never be obtained with electronics, but electronics will probably be added directly to our minds one day. How about hearing phone calls directly in our minds, or hearing people think because they forgot to hang up? :)
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada
SciameriKen liked this post


Re: Artificial Intelligence thread

Postby hyksos on February 29th, 2016, 3:16 am 

This post will require some background in academic AI from the reader. Brush up on these topics:

Logical Agents
see chapter 7 of the 2010 Edition of Artificial Intelligence: A Modern Approach
Stuart Jonathan Russell, Peter Norvig
https://books.google.com/books?id=8jZBksh-bUMC&hl=en

Reinforcement Learning
http://artint.info/html/ArtInt_262.html

Genetic Algorithms
http://geneticalgorithms.ai-depot.com/Tutorial/Overview.html

Deep Learning
https://en.wikipedia.org/wiki/Deep_learning

In brief: I will argue here that Logical Agents have not been made obsolete by the newer methods listed above. To defend this, I will give an embarrassingly simple example in which a logical agent discovers something obvious that the other methods would be perplexed by.

RL, GAs, and Deep Learning all approach problems and decision-making through enormous amounts of trial and error. Such approaches have no intrinsic method for discovering whether something is impossible. Since they have no intrinsic method, they (functionally speaking) have no method at all: the engineer or programmer must add one manually to fit each particular situation.

AI research began roughly in the late 1950s. Within that history, Logical Agents were introduced much earlier than these newer methods: they were on the books at least ten years before the first genetic algorithm, and were fleshed out decades before the advent of Deep Learning. This could easily create the impression that first-order logic is either useless or has been totally superseded by the fancy modern methods. I hope to show this is not true.

Take the example of a learning AI agent meant to control a Mario character in a 2D world, as below. The player has experience controlling the Mario character, including his maximum jump height, which has been regular and invariant over all previous instances of jumping.

[image: screenshot of a 2D Mario-style room with a mushroom on a high platform]

A human player, upon entering this new room, knows that the green mushroom is unobtainable by merely jumping up there; more importantly, that jumping directly from the floor to the platform is impossible. Since human players also have cultural knowledge of such games, they may further deduce that the mushroom is obtainable, but only through some roundabout method of revealing hidden platforms, or by falling onto the platform from above the ceiling. (These cultural nuances have little bearing on this argument, however.)

How does a human player "know" that jumping onto that platform is impossible? The answer is that humans are capable of logic. In this instance, without even trying, the player takes an abstract rule and applies it to a specific instance. This inference is called deduction in the textbooks (a toy code sketch follows the list).

  • Mario cannot jump onto solid things that are higher than his maximum jump. (abstraction)
  • That platform is a solid thing higher than his maximum jump. (instantiation)
  • Mario cannot jump onto that platform. (inference)
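
As a toy sketch (not any real game's or engine's API; the names and numbers are invented for illustration), the three-step deduction might look like this in Python:

    # Toy sketch of the deduction above. MAX_JUMP and the platform height are
    # hypothetical numbers; the point is that zero trials are needed.
    MAX_JUMP = 4  # abstraction: Mario's known maximum jump height

    def reachable_by_jumping(platform_height, floor_height=0):
        # Abstract rule: solid things higher than the max jump cannot be reached.
        return platform_height - floor_height <= MAX_JUMP

    platform_height = 7  # instantiation: this room's platform
    if not reachable_by_jumping(platform_height):
        print("Impossible by jumping -- spend time and effort elsewhere.")  # inference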

A human (or indeed a Logical Agent) need not spend half a million trials inside this room, hopping and hopping, until "learning" that this particular mushroom in this particular room is unobtainable by hopping from the floor. A logical agent would infer this impossibility immediately and effortlessly, and go on to spend its time and effort elsewhere.

A fancy modern AI method would spend millions of trials in this room until "learning" how it works through those trials -- because GAs, Reinforcement Learning agents, and Deep Learning algorithms cannot make first-order inferences. In essence, they cannot apply abstract rules to particular instances.
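
By contrast, here is a hedged sketch of what a pure trial-and-error learner does with the same room: its value estimate for the jump only decays toward zero, and the give-up threshold is an arbitrary engineering choice (all numbers below are illustrative):

    # A bare trial-and-error learner never proves impossibility; its estimate
    # of the jump's value just decays. When to quit is an external threshold.
    value, lr = 1.0, 0.1                 # optimistic initial estimate, learning rate
    for trial in range(1, 1_000_000):
        reward = 0.0                     # jumping at the platform always fails
        value += lr * (reward - value)   # standard incremental value update
        if value < 0.001:                # give-up threshold: a pure design decision
            print(f"Quit after {trial} trials; value = {value:.4f}, impossibility never proven")
            break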

The naysayer will reply: "Sure, but you could just as easily program the agent to check Mario's maximum jump height against platforms and then blah blah blah..." Yes, that complaint is sound. I agree that you could specifically engineer your agent for this particular aspect of the Mario world. However, that completely misses the point. Of course you can always tweak your agent to match the peculiarities of a specific instance; you are already a human with intelligence. The point of AI research is to get the machine to make this inference without your help.

This realization forces us to recognize, in the abstract, a key weakness of the modern trial-and-error and "big data" approaches. These modern agents cannot deduce rather obvious things, because academic AI researchers have an aversion to logical inference and avoid it like poison.

The point stands. Logical Agents still get several chapters devoted to them in modern university texts. They are not "obsolete" and still have relevance to modern research. The changes to logic in modern approaches are to add fuzziness and to characterize the uncertainty of the environment as it is observed.

In the 1980s, a project was underway to type all the cultural rules and factoids of the entire world into a computer by hand. The hope was that, by reasoning from those rules, the computer could engage in common-sense reasoning. Armed with common-sense reasoning, the computer could answer questions like:

  • If Abraham Lincoln is in Maryland, is his foot in Maryland?
  • Which of these is bigger -- a matchbox or the moon?

The project was called Cyc (OpenCyc is its open release). It was a miserable failure. Common-sense reasoning like this remains an unsolved problem in 2016. It is unlikely that a logical inference engine alone is going to be successful; I would certainly never argue such a position, on this forum or elsewhere. The big-money AI research facilities are occupied with software apps, or with robotics and vision. Those problems have also turned out to be far more difficult than first assumed.
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014


Re: Artificial Intelligence thread

Postby wolfhnd on February 29th, 2016, 8:34 am 

This is an important topic, but I have to give it a bit more thought before commenting. I don't think it would be the subject of so many science fiction works if it didn't speak to the human condition in a way that almost everyone can relate to.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Braininvat on February 29th, 2016, 1:12 pm 

It would never occur to me that a true AI would not have both hardwired logic modules and purely heuristic motivational modules that keep trying different approaches (trial and error, TaE), plus the (IMO) very important "embodiment", where they exist as bodies coping with an external physical world. Just like babies, an AI shouldn't need a long course of TaE to know that falling off an edge is to be avoided, or that making a loud noise brings help and/or sustenance. Just as GR didn't get rid of Newtonian physics for handling a lot of engineering problems, fuzzy logic doesn't get rid of solid logical inference from pre-established codes, especially where survival is a factor: you want an embodied AI that looks up and jumps back when a loud noise bears down on it (a truck, perhaps), that calls for help, that doesn't walk off cliffs, that takes cover someplace other than a mountaintop during an electrical storm, etc.
User avatar
Braininvat
Forum Administrator
 
Posts: 6287
Joined: 21 Jan 2014
Location: Black Hills


Re: Artificial Intelligence thread

Postby wolfhnd on March 1st, 2016, 4:38 am 

The above posts cover the topic well, but I think we should consider the process that has produced the most "intelligent designs" on our planet. By that I mean evolution, of course.

The human mind is by design resistant to the idea of bottom-up design. The brain, as can be seen in the discussion on consciousness, projects its internal logic onto the world. The chemical nature of our brains means that they are far too slow to consider multiple combinations of solutions in real time. Instead we reduce the thing under consideration to the minimal number of choices necessary to pick the closest analogical solution. This process employs both innate logic and experience. For it to work, the brain must impose tight control on reality, or it will drown in data very quickly. Consider the following to see how counterintuitive the "intelligent design" of evolution is, and how it applies to computing -- or, as it is generally stated, competence without comprehension.

In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin's meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all of the achievements of creative skill.


MacKenzie: an early critic of Darwin

In order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is.


Turing

The other two relevant aspects in our analogy that defy natural logic are error and time. Like evolution, computer algorithms need random mutations introduced to solve problems. As for time, evolution takes place over incomprehensible lengths of time, while computers, relative to human brains, are incomprehensibly fast.

I think it is important to consider that we may not recognize AI when or where it exists, because of the preceding limitations of our natural logic. This is the theme of many science fiction plots, in which the humans do not realize until it is too late that the machine has become conscious. We can see this pattern emerging especially in gene-sequencing design, where the complexity of the process means the human designers are not exactly sure how the machines doing the work arrive at their solutions.

It took millions of years for a machine that was designed to use top-down design to become "conscious". It is only reasonable to assume that a machine that uses bottom-up design will take a while to become "conscious". If it follows the "natural" established process, it will most likely be by "accident", as science fiction writers suggest. I wouldn't be quick to abandon the brute-force theory of AI, where speed and error are the key concepts.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)
Inchworm liked this post


Re: Artificial Intelligence thread

Postby vivian maxine on March 1st, 2016, 7:29 am 

hyksos » February 24th, 2016, 10:38 pm wrote:I considered the various disciplines of science, and then rated them in terms of their relative success. [...] Your thoughts?


Hyksos, I am back at the beginning and silently commenting on your comments. I am puzzled about a few things - or many things. All this stems from some chatting over lunch with a group of frustrated friends, all of whom voted to turn off their computers and go out to meet the world.

Computer science is successful? Yes, I suppose it is, but how successful is something that half the world cannot, despite all effort, understand and use intuitively? Why does every new line of computers and their software have to be a brand-new learning experience and, for some, not even successful at that? One of the luncheon members said, "I think they deliberately keep us in the dark so they can keep smiling at their secrets." All right, she was being funny, but she was also complaining about the new, meaning "start again". Why can't computer scientists make a computer that is understandable? Or should I say "make software that is understandable"? Do they perhaps compete with each other to see who can be the most different, the most mysterious, and "the devil take the hindmost" where the ultimate user is concerned? When you buy a new car - whether the same make as always or a totally new make from another manufacturer - the basics are there and you can drive the thing. Judging from all the fussing I heard - and contributed to - I don't think computer scientists have learned how to do this yet.

Artificial Intelligence, a subset of computer science, is least successful? How do you separate artificial intelligence from computer science? Perhaps because scientists are trying to make more of AI? To make it human? That I understand. But I thought computers are AI - the basic AI. Maybe we should take a step back and fix computer science first. If half the world cannot deal with the computer's intelligence, how will they ever deal with a more advanced AI?

The critics are right. AI has turned from its roots. It's like taking a ten-year-old who is a whiz at checkers and telling him, "Now play chess". One of those ill-worded "quantum leaps". Nothing 'quantum' about it.

I hope I've not messed up your thread. It's just that our luncheon chat reminded me of it every time someone said "intelligent". But we were talking about our intelligence, not that of computers or AIs. Have we cut off AI from the basics (the humans who are supposed to deal with AI)? You can't take a few brilliant Einsteins, build a perfect AI that they understand, and tell the rest of the world to "just tag along".

Or, can you?
vivian maxine
Resident Member
 
Posts: 2837
Joined: 01 Aug 2014


Re: Artificial Intelligence thread

Postby wolfhnd on March 1st, 2016, 8:15 am 

I'm glad you brought up the social implications, vivian, because that is what I'm most interested in. I think we can return to that subject after we discuss in detail what AI means. I was also concerned, as you obviously are, that moving past the technical details might be a distraction the OP did not want.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Inchworm on March 1st, 2016, 11:01 am 

wolfhnd » March 1st, 2016, 4:38 am wrote:The above posts cover the topic well but I think we should consider the process that has produced the most "intelligent designs" on our planet. By that I mean evolution of course.
Hi wolfhnd,

Did you have a look at this discussion about ego? I'm afraid our two minds think the same about themselves: same mutation, and same feeling that it's a favorable one. It screws up the way we already consider logic and reasoning, though.

wolfhnd wrote:The other two relevant aspects in our analogy that defy natural logic are error and time. Like evolution, computer algorithms need to have random mutations introduced to solve problems. In considering time, evolution takes place over incomprehensible lengths of time while computers relative to human brains are incomprehensibly fast.
Too fast for us to be able to educate an AI made of electronic neurons. We would have to let it evolve from its own instincts, which could be programmed. Unfortunately, though it would think fast, it could not overcome inertia faster than the laws of physics permit, so its thinking speed might be useless.

wolfhnd wrote:It took millions of years for a machine that was designed to use top-down design to become "conscious". It is only reasonable to assume that a machine that uses bottom-up design will take a while to become "conscious". If it follows the "natural" established process it will most likely be by "accident" as science fiction writers suggest.
How about considering that consciousness is about the perception of those mutations we agree on? Here, I suggest that consciousness might be the result of our neurons resisting a change of frequency, and that imagination might be the result of those frequencies wandering randomly with time. Would your mind still agree with that particular mutation's relevance? :^)
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada


Re: Artificial Intelligence thread

Postby hyksos on March 2nd, 2016, 3:35 pm 

vivian maxine » March 1st, 2016, 3:29 pm wrote:Computer science is successful? Yes, I suppose it is but how successful is something that half the world cannot, despite all effort, understand and use intuitively? Why does every new line of computers and their software have to be a brand new learning experience and, for some, not even successful at that?

One of the luncheon members said "I think they deliberately keep us in the dark so they can keep smiling at their secrets." All right, she was being funny but she was also complaining about the new, meaning "start again". Why can't computer scientists make a computer that is understandable? Or, should I say "make software that is understandable"?


This is recognized by computer scientists and by the IT industry as well. The IT industry knows that writing software in 2016 still requires diligent hours of careful work by a trained professional. Writing software -- programming -- is still out of the reach of most of the world's population.

Ironically, the attempts to address this problem were themselves research projects in Artificial Intelligence. Not only were they research projects -- the military was aware of the exact problem mentioned by your luncheon member: computers are still too difficult to use. The military does not have the time or resources to train a soldier for 18-24 months on how to use one effectively. It was so frustrated with this that it gave millions of dollars in funding to academia to come in and try to solve it.

I would say two things about this. The topic is related to (1) intuitive design, aesthetics, and intuition of use, and (2) making computers programmable through natural language. The latter is what the military was funding, and doing it correctly would be an Artificial Intelligence project.

Regarding intuition of use, this is taken more seriously by Apple than by PC and Linux vendors. But the topic extends beyond computers. It turns out people are really bad at designing doors. Yes, I said "doors". People cannot even design doors that are intuitive: they will put a vertical bar on the outside of a door that opens by being pushed. People who work in the building every day never quite get used to it, and are still confused by it on occasion. This extends to cars, toilets, sinks, crosswalk signs and buttons -- any other technology people have to interact with. Ask yourself: have you ever been confused by an automatic window on a car? I definitely have. Should I push down? Pull up? Whoops -- I held the button too long and it went into some sort of "auto-raise" mode. And so on.

https://en.wikipedia.org/wiki/User-centered_design
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014
vivian maxine liked this post


Re: Artificial Intelligence thread

Postby hyksos on March 2nd, 2016, 3:55 pm 

Braininvat » February 29th, 2016, 9:12 pm wrote: Just as GR didn't get rid of Newtonian physics for handling a lot of engineering problems, so too fuzzy logic doesn't get rid of solid logical inference from pre-established codes (especially where survival is a factor - you want an embodied AI that looks up and jumps back when a loud noise bears down on it (a truck, perhaps), that calls for help, that doesn't walk off cliffs, that takes cover someplace other than a mountaintop during an electrical storm, etc.)


I had hoped to communicate that trial-and-error approaches lack the immediate, simple inferences that first-order logic agents perform naturally and easily. Consider an extreme case. How many trials, exactly, should a genetic algorithm go through before it decides that a particular action is impossible? 500? Ten thousand? 17 million? Genetic algorithms, in the abstract, have no answer to this question. It comes down to a design decision, as the sketch below illustrates.
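
To make that concrete, here is a minimal, hypothetical GA skeleton in Python (all names and numbers are invented for illustration, not any particular library's API). Note where the stopping rule lives: outside the algorithm's own logic.

    import random

    # Hypothetical GA skeleton. Nothing inside the loop can conclude "this task
    # is impossible"; the only exit is max_generations, an externally chosen number.
    def evolve(fitness, genome_len=8, pop_size=20, max_generations=500):
        pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for gen in range(max_generations):       # 500? 10,000? 17 million? A design decision.
            pop.sort(key=fitness, reverse=True)  # rank by fitness, best first
            if fitness(pop[0]) >= 1.0:           # success criterion, if it is even achievable
                return gen, pop[0]
            parents = pop[: pop_size // 2]       # keep the fitter half
            pop = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                   for _ in range(pop_size)]     # mutated offspring of random parents
        return max_generations, pop[0]           # give up -- having proven nothing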

Now consider looking at the world through Bayesian inference (please google the term if you are not familiar with it). Depicting the world through Bayesianism cannot capture inconsistency, contingency, and "inherency". The world viewed through a Bayesian lens makes every action and event possible -- it allows anything to relate to anything else, and allows any action to be a cause of any other effect. So beating drums and doing a rain dance makes it rain. How? Because when we beat the drums and did the rain dance, it rained the next day.

A purely Bayesian agent has no capacity to doubt that causal assertion, because it cannot compare the theory "rain-dancing-causes-rain" to a pre-existing theory of causation and find it to be INCONSISTENT with that earlier theory. This is an extreme example, but it illustrates the basic problem: Bayesian logic cannot capture the concept of inconsistency, because Bayesianism does not create, store, or extrapolate theories of causation.
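
As a minimal sketch (illustrative numbers, no real dataset), a bare Bayesian tally of "rain followed the dance" can only push its probability estimate up or down; there is no step at which the causal theory itself can be rejected as inconsistent:

    # Beta-Bernoulli tally for "rain dance causes rain". Coincidences raise the
    # estimate; nothing here can register that the implied causal theory
    # contradicts prior physical knowledge.
    successes, failures = 1, 1                         # uniform Beta(1,1) prior
    for rained_next_day in (True, True, False, True):  # made-up observations
        if rained_next_day:
            successes += 1
        else:
            failures += 1
    print(f"P(rain | dance) = {successes / (successes + failures):.2f}")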

As for "unthought" innate reactions -- jumping at a loud sound, or freezing in fear -- that leads to my next topic: the important difference between Empirical Knowledge and Pragmatic Knowledge. It is tangential enough that I will start a new post.
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014


Re: Artificial Intelligence thread

Postby vivian maxine on March 2nd, 2016, 4:08 pm 

Thank you hyksos. More involved than even I realized.
vivian maxine
Resident Member
 
Posts: 2837
Joined: 01 Aug 2014


Re: Artificial Intelligence thread

Postby hyksos on March 2nd, 2016, 5:15 pm 

Empirical Knowledge

Empirical knowledge captures all of the data that modern AI research has become obsessed with. Feed a "Deep Learning" algorithm billions of pictures as its "training set" and it gets busy looking them over with fancy filters, extracting all the "invariant visual features" that allow it to differentiate categories. Empirical knowledge requires no human in the loop. The machine can compare and contrast actual real-world items with no care for the use, utility, danger, or meaning of any of it. All data is statistics and patterns. There is nothing else. No need. No lack. No life and no death. Just patterns on top of endless patterns. Empirical knowledge is data that is measurable, concrete, and meaningless.

Empirical knowledge being hunted down by pattern-matching, "feature-extracting" statistical algorithms is fine. Those algorithms get lots of attention from the press. Lots of money flows around. Good for them.



Pragmatic Knowledge

To understand what pragmatic knowledge is, you should ask yourself this question:

  • Today is Wednesday. What physical data can I measure to prove that it is Wednesday?

The first answer to this question is: you can't. There is no non-cultural, physical data that can be independently measured to indicate that today is Wednesday. "Wednesday" is a cultural artifact, created by an "agreement" between human beings interacting efficiently in a society. At base, human beings are collectively pretending that today is Wednesday. But pretending and concocting things is not always a path to wrongness, despair, and failure; pretending and agreeing on fake things can lead to enormous behavioral and pragmatic success. A list of more examples of "fake" cultural beliefs illuminates the phenomenon:

  1. The worth of paper currency.
  2. Double yellow lines on roads.
  3. The use of bright colors on consumer goods packaging.
  4. Insurance law.
  5. Intellectual property rights.

A purely statistical machine viewing the world would be forced to presume that double yellow lines are just a physical feature of roads. It would assume that roads have these lines for reasons it will never ask about, and that the lines occur naturally, like mountains and clouds. A "Deep Learning" algorithm would denote yellow lines on roads as just another invariant visual feature. Deep Learning algorithms are often made to categorize a whole cornucopia of consumer goods, furniture, and the basic tools and artifacts of an office. The algorithm is "trained" on datasets, and later identifies these objects by the learned invariant features.



The problem here is that the colors that appear on the packaging of, say, a bottle of detergent have NOTHING to do with the physical properties of detergents. Colorful packaging is a pragmatic aspect of human buying patterns; it is about the selling and buying of products on supermarket shelves.

[image: colorfully packaged detergent bottles on a supermarket shelf]

A Deep Learning algorithm, or any bald statistical Bayesian approach, would be forced by its design to assume that the redness of a bottle of Tide detergent is an inherent aspect of Tide detergent, when in fact the bottle color has nothing to do with soap at all. It is "art" created by marketers and designers.

The purveyors of modern AI algorithms claim their systems are "learning" about the world around them. But in a fundamental way, those algorithms know nothing about the world at all. They see only patterns, and statistics of patterns -- they understand nothing.

The inability of AI agents to grasp common sense can be traced to the fundamental problem of how to represent Pragmatic Knowledge. Humans live in a world of upkeep, need, and economic contingency. We live in a world of possibilities and impossibilities; a world governed by lack, and by enormous amounts of communication between different groups -- not just conversations between two people on a science chat forum, but conversations between seller and buyer, between advertiser and consumer, and back from consumer to advertiser through buying behavior and "holiday shopping confidence levels". Humans live in a world governed by laws of causation and by narrative time, shaped by actions that change the environment temporally. States of affairs that are true today may not be true tomorrow.

Roads we drive on have inherent aspects (fitness for cars, hardness under weather) and culturally contingent aspects (double yellow lines signal "do not pass"). Laundry detergent has inherent aspects (the ability to clean in water) and culturally contingent aspects (the color of the bottle, its brand name, its logo). Bald statistical algorithms have no mechanism to differentiate these two fundamentally different kinds of features; a toy sketch of the distinction follows.
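
As a toy sketch of the representational distinction being drawn (the class and its values are invented for illustration, not a proposal for a real system):

    from dataclasses import dataclass, field

    # Inherent physical properties and culturally contingent ones kept in
    # separate slots -- precisely the distinction a bare feature vector collapses.
    @dataclass
    class WorldObject:
        name: str
        inherent: dict = field(default_factory=dict)      # holds regardless of culture
        conventional: dict = field(default_factory=dict)  # holds by social agreement

    detergent = WorldObject(
        name="bottle of Tide",
        inherent={"cleans_in_water": True},
        conventional={"bottle_color": "red", "brand": "Tide"},
    )
    road = WorldObject(
        name="two-lane road",
        inherent={"hard_surface": True, "drivable": True},
        conventional={"double_yellow_lines": "do not pass"},
    )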

Contemporary researchers are training their machines to believe that cultural aspects of our society are natural, inherent aspects of these objects -- which is a lie, and will only set the conditions for the next round of failure in AI research.

It should be emphasized again where common-sense knowledge fails in computers. The computers of 2016, as powerful as they are, cannot summarize a short Dick-and-Jane story. Computers remain perplexed by questions like "Which is larger -- a matchbox or the moon?" and "If Abraham Lincoln is in Maryland, is his foot in Maryland?"

(On a more personal note: it seems to me that a bridge could be built between the Pragmatic-Knowledge world of common sense and the Empirical-Knowledge world of bald physical patterns. The bridge I would tentatively suggest is episodic memory: thinking of the world in terms of narratives in time, where those narratives become "objects" of memory. "The boy kicked the ball." "I went for a walk." "We went on vacation." The meaning of each sentence is a narrative whose properties are changes to an environmental state of affairs. I will avoid further digression until I get some feedback on the preceding material.)
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014
Braininvat liked this post


Re: Artificial Intelligence thread

Postby Braininvat on March 2nd, 2016, 8:07 pm 

Nice sidebar on Bayesianism and theories of causality in an AI. I'm familiar with Bayesianism in quantum theory, so I get the relevance to AI. Statistically driven expectations, in the examples you gave, don't build any causal understanding or knowledge of social conventions. Expectations without real know-how and internal world-knowledge can often turn into superstition. A lot of intelligence is developing little narratives that are implied by a question, e.g. if Abe is in Maryland, then his entire body (a person is usually an entire body) is likely to be in Maryland, unless (imagines scenario) some part was amputated when he was in another state, or Abe happens to have a deformity, or [etc.]. Such understanding allows the asking of relevant questions, so that the original question may be confidently answered. A machine that learns must have a common-sense method of knowing what bare facts mean and how to acquire more meaning. Abe is a person, which means he probably has a body, and that body probably has feet.
User avatar
Braininvat
Forum Administrator
 
Posts: 6287
Joined: 21 Jan 2014
Location: Black Hills


Re: Artificial Intelligence thread

Postby hyksos on March 3rd, 2016, 12:03 am 

Braininvat wrote:Statistically-driven expectations, in the examples you gave, don't build any causal understanding or knowledge of social conventions.

This is a dangerous topic to discuss in mixed company. There is now an entire culture within Artificial Intelligence research that is religiously committed to pushing the next AI product within the tight limits of business cycles. People committed in this way become extremely irate when you tell them to address these issues -- issues which are not new, and which form the basis of the claim that AI research has abandoned its original goals in order to make products to sell next fiscal quarter. Peter Norvig (co-author of the textbook mentioned earlier) has said on camera, "I don't want to build a human, I already have two." He was referring to his kids.

So, cultural conventions: the value of money, product logos, and the quote-unquote "fact" that today is Wednesday. If we adopt a working definition of "fact" as that which can be physically measured from the environment, all of those things are falsehoods and social make-believe.

Are they, therefore, useless? No.

They are far from useless -- they are so useful they have been adapted into our language. "I went for a walk." What in the world is a "walk"? It is not an object. It is not a verb. It is not a physically measurable action. Instead, it is a type of thing in the world called an episodic narrative -- in short, an episode. Does the physical world we occupy come ready-packaged in narratives, all sitting there ready to be gleaned by a Deep Learning algorithm that scans for the "invariant visual features" of these episodes? It does not. Worse: even if you tried, you would never find them. The clock has no boundaries. There are no narratives in the "really really real" world made up of physical matter. The world we occupy is, brutally speaking, a continuous unbroken stream of physical events happening one after another.

  • Jack and Jill went up the hill to fetch a pail of water.


"...went up the hill...". That's an episode. It's part of an over-arching narrative. In physics terms, this is a lie (albeit a useful one!)

Human beings don't break days into episodes because they are stupid, misguided, or wrongheaded in their beliefs. Humans break their lives into episodes as a convention, to organize time and events and to verbally communicate stories to one another. I am adopting the argument that human beings speak and live in episodes as a form of Pragmatic Knowledge. It is done for pragmatic reasons unrelated to any actual "physical grist" of where these episodic boundaries lie in time.

If this is true (and others agree with me; I will name them later), then the consequences are profound. No very powerful artificial intelligence could ever understand human natural language unless it had enormous knowledge of human social conventions, and an ability to draw logical inferences from human motivation. No matter how long a powerful AI looked, scanned, and learned, it would never uncover these boundaries between episodes -- because they don't physically exist to be found. Humans make them up as they go, for reasons having to do with their motivations, their lives, and their economic and social values.

People might argue further that, since these are mere cultural artifacts, episodic memory is useless to intelligence and intelligent action. This is also wrong. Pretending that one's actions are broken into narratives allows one to filter irrelevant stimuli.

Take the example of a robot that drives a car to a park, collects sticks and logs, and starts a bonfire there -- very near a busy highway adjacent to said park. Here is what happens to the relevant stimuli vis-a-vis the episode the robot is in within this larger narrative:

    Current episode    | Relevant stimuli                                       | Filtered (irrelevant) stimuli
    Drive to park      | traffic, cars, road, road signs                        | sticks, logs, tinder
    Collecting tinder  | sticks, rocks, ground, logs, current pile, forest      | traffic
    Starting fire      | bonfire configuration, matches, tinder, humidity, rain | traffic, random sticks, distant logs
    Maintaining fire   | flame, smoke, logs                                     | traffic, rocks, ground

Various stimuli change from relevant to filtered and back based on goals. Goals are decomposed into episodes in a certain order. A minimal code sketch of this filtering follows.
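
Here is that table as episode-indexed filtering (episode and stimulus names are taken from the table above; the function itself is an illustrative sketch, not a real architecture):

    # Episode-indexed attention: the same stimulus is attended or ignored
    # depending solely on which episode the agent believes it is in.
    RELEVANT = {
        "drive to park":     {"traffic", "cars", "road", "road signs"},
        "collecting tinder": {"sticks", "rocks", "ground", "logs", "current pile", "forest"},
        "starting fire":     {"bonfire configuration", "matches", "tinder", "humidity", "rain"},
        "maintaining fire":  {"flame", "smoke", "logs"},
    }

    def attend(episode, stimuli):
        """Pass through only the stimuli the current episode marks as relevant."""
        return [s for s in stimuli if s in RELEVANT[episode]]

    print(attend("drive to park", ["traffic", "sticks", "road signs"]))  # sticks filtered out
    print(attend("starting fire", ["traffic", "matches", "humidity"]))   # traffic filtered out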

This is all very natural for us to handle mentally, because we live, think, and talk in this manner. No AI research project has a really thoroughgoing methodology for capturing this type of reasoning. One marginal exception is the SOAR agent (John Laird and colleagues).

https://mitpress.mit.edu/books/soar-cognitive-architecture

As Ben Goertzel has noted, almost every methodology presumes that episodic memory will just somehow "emerge" from the mechanics of simpler systems. But there is never any description of how this actually happens. It is simply believed.
User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014
Braininvat liked this post


Re: Artificial Intelligence thread

Postby wolfhnd on March 3rd, 2016, 6:26 am 

Daniel Dennett, when talking about what would be required for a robot to be able to sign a contract, stated that the robot would have to have "skin in the game". In other words, a robot has to value its own existence.

I think it is hard for a lot of people to understand the relationship between emotions (instincts) and intelligence.

I want to turn to evolutionary psychology to explain what I mean.

Feeling the Future: The Emotional Oracle Effect, by Michel Tuan Pham, Leonard Lee, and Andrew T. Stephen:

"Eight studies reveal an intriguing phenomenon: individuals who have higher trust in their feelings can predict the outcomes of future events better than individuals with lower trust in their feelings. This emotional oracle effect was found across a variety of prediction domains, including (a) the 2008 US Democratic presidential nomination, (b) movie box-office success, (c ) the winner of American Idol, (d) the stock market, (e) college football, and even (f ) the weather. It is mostly high trust in feelings that improves prediction accuracy rather than low trust in feelings that impairs it. However, the effect occurs only among individuals who possess sufficient background knowledge about the prediction domain, and it dissipates when the prediction criterion becomes inherently unpredictable. The authors hypothesize that the effect arises because trusting one’s feelings encourages access to a “privileged window” into the vast amount of predictive information that people learn, often unconsciously, about their environments."

http://www.columbia.edu/~tdp4/Pham-Lee- ... CR2012.pdf

Many scientists will tell you that it is their emotional life that leads to their insights. It isn't the emotions of rage or fear but the quiet passion and faith that lead to access to internal computations we are often unaware of. What is too often overlooked, I think, is that without the scaffolding of feelings (instincts) there could be no intellect. That said, I don't think we want to try to re-evolve AI in our own likeness. What I am suggesting is that feelings and consciousness are intricately linked.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)
Braininvat liked this post


Re: Artificial Intelligence thread

Postby Inchworm on March 3rd, 2016, 11:21 am 

How about playing with the idea that learning is only meant to build automatisms, which can only be initiated by trial and error and integrated by repetition? On this view, sensations are a way to consciously test our future automatisms, and to control them subconsciously once integrated; and feelings are a way to anticipate the good sensation a new move will produce, so that we can take the risk of testing it. Once the move is executed, if the real sensation coincides closely enough with the anticipated one -- if it doesn't hurt too much -- it is reproduced, but the part that hurts is changed, and changed randomly, since there is no way to know what is going wrong. That produces a new feeling that tells whether the new part can be tested, and the cycle resumes. Once the real sensation coincides exactly with the anticipated one, there is no more need for the mind to stay conscious of what it is doing. The new move then starts to be executed subconsciously, until a new sensation arises from its execution, which means for the mind that, once again, the move has to be consciously adjusted to its environment.

On this view, trusting your own feelings about the future doesn't mean you are going to be right when you try a new move for real; it only means you think you are better at gambling than others, which is not really a good way to secure your future. On the other hand, using our emotions as a guide for securing the future means that we expect no drastic change in the short term, which is often the case, because societies do not change as fast as individuals.
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada


Re: Artificial Intelligence thread

Postby wolfhnd on March 3rd, 2016, 2:38 pm 

I'm afraid I don't know what the goal of AI is. I assumed it was to make more competent robots?

If the goal is to produce creative machines, then I'm not sure the two goals intersect. In science fiction they intersect when the electronic brain designed to design robots develops independent intentionality. It could be that to solve the problem we have to give up control of the process, which may be inherently frightening to many people, or at least counterintuitive.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Inchworm on March 3rd, 2016, 2:51 pm 

It might be easier for intelligent computers to travel in space than for intelligent humans: no need for food or water or air, just electricity, and the possibility of changing bodies when the hardware gets old.
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada


Re: Artificial Intelligence thread

Postby wolfhnd on March 3rd, 2016, 3:03 pm 

Inchworm » Thu Mar 03, 2016 6:51 pm wrote:It might be easier for intelligent computers to travel in space than for intelligent humans. No need for food or water or air, just electricity, and the possibility to change bodies when the hardware gets old.


Well I think we will just have to wait and see if hyksos really wants to abandon utilitarianism or not.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby wolfhnd on March 6th, 2016, 6:16 pm 

This thread seems to be dead but I really want to know what the goal of AI research should be. Is it to give a machine consciousness for the pure joy of science or to advance civilization or both and more?
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Natural ChemE on March 7th, 2016, 10:27 am 

wolfhnd » March 6th, 2016, 5:16 pm wrote:This thread seems to be dead but I really want to know what the goal of AI research should be. Is it to give a machine consciousness for the pure joy of science or to advance civilization or both and more?

Folks have lots of different motivations, ranging from intellectual curiosity to a desire for ultimate power over the universe.

Overall it's an all-roads-lead-to-Rome issue. Every intellectual pursuit ultimately leads to AI if followed far enough, because no matter what the intellectual pursuit is, it's incomplete so long as it excludes an analysis of the intelligences involved in the analysis.

For example, say you're doing something as seemingly simple as making a home vacuum cleaner. If you want to make it as good as possible, you'd probably do the Roomba thing and have it operate and maintain itself. But then, what exactly is the best algorithm for a Roomba?

Seriously, the best-Roomba-algorithm question is a great thought experiment. You should find that there's no simple optimum, but that if you include intelligence in the machine, it could learn about its environment, its owners, etc., and improve to better serve its intended purpose. A sufficiently advanced Roomba should optimally expand its own objective scope; for example, why not have it reach up to turn off the lights in an empty room when that's optimal, or escape a burning house after calling the fire department? Due to incompleteness, the vacuum's algorithm can never be known to be perfect, so the on-board AI will generally continue to be useful unless prohibited by logistics.

Any intellectual pursuit leads to AI if followed long enough - even seemingly trivial intellectual pursuits like how to build home vacuum cleaners.

Personally I think that AI's going to be a fundamental component of all technologies as we get better at computation. Self-driving cars, smart thermostats, automated vacuum cleaners, etc. are just our very first steps into a fundamental basis for understanding that we're only just now developing. Our descendants will see AI as integral to their lives as electricity is for us, but today it's very limited, like electricity was around the time the light bulb was invented.
Natural ChemE
Forum Moderator
 
Posts: 2754
Joined: 28 Dec 2009


Re: Artificial Intelligence thread

Postby Inchworm on March 8th, 2016, 3:21 pm 

This kind of slave AI would answer our short-term needs, but what about the long-term ones? What about a free AI? Could a free AI help us predict our future needs? Could it be more inventive than we are?
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada


Re: Artificial Intelligence thread

Postby hyksos on March 16th, 2016, 8:32 pm 

User avatar
hyksos
Active Member
 
Posts: 1232
Joined: 28 Nov 2014


Re: Artificial Intelligence thread

Postby wolfhnd on March 17th, 2016, 1:30 pm 



I read the article.

I have to admit that I had assumed more progress had been made than the article indicated. What is clear to me is that there has been a gross underestimation of how much human intelligence is based on "instinctual" structures in the brain -- natural-language learning structures, for example. I don't think anyone understands how a fruit fly is "intelligent" enough to find a mate and then copulate with a brain the size of a pinhead, but I would characterize the fruit fly as having great common-sense abilities. Common sense may be the problem to solve for true automation and self-programming, but many utilitarian tasks may not require it.

I don't think the lack of common-sense abilities necessarily precludes computers from taking over "creative" tasks, however. Many things we think of as complex problems can be solved by random-generation algorithms.

How is the self driving car problem coming along?
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Inchworm on March 17th, 2016, 2:05 pm 

That's what I was going to say. To me, intelligence ultimately concerns only one phenomenon: motion. Living beings that do not move do not need a brain. I ate the article too, but I'm still hungry: nothing about the different ways the brain and the computer treat information, and nothing about the different memories involved. If we want computers to get intelligent, we must first understand how the brain works.
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada


Re: Artificial Intelligence thread

Postby wolfhnd on March 17th, 2016, 2:37 pm 

Inchworm » Thu Mar 17, 2016 6:05 pm wrote:That's what I was going to say. To me, intelligence ultimately concerns only one phenomenon: motion. [...]


Well, I agree that time and motion are important parts of the problem. For the fruit fly I mentioned, navigating through space means that some system for tracking time is needed. On the other hand, the one advantage computers have over biological systems is speed of calculation. The fact that slow biological brains are better at many tasks implies, to me at least, that a breakthrough is still possible with some as-yet-unthought-of solution. How much programming could there be in a fruit fly's brain? However these biological systems work, there must be some simple algorithms at the heart of them.
User avatar
wolfhnd
Resident Member
 
Posts: 4525
Joined: 21 Jun 2005
Blog: View Blog (3)


Re: Artificial Intelligence thread

Postby Inchworm on March 17th, 2016, 3:24 pm 

There is no need for a move to be faster in the head than in reality. If things around us moved at the speed of sound, for instance, there would be no need to think at the speed of light. At first, the brain was made to move with respect to actual stimuli. It still does, but it has developed a way to imagine future stimuli and to act as if they were actual. The parallel way the brain works processes more information per second than a computer, though, so fast processing does not depend only on the speed of the signal. Moreover, information can take multiple directions at a time in a three-dimensional array, whereas it can take only one at a time in a single wire.
User avatar
Inchworm
Member
 
Posts: 604
Joined: 25 Jan 2016
Location: Val-David, Quebec, Canada

