Turing Complete

Discussions concerned with knowledge of measurement, properties, and relations of quantities, theoretical or applied.

Turing Complete

Postby hyksos on January 1st, 2017, 3:12 pm 

I don't want to be anyone's personal tutor on this forum. But after so many repeated misunderstandings, I felt compelled to create this thread. I hope that the content of this thread will help anyone gain an intuitive grasp on what it means that an algorithm or physical system is "Turing Complete".
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby hyksos on January 1st, 2017, 4:49 pm 

Prior to the groundbreaking work of Alan Turing himself, mathematicians had reduced all the branches of mathematics under a single umbrella framework -- now called Set Theory.

The methods used to uncover set theory were procedures of very rigorous proof writing. This is a type of proof-writing where every individual character in the written proof is accounted for. Every single line of the proof can be shown as a direct deductive inference from earlier lines. Today this type of proof-writing is called a "Hilbert-style proof". If you take a university course on the topic, you may hear the individual lines called well-formed formulas, within a system of sentential logic.

Prior to this time, mathematical proofs were far more intuitive, allegedly derived from flashes of genius in the mathematician's mind. One famous example is Leonhard Euler and how he wrote proofs about infinite series. His methods would make any high school teacher cringe, because they lacked the "rigor" of modern methods. But things were looser and less formalized back then.

Proof-writing became so formal and compartmentalized, that men of the early 20th century began to toy with the idea that proofs could be written mechanically, by a machine. One such mathematician who really took the idea and ran with it was Alonzo Church. Another one of these men: Alan Turing.

Alan Turing imagined an actual metal machine with moving parts, which moved a tape through a reader. The tape would contain a linear list of bits, 1 or 0. The machine would read the tape and change its gear settings based on what it read. The machine could erase and write bits onto the tape. Turing never built one, and never intended to; he was interested in how such a machine would perform mathematically. So its speed was considered fast enough for any application, and its tape infinitely long.
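The machine described above is simple enough to sketch in a few lines of code. Here is a minimal Python sketch (my own illustration, not anything from Turing's paper; the state table here implements binary increment, and all the names are invented for the example):

```python
# Minimal Turing machine: a state table maps (state, symbol) to
# (symbol_to_write, head_move, next_state). Tape cells outside the
# written region read as '0' (the blank symbol here).

def run_tm(tape, table, state, halt_state, max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(pos, '0')
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, '0') for i in range(lo, hi + 1))

# Example program: binary increment. Scan right to the end marker '_',
# then walk left, turning carry 1s into 0s until a 0 becomes 1.
INCREMENT = {
    ('scan', '0'): ('0', 'R', 'scan'),
    ('scan', '1'): ('1', 'R', 'scan'),
    ('scan', '_'): ('_', 'L', 'carry'),
    ('carry', '1'): ('0', 'L', 'carry'),
    ('carry', '0'): ('1', 'L', 'done'),
}
```

Even this toy version has everything the definition requires: a finite state table, a movable head, and an unbounded tape.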

[attachment: glowingturing.png]


The mathematical model of the 'reader' in a Turing machine is now called a finite state automaton (FSA). Church and Turing began to realize that if the FSA in their Turing machine had a rich enough suite of states and transitions, it could run any algorithm that any other computing machine can run. In their own writing, they said that any idealized computing machine (be it the tape-reader thing, or whatever else) was capable of realizing the entire class of recursively enumerable functions.

In our modern tech jargon, we say that a Turing Complete machine can emulate any other computer. It is important to note that there is no stipulation of how difficult this would be to do in practice, nor how long it would actually take to run an algorithm using such emulation. There are also no stipulations about how finely-constructed its inner circuits must be, or how the underlying circuitry is represented on the tape. (For more digression on this subject, see Busy Beaver functions: https://www.youtube.com/watch?v=CE8UhcyJS0I )

From Tape to Workstation.

A Turing machine would build up sections of its tape to act as "RAM", a "hard drive", and a "call stack". Then there would be a part of the tape to store something resembling an "operating system". Because the writing and reading of bits on a tape is not "addition", subroutines would be written in the simplified state-transition language of the TM to emulate addition. Then other subroutines would be built up to eventually emulate an Arithmetic Logic Unit (ALU). Armed with a functioning ALU (in software, of all things), the Turing Machine could begin to emulate a genuine CPU. Because it can add, it can now keep track of this incredibly useful thing called an "instruction pointer". The processors inside our cellphones today perform all of this with physical circuitry -- but a Turing Machine does not come packaged with such luxuries.
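To make the "instruction pointer" idea concrete, here is a toy fetch-execute loop in Python. The mini instruction set is invented for the example (it is not any real machine), but it shows the key trick: memory holds instructions and data alike, and the instruction pointer is just a number the machine does arithmetic on.

```python
# Toy fetch-execute cycle: each instruction is an (opcode, argument)
# pair occupying two memory cells; ip is the instruction pointer.

def run_vm(memory):
    mem = list(memory)
    acc = 0      # accumulator register
    ip = 0       # instruction pointer
    while True:
        op, arg = mem[ip], mem[ip + 1]
        ip += 2                           # default: next instruction
        if op == 'LOAD':
            acc = mem[arg]
        elif op == 'ADD':
            acc += mem[arg]
        elif op == 'STORE':
            mem[arg] = acc
        elif op == 'JNZ':                 # jump if accumulator nonzero
            ip = arg if acc != 0 else ip
        elif op == 'HALT':
            return mem

# Program: sum the numbers 5 down to 1 into cell 17.
# Cells 0-15 are code; cell 16 is the counter, 17 the total, 18 holds -1.
program = [
    'LOAD', 16,   #  0: acc = counter
    'ADD', 17,    #  2: acc += total
    'STORE', 17,  #  4: total = acc
    'LOAD', 16,   #  6: acc = counter
    'ADD', 18,    #  8: acc = counter - 1
    'STORE', 16,  # 10: counter = acc
    'JNZ', 0,     # 12: loop while counter != 0
    'HALT', 0,    # 14: stop
    5, 0, -1,     # 16: counter, 17: total, 18: constant -1
]
```

After `run_vm(program)`, cell 17 holds 5+4+3+2+1 = 15. A Turing machine doing the same thing would have to build every one of these conveniences (registers, addressing, the jump) out of raw tape operations first.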

Eventually the TM would segregate part of its tape to store instructions. Those instructions in turn would cause the Turing Machine to run subroutines, and eventually realize programs that look something like our modern algorithms. Before long, the TM could compute trillions of vector operations (per second?), allowing it to run Crysis.

Cellular Automata

Throughout the 20th century, a number of heavy-hitter scientists contributed their 2 cents to descriptions of machines which can compute anything, (such as Von Neumann's "universal constructor" and other oddities.) As the decades passed, smarty pantses continued to find simpler and simpler rules which can compute anything.

Around the year 2000, the simplest known Turing-complete rule was proven universal. It is called Rule 110. It is a 2-state, 3-neighbor elementary cellular automaton, operating on a 1-dimensional array of cells. The visuals for Rule 110 show not the rule itself, but "slices" of its array states stacked on top of each other, with time running downwards.
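For the curious, Rule 110 itself fits in a few lines: the rule table is just the binary expansion of the number 110. A sketch in Python (the array width and the fixed-zero boundary are choices made for this example):

```python
# One step of Rule 110: each cell's next value is a bit of the number
# 110, indexed by the (left, center, right) neighborhood read as a
# 3-bit number. Cells outside the array are treated as 0.

def rule110_step(row):
    n = len(row)
    def nbhd(i):
        l = row[i - 1] if i > 0 else 0
        r = row[i + 1] if i < n - 1 else 0
        return 4 * l + 2 * row[i] + r
    return [(110 >> nbhd(i)) & 1 for i in range(n)]

# A single live cell grows a pattern to the left -- the classic
# Rule 110 picture when successive rows are stacked downward.
row = [0, 0, 0, 0, 0, 0, 1, 0]
history = [row]
for _ in range(3):
    row = rule110_step(row)
    history.append(row)
```

Printing `history` row by row reproduces exactly the stacked "slices" picture described above.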

The simpler a rule is, the more heinously difficult it becomes to construct even the simplest useful logical functions with it. It would be an enormous feat to produce a NAND gate using Rule 110; the starting state alone would likely be thousands of cells wide. Very careful design is required to realize even a single primitive logic gate, let alone more complex algorithmic features. For comparison, a modern CPU in 2016 contains around 450 million gates. On top of that, physical computers already have lots of engineered aspects going for them: RAM segregated from the CPU by a motherboard, a bus, and peripherals in slots. Cellular Automata have none of that luxurious jazz, and would need to emulate it somehow. Some people may claim that there is something inherent in simple Cellular Automata that makes them "physically natural" in some way. The above numbers show that this claim is unfounded -- even unreasonable.

Conway's Game of Life (hereafter ConwayGOL) is now proven to be Turing complete, but its universal computing ability was for a long time only assumed to be true by the community. One route to the proof involves showing that there exists a pattern which emulates the functionality of Rule 110. In general, if rule X can emulate Rule 110, then it follows that rule X is Turing Complete.

ConwayGOL is a 2-state, 8-neighbor sum-symmetric rule. One might ask how 'simple' ConwayGOL is compared to Rule 110. At first glance, they appear just about equal. This is very far from the truth. If you write out ConwayGOL as a full rule table over its 9-cell neighborhood, that table is an array of 2^9 = 512 bits. That is far larger than the paltry 8 bits of Rule 110's entire rule table.
[attachment: r110totali.png]


ConwayGOL gets a lot of press due to its very simple description. But it is only one possible rule out of all the 2-state sum-symmetric rules which operate on 2D grids. Sum-symmetric rule definitions follow the convention of Birth-and-Survive, written B and S respectively. "Birth" means an off cell becomes on in the next cycle. "Survive" means an on cell remains on in the next cycle. If a neighbor count is not mentioned in the rule string, the cell does the default behavior of remaining dead or dying.

ConwayGOL is B3/S23
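The B/S notation translates almost directly into code. Here is a minimal Python sketch of one B3/S23 step (a dense grid with a fixed dead boundary, chosen for simplicity; the blinker test pattern is the standard period-2 oscillator):

```python
# One generation of B3/S23: birth on exactly 3 live neighbors,
# survival on 2 or 3, death otherwise.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[i][j]
                   for i in range(max(r - 1, 0), min(r + 2, rows))
                   for j in range(max(c - 1, 0), min(c + 2, cols))
                   if (i, j) != (r, c))
    out = []
    for r in range(rows):
        new_row = []
        for c in range(cols):
            n = live_neighbors(r, c)
            # B3: dead cell with 3 neighbors is born; S23: live cell
            # with 2 or 3 neighbors survives
            new_row.append(1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0)
        out.append(new_row)
    return out

# The "blinker" oscillates between a horizontal and a vertical bar:
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

Changing the two conditions in the `new_row.append` line is all it takes to implement any of the B/S rules in the list below.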

Here is a list of sum-symmetric 2D grid rules which are proven to be Turing Complete, as of January 2017.
Code:
B3/S238   (Eightlife)
B3/S237
B3/S2378

B38/S237
B38/S2378

B37/S23
B37/S237
B37/S238
B378/S237
B378/S2378

B36 /S23
B36 /S238
B368/S23
B368/S238


In the last few years, other Cellular Automata on grids have been found that appear to be even simpler than Conway's. One of them is a block cellular automaton that updates the grid by localized rotations. It has only one rule. It was investigated by Russian mathematician Dmitry Shintyakov. It appears to be very complex, but there is as yet no proof of its Turing Completeness.

Its entire rule is:
Rotate clockwise by 90° every 2×2 block with exactly 1 alive cell.


That's it. It's eerie to know that an algorithm that simple can be so complex.
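Here is a sketch of how that single rule can be coded, assuming the usual Margolus-style setup: the grid is partitioned into 2×2 blocks, the partition offset alternates between steps, and edges wrap around. Those framing details are my reading of the rule, so treat them as assumptions for the example:

```python
# Single-rotation block CA on a toroidal n-by-n grid. Blocks start at
# even coordinates on even steps and odd coordinates on odd steps.

def single_rotation_step(grid, odd_phase):
    n = len(grid)
    new = [row[:] for row in grid]
    off = 1 if odd_phase else 0
    for r in range(off, n + off, 2):
        for c in range(off, n + off, 2):
            cells = [(r % n, c % n), (r % n, (c + 1) % n),
                     ((r + 1) % n, c % n), ((r + 1) % n, (c + 1) % n)]
            tl, tr, bl, br = (grid[i][j] for (i, j) in cells)
            if tl + tr + bl + br == 1:
                # rotate the block 90° clockwise:
                # new tl <- bl, tr <- tl, br <- tr, bl <- br
                tl, tr, bl, br = bl, tl, br, tr
            for (i, j), v in zip(cells, (tl, tr, bl, br)):
                new[i][j] = v
    return new
```

A lone live cell simply orbits inside its block; the famously complex behavior only emerges when cells in neighboring blocks start interacting across the alternating partitions.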

For more see : http://dmishin.blogspot.com/2013/11/the-single-rotation-rule-remarkably.html


Re: Turing Complete

Postby Dave_C on January 2nd, 2017, 11:51 am 

Hi Hyksos. Thanks for posting this. I’d like a better understanding on some of this, it’s a topic that has always left me a bit puzzled.

If I look at a definition of “Turing Complete”, Wiki gives me:
In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any single-taped Turing machine.


My interpretation (please correct me where I'm wrong) is that a Turing machine can be interpreted as performing any given mathematical calculation, so anything else that can also do that is 'Turing complete', meaning it can be used to perform any mathematical calculation.

The phrase ‘interpreted as performing …’ means that the “physical system implements a computation if the causal structure of the system mirrors the formal structure of the computation” (Chalmers) meaning there has to be some sort of 1 to 1 relationship between what the machine or CA is doing and the mathematical computation it represents.

Conway’s GOL doesn’t seem to be performing a computation per se. I would interpret Conway’s GOL as a ‘game’ of sorts. I don’t see how the GOL can be interpreted as performing any (every?) mathematical computation. Similarly, the “Busy Beaver” video you provide a link to seems to be a similar game based on a very simple mathematical set of rules. I guess I can see the potential for a CA to perform any mathematical computation, but I don’t understand how these particular, very simple, sets of rules can be interpretable as performing any computation and therefore how they can be interpretable as being Turing complete.

Thanks for any insight.
User avatar
Dave_C
Member
 
Posts: 270
Joined: 08 Jun 2014
Location: Allentown


Re: Turing Complete

Postby hyksos on January 2nd, 2017, 9:21 pm 

I guess I can see the potential for a CA to perform any mathematical computation, but I don’t understand how these particular, very simple, sets of rules can be interpretable as performing any computation and therefore how they can be interpretable as being Turing complete.

The simple rules do not perform any computation -- yet. In all the proofs, the mathematician is showing that there exists a configuration of 1s and 0s on a grid which can perform some particular fundamental computation. The smarty-pantses who discovered simpler and simpler rules include Claude Shannon and Marvin Minsky. Those guys were actually working with things called Tag Systems:

https://en.wikipedia.org/wiki/Tag_system

A little more on how laborious this is: imagine a person who wants to write software but finds assembler code too 'easy'. Instead, this eccentric fellow wants to go ahead and program a computer directly in machine code. A person who would find that exercise "fun" is the kind of person who would try to make Rule 110 perform a computation. ( Yes, it is possible -- but at what cost? )

For Rule 110, Matthew Cook had to laboriously show how gliders can overlap each other without destroying their relative locations from left to right. The relative locations of the gliders can then be 'interpreted' by an even larger grid as differentiated "symbols". Cook needed the grid/array to be able to copy symbols to the end of the array and concatenate them there. As the proof goes, it shows that Rule 110 can emulate a Tag System. (Voila. Quod erat demonstrandum.) It's clumsy and no sane person would think too far into this. But mathematicians agree, it does work -- albeit in a very ugly way.
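To make "emulates a Tag System" concrete: a tag system repeatedly deletes the first m symbols of a word and appends a fixed string chosen by the symbol that was at the front. Here is a Python sketch using a small 2-tag system often cited in the literature (a→bc, b→a, c→aaa), which tracks the Collatz map on words of a's; the attribution of this example to Liesbeth De Mol is from memory, so treat that as an assumption:

```python
# Tag system: read the first symbol, delete the first m symbols,
# append that symbol's production to the end. Halt when the word is
# shorter than m.

def run_tag(word, rules, m=2, max_steps=10_000):
    trace = [word]
    for _ in range(max_steps):
        if len(word) < m:
            break
        word = word[m:] + rules[word[0]]
        trace.append(word)
    return trace

# This 2-tag system sends the word a^n through the Collatz map:
# a^n leads to a^(n/2) for even n, a^((3n+1)/2) for odd n.
COLLATZ = {'a': 'bc', 'b': 'a', 'c': 'aaa'}
trace = run_tag('aaa', COLLATZ)
```

Starting from `aaa` (n=3), the words consisting purely of a's that appear along the run have lengths 3, 5, 8, 4, 2, 1 -- exactly the Collatz-style trajectory of 3 -- and the system halts at the single word `a`.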
Last edited by hyksos on January 2nd, 2017, 9:31 pm, edited 1 time in total.


Re: Turing Complete

Postby Scott Mayers on January 2nd, 2017, 9:23 pm 

I find all this interesting and still continue to learn. But for this topic, as much as I've read of Turing so far, you have to keep stopping to try to absorb WHAT elliptical thinking is involved external to his own. The definition on the Wikipedia page is insufficient and circular. One of the major problems in teaching this subject is that those who put in the effort demand OWNERSHIP of their discoveries, which tends to obscure meaning, because they trademark generic ideas with names that require MORE effort to decode before you can understand.

While "Turing" is given the name for his 'machines', the simplest introductory label of this is "FUNCTION", and represents the simplest set of functions possible for ANY machine to convey information. So it uses an 'imaginary' tape that is infinite (non-circular) that has a space(s) for DATA and a space(s) for REFERENCE ('address' in modern computing). He tried to find the simplest set of ways to deliver some information, receive it, and/or change it. These three simple concepts for ONE space is all that is logically and minimally needed to exchange INFORMATION.

These are the minimal functions simply for ONE space, and thus, for our modern minimal sense, of a zero or one as information. Then he extended this to a sequence of ordered spaces (the tape). For this, you need an extra 'function' to MOVE one space forward or backwards. So reduced, this means that for a FUNCTION of information 'trading', you need a "print" (or write) command (or function), a "read" command, and a "move" (or transfer) function. [Actually, to be more complete, he missed two transfer-types: one to transfer information from one 'cell' on the tape to or from one 'cell' on his moving head (a cellular contemporary memory unit). The 'erase' is done by 'printing' a symbol of nothing and so is absorbed by it.]

The "program" is done externally (at first) by what he puts on cards. So you might want to have a list that begins with a read command, print, or move. Instead of having these distinctly written, he imagined ALL commands as a SINGLE command. (for those with background in architecture, this is a Reduced Instruction Set Computer.) He needed also information to "stop" the reading on the card (the program) or it would run forever or "hang".

Then after he had formulated the minimal information to do this on the tape, he wondered if, instead of simply using cards, he could have a blank or initial GENERAL card of instructions that would BEGIN reading the tape to use the data on it to load the SPECIFIC information, so that the computer could be UNIVERSAL. So the "general" instruction is used for ALL computing devices, but the program can be written on the very tape using those general functions above. Once the initial tape HAS that program, then ANY number of programs can be written to the tape and voila, we have a modern computer. [The "general" card or program is the start-up BIOS hardware that loads the first program. If it uses more than ONE program, it is ideal to have the first one be a commanding one, an "operating system".]

The above description making such a 'logical' machine work IN GENERAL, was the name given to a Turing Machine. I can't be sure, but I believe the "complete" adjective is meant to mean including a GENERAL program written on the tape as I just defined. This way, it can be used to make ANY kind of information exchanging machine 'complete' rather than individual or proprietary, like a calculator that requires YOU to 'print' in each data and specific function (like add or subtract or any of the input buttons) to 'read' by your own interpretation of the simple numbers. A general computer that can do anything in kind to "Turing Complete" will be one that is programmable.

P.S. well-formed-formulas are about defining data as symbols, its order and how the 'form' of the logic is to take consistently. The grammar, syntax, and semantic interpretations of ANY logic. So the minimal data symbols being used to convey information enough to run the general computer has a minimal set of symbols, and to make them conform to a process, the 'rules' of how to interpret the functions in a 'convention' is 'well-formed'. I would ignore this if you are just trying to understand the first ideas of computing though.
Scott Mayers
Member
 
Posts: 326
Joined: 04 Aug 2015


Re: Turing Complete

Postby Scott Mayers on January 2nd, 2017, 9:33 pm 

Hyksos, You may have discussed this somewhere that I missed and so am wondering why particularly you raised this?

The extended link and discussion on the goal there is about determining how a computer can exhaust (enumerate) certain possible overall numbers on the tape. There it deals with how many '1's you can make the computer print, but it seems too in-depth to relate to just what "Turing complete" means. All the term refers to is what a general computing machine requires (to read and run any kind of pre-written program on the tape), rather than having to key in the program each time, similar to the way we might have had to do without an operating system on the original "Commodore" type home computers.

EDIT: To clarify appropriate to the wiki page, the whole general computer with the program on the tape is a "computationally universal" Turing machine where the 'completeness' may simply refer to the minimal set of FUNCTIONS and setup needed to read, write, print, and move on an infinite tape.


Re: Turing Complete

Postby hyksos on January 2nd, 2017, 9:51 pm 

Dear Scott Mayers,

There is no rigorous definition of "Turing Complete". This is why, to this day, it is still called the Church-Turing Thesis, not the Church-Turing Theorem.

The word "complete" is attached as a play on the phrase "Non deterministic polynomial Complete" often written NP-Complete.

These highly-reduced "Beaver-like" computing agents can easily lead our mind astray. They are too tightly recursive and blur the line between value data and instruction data. The fundamental aspect is the ability of the machine to load a program from memory and execute it -- making it a so-called General Purpose Programmable Computer.

There is a sense that a programmer of the device would not know what the instruction movp corresponds to in terms of a machine code number. There is even a sense that he should not know. He's not supposed to be thinking of instructions as numbers. Busy Beavers think everything is 1 or 0, even parts of the tape that would be protected portions of the Operating System. No sane person would tinker in that part of RAM.


Re: Turing Complete

Postby Scott Mayers on January 2nd, 2017, 10:22 pm 

The definitions on the Wikipedia page are either circular or require prerequisite education to interpret the terms. I simplified it to what it INTENTIONALLY means. Turing used multi-valued terms to fit neatly into one space for logical simplicity. He had to, in order to later add the program onto the cards to be interpreted with the functions, AND because he didn't define a physical means to 'address' each memory unit ONLY. It would be more complex without this, because he couldn't have used the skipped memory spaces as 'marks' for labels had he simplified it to only ones and zeros.

In modern computers, we treat the addresses sequentially with alterable data and instructions where needed. So his tape 'form' was like:

(data), (markup), (data), (markup), ...

Because of varying-sized data, had we used only '0's and '1's, he opted to simplify this to have one space as data (multi-valued but as minimal as possible) and one space as a marker to help identify the data to its left or right. Since he wasn't concerned with an actual computer, it doesn't matter for his intent, which was to illustrate how it can work. The markup spaces could be modified as needed and represent only working spaces, except for the initial data and markup that are both used to identify the beginning of the program to read.


Re: Turing Complete

Postby Dave_Oblad on January 3rd, 2017, 3:47 pm 

Hi Guys,

I ran across this last night and watched it. I found it very interesting. It is a bit dated but still relevant.

[embedded video]

It gets a bit heavy in places and feels a bit rushed.. but offers an interesting perspective on Turing Complete.

Regards,
Dave :^)
User avatar
Dave_Oblad
Resident Member
 
Posts: 3213
Joined: 08 Sep 2010
Location: Tucson, Arizona
Blog: View Blog (2)


Re: Turing Complete

Postby Scott Mayers on January 3rd, 2017, 5:11 pm 

Dave,

I'm enjoying watching this video and am just pausing at the 33 minute mark, because while I like it, I am not sure how this is related to what "Turing Complete" is by your interpretation. Is there something discussing this particularly later on, other than the use of Turing machines he mentioned briefly as a means to 'model' reality? That is, he wasn't defining the concept of "completeness" nor "Turing" there; he assumes his listeners already understand it.


Re: Turing Complete

Postby Dave_Oblad on January 3rd, 2017, 5:40 pm 

Hi Scott,

Not my interpretation of Turing Complete.. just his. I seem to recall he does a very good job of defining the concept of what is, and isn't, Turing Complete. I fall way short of holding a candle to his experience and credentials.

Best wishes,
Dave :^)


Re: Turing Complete

Postby Scott Mayers on January 3rd, 2017, 6:40 pm 

Dave_Oblad » January 3rd, 2017, 4:40 pm wrote:Hi Scott,

Not my interpretation of Turing Complete.. just his. I seem to recall he does a very good job of defining the concept of what is, and isn't, Turing Complete. I fall way short of holding a candle to his experience and credentials.

Best wishes,
Dave :^)

I'm now at the 55 minute mark and am getting lost by the increased use of terms I'm unfamiliar with. I understand the logical concepts involved but find it frustrating that there actually is no 'universal' means to stick to one set of vocabulary for all areas of study that discuss the same concepts over and over again, without realizing they are not discussing anything 'novel', just discussing it in 'novel ways'.

The whole topic can be reduced to questioning whether we can find some means IN reality to interpret the whole of reality OUT or beyond our own experience. Godel's Incompleteness theorem suggests we cannot. But this was actually intrinsically understood when 'irrational numbers' were discovered using different language. The nature of being unable technically to determine the n-th term of some irrational number like Pi is just the same argument about whether we can use some computer to ideally solve any problem as such.

I think the competing ideas deal with whether we could determine if nature itself is 'determinate' in the likeness of some God-like ideal perspective. The 'modeling' Stephen mentioned at the beginning demonstrates my understanding in this sense. That if we were to LITERALLY complete the explanation of something using some model, that model itself would actually BE the very thing itself and no longer be a mere model because it would have to exhaustively demonstrate all the details of that reality.

"Turing Complete" DOES require explaining what is NOT by contrast and so I'll have to see whether he satisfies this for me. To me, any 'completeness by X' just simply reduces to meaning that something qualified by that meaning exhausts some range of expectations that X is understood to be able to do by definition. But it is unnecessarily a more complex way of just saying that

If all things we 'posit' in some Universe at least do not defeat its existence by its definition, then what we 'posit' that speaks about that Universe may justify its existence for being posited but not able to be disproved.

I think that this is just a way of trying to rationalize some 'closure' where something is literally unable to be 'closed' with consistency.

Stephen appears to be supporting the same idea that a something can derive from nothing by some degree of consistent logic of some sort. If this can be shown in a 'consistent' way, then it can rationalize ALL reality consistently in our Universe too. I disagree with trying to find some way out of the paradoxes by beginning with a consistent logic as some 'origin'. But you can show as is done over and over and over again in different guises that the nature of beginning with 'consistent' logic cannot be 'closed' (completed). Then this should be sufficient to PROVE that reality as a whole (totality versus our little part of it) is caused by a type of relatively 'irrational' origin.

Some would place some particular 'god' in that explanation but this still just circularly tries to make 'rational' to that 'irrationality'. The only thing that least justifies this problem is to assume nothing at some 'origin' or assume absolutely everything. But if even everything is assumed, so should an 'origin'. So we are back to assume nothing as a logical origin.

So everything is reduced to a non-axiomatic origin because not even ANY 'law' exists there to be forced to BE 'consistent' because the idea of "law" itself BEGS some obedience to be consistent. You can at least argue that for inconsistency to REMAIN inconsistent justifies HOW at least one consistency can be derived from beginning inconsistently. Then reality just separates itself to worlds that are 'consistent-biased' from the rest of totality that infinitely contains all other possibilities.


Re: Turing Complete

Postby Dave_C on January 3rd, 2017, 10:01 pm 

Hyksos - thanks. Got it. I hesitate to ask how one shows that rule 110 (or any other rule) might be interpretable as being a calculation and then how that calculation might be interpretable as being Turing complete. I suspect that's a bit too complex to explain, but if you know how to easily relate this or provide a link or nice video such as Dave Oblad's YouTube video, that would be great.

I think I saw Wolfram show rule 110 in Dave O's link early on (less than 15 minutes).

Dave O: Thanks for the video. Only watched the first 30 minutes, but it helps. What struck me is how Stephen makes the jump from those simple CA rules to physics. The similarity between CA output and nature can be striking, but I would question whether or not CAs are the right way to conceptualize physics. I hesitate to get too deep into this because the OP is not about how CAs can be used to represent physics, but clearly this is something much talked about in philosophy and science. I like Feynman's view on this aspect of CAs.
https://people.eecs.berkeley.edu/~chris ... eynman.pdf


Re: Turing Complete

Postby hyksos on January 4th, 2017, 12:41 am 

The whole topic can be reduced to questioning whether we can find some means IN reality to interpret the whole of reality OUT or beyond our own experience. Godel's Incompleteness theorem suggests we cannot. But this was actually intrinsically understood when 'irrational numbers' were discovered using different language. The nature of being unable technically to determine the n-th term of some irrational number like Pi is just the same argument about whether we can use some computer to ideally solve any problem as such.


I disagree with this statement on several counts. Godel was not writing a proof about our ability to "interpret reality". The link between pure mathematics and the physical world is tenuous at best.

Godel's Incompleteness is not a re-hashing of irrational numbers and their digit expansions. Quite rigorous proofs from calculus can give you all the terms of Pi in a panoply of infinite sums. We even have infinite products for Pi. And our system of mathematics can prove that the infinite sum does actually equal Pi -- meaning you are in no territory to invoke Godel's Theorem.

When a mathematician says that a proof for a certain proposition "exists" --- they are saying nothing about whether we could actually find it. When a mathematician says a theorem's proof "does not exist", they mean this in a literal sense: even if you could enumerate every possible string of symbols in a binary format, one by one, never, not even in all eternity, will any of those strings be the proof you seek.
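The enumeration being described here is itself mechanical. A Python sketch: generate every finite string over an alphabet, shortest first, and test each one against a proof checker. The checker `is_proof_of` is a hypothetical stand-in (actually writing one is the hard part); the point is only what "exists" means in this setting:

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over the alphabet, shortest first."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield ''.join(chars)

def search_for_proof(is_proof_of, alphabet, limit):
    # Brute-force proof search: if a proof exists, this finds it
    # eventually; if none exists, it would run forever (hence the
    # artificial step limit for the example).
    for i, candidate in enumerate(all_strings(alphabet)):
        if i >= limit:
            return None
        if is_proof_of(candidate):
            return candidate
```

"The proof does not exist" means this loop never returns, no matter how long it runs -- not that it runs too slowly for us to wait for it.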

This has nothing to do with resolution of a computing agent storing fractional numbers, and how long that would "take" to print out the nth digit.

Godel's theorem is about the existence of proofs. It really is not a consequence of round-off error.


Re: Turing Complete

Postby hyksos on January 4th, 2017, 1:05 am 

But this was actually intrinsically understood when 'irrational numbers' were discovered using different language.


Sorry to be hostile, but I just needed to double down on this particular claim you made. Godel's 2nd Incompleteness theorem was utterly unexpected by the vast majority of working mathematicians. It has consequences that "cut so deep" that Kurt Godel is considered the greatest logician since Aristotle. So the kinds of issues that the theorem touches upon run so deep in western culture, that you have to basically go back to Athenian Greece to re-visit them. That's big. Like really big.


Re: Turing Complete

Postby Scott Mayers on January 4th, 2017, 1:47 am 

You miss my point hyksos.

I understand Godel's Incompleteness theorem quite well, and it is conditional upon its own limitations: it is based on 'consistent' logic.

What it proves in the first theorem is that, given any sufficiently powerful logic (one that could be expected to cover all truths of a given domain), no such ideal logic can exist that actually covers all truths.

Explanations about reality ARE logical constructs that use our observations as minimal input assumptions (via the senses). As such, since no mathematical logic can exhaustively cover ALL mathematical truths, nor can ANY logic exhaustively cover ALL truths anywhere. This includes reality. Godel only meant to speak against Russell and Whitehead's construction of math, which initially thought it had repaired the 'closure' of math by using "Types" to overcome the paradoxes. Godel showed that even their logic, which seemed to justify all of the math they attempted, was futile because it could not resolve all truths. But this implicitly proves the same about reality (given 'consistency' as a meta-law to all logical systems, which includes science).

The second theorem reflected that any logic cannot be structured to prove itself, and this ALSO extends to our reality. If 'true' though, even his own proof reflexively cannot be trusted because it is based on consistent logic. The theorem is 'true' but means that if something exists based on 'consistent' assumptions, it too must be unable to prove something within itself consistently. As such, this is why you have to seek an 'inconsistent' logic to answer this with closure, because only they are complete.

SO, my point about relating history to this using 'irrational' numbers as an example is to show that the same kind of intellectual problems in logic recur again and again in different forms throughout history. Godel's theorem is just an update, and Turing did the same in his initial papers using different terms too. Likewise, at the same time, Quantum Mechanics' Copenhagen Interpretation of Heisenberg's "Uncertainty Principle" correlates in another set of different terms. The point is that the logic regarding non-closure (and thus uncertainty, undecidability, etc., etc.) can be collected under one logical umbrella, rather than the complicated ways all these are re-ignited in multiple ways unnecessarily.

Personally, I'd prefer sticking with Godel's, but the fact that even YOU misinterpret this as being only a limited proof that doesn't extend beyond the range of math only proves why such errors are likely: the confusion of multi-disciplinary studies re-invents some distinct idea as though it were only locally valid, when these ideas are all suitable for a simpler overall explanation collecting all fields of study.
Scott Mayers
Member
 
Posts: 326
Joined: 04 Aug 2015
Natural ChemE liked this post


Re: Turing Complete

Postby hyksos on January 4th, 2017, 4:44 am 

Scott Mayers wrote:
SO, my point about relating history to this using 'irrational' numbers as an example is to show that the same kind of intellectual problems in logic recur again and again in different forms throughout history. Godel's theorem is just an update, and Turing did the same in his initial papers using different terms too. Likewise, at the same time, Quantum Mechanics' Copenhagen Interpretation of Heisenberg's "Uncertainty Principle" correlates in another set of different terms. The point is that the logic regarding non-closure (and thus uncertainty, undecidability, etc., etc.) can be collected under one logical umbrella, rather than the complicated ways all these are re-ignited in multiple ways unnecessarily.


The only connection between these various topics you have collected here is an overarching cultural-historical trend. In the early 20th century, all of the following major changes contributed to a general trend in the so-called "Western Intellectual Tradition":
  • Godel Incompleteness.
  • The emergence of quantum mechanics.
  • Special and General Relativity.
What is the common thread that runs between these three things? It is an issue of Certainty. Philosophical Certainty was being severely damaged by those three revolutions. Northern European white men have a sensitive spot regarding certainty, and some soul-searching was required. The western mind is obsessed with order, control, and rightness of thought. We westerners have learned some hard lessons lately.

As our culture came out of the end of the 20th century, some mathematicians were even saying things like "Mathematics is more beautiful incomplete." So we transitioned from revulsion, into a sort of respect, into admiration... even 'love'.

This stuff we discuss is all history now. Today, smart people are finding and investigating numbers that cannot be described by ZFC. So some theorem being "independent of ZFC" is not shocking and disturbing like it was in the 1940s.

Personally, I'd prefer sticking with Godel's but since even YOU misinterpret this as being only a limited proof that doesn't extend beyond the range of math, only proves why such errors are likely due to the confusion of multi-disciplinary studies

The proof itself does not extend beyond the range of math, and that is just common sense. No sane person would categorize a uni course on Proof Theory or a course on Foundations as "applied mathematics". That would be ridiculous.

Nevertheless the implications of Godel's work may extend beyond pure math. I think of two ways where that could happen.

I.)

We know now that a first-order logic system can never make references to the fact that it can state propositions. We humans do this all the time with meta-lingual references. We can refer to the fact that we are speaking. An utterance in conversation itself can be referred to. FOL systems are forever locked into their own little universe of their own symbols, which can never represent sentences themselves. In this way, "systems" of deductive calculi (whatever they may be) are presumed to state everything all at once, like the theorems all form a static tree of deduction.

An artificial intelligence as a robot, instead, lives and acts in a world where there is a time prior to stating a proposition, and another time later, after the proposition has been stated. ("When I was young I was wrong about why the sky is blue, but now I am correct in my theory.") Logic and math do not allow for an agent to be wrong about something it believes, and to later revise that knowledge into something different but more true. A system which revises its knowledge in light of more information is some kind of system -- but it ain't logic.

II.)
The class of Recursively-Enumerable functions may be "larger" than ZFC. I think that it probably is. The reason is that it was discovered (just last year!) that the behavior of a Busy Beaver with roughly 8000 states is independent of ZFC. BB(8000) is kind of ridiculous, but it is a good start. There is likely a much smaller Beaver that is undecidable. And yes, there must exist a smallest undecidable Busy Beaver. Nobody has any clue which it is. Whatever it turns out to be, it will have far fewer than 8000 states.

If I had to guess, it is probably like BB(11)

... or smaller? Food for thought.
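
To make the Busy Beaver game concrete: a tape-and-table simulator fits in a few lines of Python. The sketch below is illustrative only; the transition table is the known 2-state, 2-symbol champion, which halts after 6 steps and leaves 4 ones on the tape, i.e. S(2) = 6 and Σ(2) = 4.

```python
# Minimal Turing-machine simulator for tiny Busy Beaver candidates.
# (state, symbol) -> (symbol to write, head move, next state); 'H' = halt.
RULES = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),   # the only halting transition
}

def run(rules, max_steps=10_000):
    tape, head, state, steps = {}, 0, 'A', 0   # blank tape = all zeros
    while state != 'H' and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())           # (steps taken, ones written)

print(run(RULES))  # (6, 4): the BB(2) champion
```

Searching over all n-state tables with a simulator like this (plus some way to rule out non-halters) is roughly how the small Busy Beaver values were settled; the search space explodes long before n gets anywhere near 8000.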
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby Braininvat on January 4th, 2017, 11:15 am 

https://en.m.wikipedia.org/wiki/Busy_beaver

For anyone struggling to keep up, this wiki may be helpful.

(I see that hyksos linked a BB video in his Magnum Sub-OP, so this is just an adjunct to that, where you can brush up on some particulars)

Regarding (from the sub-OP)

The simpler a rule is, the more heinously difficult it becomes to construct even the simplest of useful logical functions using it. It would be an enormous feat to produce a NAND gate using Rule 110. The starting state alone would likely be thousands of cells wide. It should be emphasized that all that careful programming and tweaking would be required to realize even a single primitive logic gate, let alone more complex algorithmic features. For comparison, a modern CPU in 2016 contains around 450 million gates. On top of that, physical computers already have lots of engineered aspects going for them. They have RAM segregated from the CPU by a motherboard, and a bus, and peripherals in slots. Cellular Automata have none of that luxurious jazz, and would need to emulate it somehow. Some people may claim that there is something inherent in simple Cellular Automata that makes them "physically natural" in some way. The above numbers show that this claim is unfounded -- even unreasonable.


Really glad someone's making this point - engineered is good. And, as Wolfram discovered, neural nets really can't be built from CA, i.e. there are sorts of natural engineering that aren't really compatible with a CA emulation. What CA can do is the sort of thing you find in GOL, where gliders can be made to interact to do various computations. Seems like you need a buttload of gliders, though. If a universe were going to "compute itself" this way, seems like you'd want some extra dimensions, and even then it would really be the Hard Way.

I'm pretty rusty (learned about this stuff in the 80s), but I gather that a Moore neighborhood might work a little better than a von Neumann neighborhood in setting up logic functions. Is that right?
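
For anyone who wants to poke at these rules directly: one step of a 1-D elementary automaton such as Rule 110 can be sketched in a few lines of Python (my own illustration; the cell encoding is an assumption, not code from Wolfram):

```python
# One step of a 1-D elementary cellular automaton (default: Rule 110).
# Bit k of the rule number gives the next value for neighborhood value k,
# where k is read off the (left, self, right) cells as a 3-bit number.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)    # left neighbor
                  | (cells[i] << 1)            # the cell itself
                  | cells[(i + 1) % n])) & 1   # right neighbor (wraps at edges)
        for i in range(n)
    ]

row = [0] * 30 + [1]          # a single live cell at the right edge
for _ in range(8):            # print a few generations
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

As for the closing question: in two dimensions the Moore neighborhood is the 8 cells surrounding a cell, while the von Neumann neighborhood is only the 4 orthogonal ones (GOL uses Moore). More neighbors means more rule inputs per cell, which plausibly makes it easier to route signals, though I'd defer to the CA literature on whether it is actually "better" for logic.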
User avatar
Braininvat
Resident Member
 
Posts: 5839
Joined: 21 Jan 2014
Location: Black Hills


Re: Turing Complete

Postby hyksos on January 4th, 2017, 3:36 pm 

I see a little bit of activity in this thread since I posted. I see braininvat's reply and Dave O mentioning Stephen Wolfram. I wanted to add some additional nuggets. I apologize ahead of time if this material is not appropriate for this forum. It may be too technical, but I just wanted to comment.

Stephen Wolfram and Juergen Schmidhuber are very excited about the implications of computability for the universe we live in. I don't know either man personally, but I have a strong feeling that they would agree with this diagram, which I put together in Photoshop.

superREF.png


In this version of things, the system of mathematics we use appears as a sub-class of all computable functions, or "REFs" for short. Then our universe operates by some subset of mathematical laws that appear as a subset of ZFC. If this diagram is true, you can get a sense of why Stephen Wolfram is so excited about the possibility of universal computation.

Schmidhuber has confidence that REFs can simulate "any universe" , including ours, if indeed our universe is computable at all.

In talks and presentations, Wolfram has said that there is a reason that all universal computing agents are "equal to" each other in some strange way. His explanation is that our situation is like the above diagram. Notice that there cannot be anything "larger" than the class of REFs. That is, you go up and up the hierarchy until you reach the class of REFs, and you hit a "ceiling" beyond which you cannot go higher. That is Wolfram's explanation for why all these Turing-complete rule systems are equal to each other.

This may not be true in the end, but folks, if that is true, then that is really profound stuff. Exciting stuff, really. I can sense why Wolfram wrote such a large book on the topic!


(I guess what I would say next would be to mention a few scientists who disagree with the above diagram. They think it might be backwards. I will wait for some replies before taking that tangent.)
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby Scott Mayers on January 4th, 2017, 7:24 pm 

hyksos wrote:
  • Godel Incompleteness.
  • The emergence of quantum mechanics.
  • Special and General Relativity.
What is the common thread that runs between these three things? It is an issue of Certainty.


Then you should get my point. But...

hyksos wrote:
Scott Mayers wrote:Personally, I'd prefer sticking with Godel's but since even YOU misinterpret this as being only a limited proof that doesn't extend beyond the range of math, only proves why such errors are likely due to the confusion of multi-disciplinary studies

The proof itself does not extend beyond the range of math, and that is just common sense. No sane person would categorize a uni course on Proof Theory or course on Foundations as "applied mathematics". That would be ridiculous.

Nevertheless the implications of Godel's work may extend beyond pure math. I think of two ways where that could happen. ...

Yes, it does extend beyond math! I don't know where your head is at.

1) Math is just a limited case of logic using number related concepts as its domain. [agree/disagree?]

2) If the BEST possible mathematical/logical system that could exist to cover all mathematical truths cannot ever be 'complete', then there is at least ONE major part of all logical systems collectively that cannot be complete, assuming a "consistent logic" [a conditional requirement of Godel's Incompleteness Theorem].
[agree/disagree?]

Example logic: If I have a class of students in one particular room of a whole school that has someone absent, then this necessarily implies that if we take the school as a WHOLE, even if all other classrooms have no absentees, the school as a whole STILL has an absentee.

3) Conclusion: If all of math is incomplete by its best consistency for ANY system of logical construction, then logic as a whole based on consistency is incomplete [agree/disagree?]

If I have one room in my house unfinished (incomplete), my house as a whole is unfinished (incomplete).

Godel was challenging only a specific subclass of logic. Had he 'posited' something about all of math alone, this would not necessarily extend to all of logic; but since his result limits EXHAUSTIVELY all possible logical systems used to describe math from being 'complete', you cannot even 'borrow' an extended logic based on the condition of "consistency" from logic as a whole, OR it would BECOME a contradiction of the fact that Godel exhausted all possible logical systems for math.

So, accidentally, his theorem sufficiently proves that there is no (consistent) system of logic that can provide a means to validate ALL 'truths' in reality. This thus includes ANY subject of reality in which we are trying to measure some 'validity' as a whole, like science.

[I don't follow the extended details you also raised and they are irrelevant to my point.]
Scott Mayers
Member
 
Posts: 326
Joined: 04 Aug 2015


Re: Turing Complete

Postby Scott Mayers on January 4th, 2017, 7:44 pm 

And...(continuation of my last post),

Since Turing's own Undecidability result relates to computing, a subset of logic involving math, Godel's Incompleteness theorem is sufficient to cover it. Turing argued a form of 'incompleteness theorem' specific to computing. What is "complete" is his limiting explanation of a logic built on a sequential 'memory space' that can completely exhaust the acts of copying, erasing, and altering symbols in each memory cell. The Universal machine may be included too, but it is still limited to ONLY THE DATA part, not the part that places the specific machine's description at the beginning of the tape (the operating or machine-specific instructions that the general machine loads). [Why? Because he actually used that to prove that you cannot have EVERY such machine description as part of the information on the tape as a whole that the general machine could interpret -- his 'incompleteness' or undecidability proof.]
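
The core of Turing's undecidability proof is short enough to sketch in code. As an illustration only (the names are mine): assume some total function halts(f, x) could decide whether program f halts on input x; the diagonal program below then does the opposite of whatever is predicted about it, so no such function can exist. Since the self-application case cannot actually be run, the sketch models just the two possible verdicts:

```python
# Diagonalization sketch: suppose halts(f, f) existed. Define
#     def diagonal(f):
#         if halts(f, f): loop_forever()
#         else:           return
# Then diagonal(diagonal) halts iff halts says it doesn't. We model the
# two possible verdicts to show neither can be correct.
def diagonal_behaviour(predicted_to_halt):
    """What diagonal(diagonal) actually does, given the decider's verdict."""
    if predicted_to_halt:
        return False   # it enters the infinite loop, i.e. it does NOT halt
    return True        # it returns immediately, i.e. it DOES halt

for verdict in (True, False):
    # whichever answer the decider gives, the program's real behavior differs
    assert diagonal_behaviour(verdict) != verdict
print("no consistent halting verdict exists for the diagonal program")
```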

And this extends to all AREAS of science that have their own form of 'incompleteness' theorem. This is why I'm arguing that it would be better to just deal with one based on logic that covers all of these areas to be less confusing.

The Pythagoreans realized this 'incompleteness' when they learned of the proof that not all numbers could be expressed as 'rational'. It ended their hope that numbers could be the essence of reality. Their lack of speaking of 'completeness' is still implicit in their conditional expectations. The 'consistency' is also assumed by them by default.

This DOES leave some 'inconsistent' logic (as a meta-logic) as a possible solution though.
Scott Mayers
Member
 
Posts: 326
Joined: 04 Aug 2015


Re: Turing Complete

Postby hyksos on January 4th, 2017, 11:48 pm 

Scott --

Technical words, such as complete or undecidable have very concrete, formal definitions in mathematical logic.

You cannot do what you are doing. You are not allowed to change the definitions of these words by placing quotation marks around them. There are landmark theorems in 20th century mathematics that require those words to have precise definitions.

One such theorem , also written by Kurt Godel, is his Completeness theorem. This should not be confused with the famous 2nd Incompleteness Theorem.

The Completeness Theorem (1929) tells us,

If a sentence (d) follows logically from a set of sentences (S), then there exists a formal proof of (d) from (S).

This theorem assures us that 2+2=4. We can sleep at night knowing that if we do find a proof of something, that proof is correct and true.
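
In the propositional fragment, "follows logically" can even be checked mechanically, by enumerating every truth assignment. The sketch below is my own illustration (the lambda encoding is an assumption); in full first-order logic there is no such finite enumeration, which is part of what makes Godel's Completeness Theorem substantive:

```python
# Semantic entailment S |= d for propositional logic, by brute force:
# d follows from S iff no truth assignment makes all of S true and d false.
from itertools import product

def entails(premises, conclusion, variables):
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False          # found a counter-model
    return True

# S = { p, p -> q }.  Does q follow?  (modus ponens)
S = [lambda e: e['p'], lambda e: (not e['p']) or e['q']]
print(entails(S, lambda e: e['q'], ['p', 'q']))      # True
print(entails(S, lambda e: not e['q'], ['p', 'q']))  # False
```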

I do not blame anyone on this forum for being ignorant. I don't expect everyone I meet at a bar to have finished graduate-level mathematics courses. Nevertheless, my presence in this thread is not to be someone's free math tutor.

Just to show that I'm not being mean or spiteful, let me give you some heads-ups. Wikipedia does have articles on these topics. Big red flashing warning sign: Wikipedia is for reference, not learning. If you attempt to learn Foundations of math or Proof Theory using Wikipedia, you will only lead yourself into pain. When it comes to these logic articles, Wikipedia is written in an "ultra-modern" form. The intended audience appears to be people who have already studied Model Theory. That's totally unfair, and I have no idea why wikipedians feel the need to be so obtuse.

You can get up to the Completeness Theorem of Godel with self-study, if you are serious and committed. But you will never need the Lowenheim-Skolem theorem to get that far. That stuff is deep in the back chapters, (if not in another book entirely.) Again, no idea why the wikipedians are weirdos.

A broader learning track must start from the basis of modern math: All branches of mathematics are reducible to a set of axioms. Those axioms are stated in Set Theory. These things we call "theorems" are actually all the deductive consequences of the axioms of Set Theory.

Remember, prior to modern mathematics, proofs were stylistic and relied on the intuition of the writer (see my note about Euler). In the 20th century, that sort of fancy intuitionistic stuff was turned away from violently. Mathematicians started to precisely define what a "deduction" means. Like what it means in a very precise symbol-by-symbol definition. There is no room for kinda-sorta in modern math. It all sounds very "machine-like". And that is really the point. Turing and Church not only suspected machine-like cranking out of deductions, but took the idea really seriously.

....and that leads back to the top of this thread.
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby Scott Mayers on January 5th, 2017, 12:21 am 

hyksos » January 4th, 2017, 10:48 pm wrote:Scott --

Technical words, such as complete or undecidable have very concrete, formal definitions in mathematical logic.

You cannot do what you are doing. You are not allowed to change the definitions of these words by placing quotation marks around them. There are landmark theorems in 20th century mathematics that require those words to have precise definitions.


Then PLEASE clarify specifically what you believe me to be mistaken on by providing those formal definitions you believe them to mean and then pointing to where you think that I've misunderstood something. I can't argue against something I only presume you already share charitably. By contrast, you're telling me that I don't understand some meaning of specific words in context that I had not even found a need to formally define myself here for trusting the conventional meanings I already understand.

You could define "complete" as "that which is not finished", contrary to the conventional meaning. Are you asserting I'm that kind of 'far-off' in my understanding? They didn't choose the words arbitrarily, like "eigenstate" was for Quantum Mechanics. So what are you assuming about me here, other than that you appear to think I've used cheat notes from bubblegum wrappers for my education?
Scott Mayers
Member
 
Posts: 326
Joined: 04 Aug 2015


Re: Turing Complete

Postby hyksos on January 5th, 2017, 1:43 am 

So with mathematics it is entirely possible to finish courses on the subject, having memorized the formulas and "turned the crank". Get the B on the exam and get the hell outta there, as it were.

You can do this with freshman calc courses, and this probably does happen more than it should. But not everyone is going to major in Algebraic Topology. As you know -- some (many?) people find math revolting.

In contrast to the turn-the-crank approach to math are those rare students in the back row who not only want to memorize the formula -- they want to see the theorems derived. And they "enjoy" this. Further, they may even go back and read about how these things were first derived back in history. When you get to that level of excitement, you have gone beyond the textbook.

It takes a lot of reading and training to "turn the crank" on Godel's theorems. But I sense in you something more. I detect you want to go way deeper than the surface. You want to like, get into Godel's mind. To see the world through his eyes.

That's fine. I found myself reading biographies of the guy on occasion, so I sympathize. (anecdote: he suffered from anorexia in his later life and was hospitalized for it.)
2798b39b43a4059651ed493112cf3cc1.jpg

(The above photo is Einstein and Godel at Princeton.)

There was an intellectual milieu that Kurt Godel swam in. One of the central characters in this epic intellectual drama was David Hilbert.

To understand what Godel himself thought he was doing and what he was working on, you must become intimate with the life and work of David Hilbert. Hilbert had grandiose metaphysical plans in store for mathematics and in general, for the mathematicians whom he inspired. His philosophy was a mixture of Platonism and proto-logical positivism. In some sense, Godel, Alonzo Church, Ernst Zermelo, and Turing were illegitimate sons of Hilbert. He was the Master; they were his padawans.
dhilbert.jpg

Hilbert saw far, and knew years in advance where mathematics must go next. If there is nothing else which you remember in this thread, you must remember this: Godel's 2nd Incompleteness theorem has a primary consequence ---> it showed that the capacities David Hilbert had attributed to mathematics were not correct.

Regarding this topic, everything else we might blabber about is merely nitpicking of details. Remember that.


Then PLEASE clarify specifically what you believe me to be mistaken on by providing those formal definitions you believe them to mean and then pointing to where you think that I've misunderstood something.

Incompleteness of a deductive system is not somehow implicit in the existence of irrational numbers. Incompleteness in mathematics is way, way above it, up in an ivory tower. Irrational numbers and integers are trivialities. They are trivially expressed as classical sets. (More academic: the integers are a proper subset of the set of reals. Want to go spicier? The set of integers is not closed under the division operator.)

Spicy or not, you're playing with pebbles in the sand. Stop thinking like a kid in a linear algebra course. Come up and join us in the clouds.

Before Kurt came on the scene, logicians were already stepping outside of math and examining math like an object unto itself. They began to catalogue "how it works" and then even began to talk about "what it could do".

The gear-turning of Proof Theory involves mind-bending inception proofs where you show (from the outside) that your toy deductive model can prove induction, by using induction itself on its symbols. You literally use induction to prove that your symbolic system can use induction. (If your head isn't spinning, you aren't paying attention.)

Normal people don't think like this. Generally speaking, normal people wouldn't even know why you would think like this. But the men in the photos above? They thought like this -- and worked like this most of their lives.

As far as the "inception" proofs go, David Hilbert knew it was going on and gave it a pet name. If, in your reading, you find out what Hilbert called that, perhaps you will be ready to see, at least briefly, a way into Godel's mind.

These topics are not learned in a weekend. They are not grasped in a month. Even after "passing a course" on them, you have not finished. You will not feel that way. As you get older through the years, you will come back to these topics at different stages of your life. You will be wiser. Each time you revisit, you will have a new perspective on them than before. There are no final answers to these puzzles.
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby Natural ChemE on January 5th, 2017, 7:01 am 

Scott Mayers,

You know what's stupid to the point of being offensive? It appears that hyksos's characterization of the historical understanding of Gödel's work is correct. Gödel was actually that limited; the man literally starved himself to death, and he had no head for things outside of his limited domain of professional expertise.
Gödel's incompleteness theorems, Wikipedia (links omitted) wrote:
    Appeals to the incompleteness theorems in other fields

    Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén (2004); Alan Sokal and Jean Bricmont (1999); and Ophelia Benson and Jeremy Stangroom (2006). Bricmont and Stangroom (2006, p. 10), for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal and Bricmont (1999, p. 187) criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.).

It annoys me to no end that these historically recognizable names do not belong to people who have a modicum of the common sense that we take for granted today. The understanding that you put forward is, by any reasonable metric, correct. The fact that the historical figures didn't see it that way seems wrong.
Natural ChemE
Forum Moderator
 
Posts: 2754
Joined: 28 Dec 2009
Forest_Dump liked this post


Re: Turing Complete

Postby Forest_Dump on January 5th, 2017, 9:04 am 

I have to admit that I typically avoid looking through topics like math and physics, believing it all to be cut-and-dried programmatic/technical instruction in how to assemble data, apply the proper algorithm, and interpret and apply the results to some specified problem (i.e., the deductive approach the proclaimed positivists at this end of the scientific spectrum espouse). Now I could certainly understand that the lack of formal training in formal and informal logic, etc. (i.e., philosophy) would cause problems, because these guys wouldn't grasp things like how to construct and evaluate an argument, acknowledge limits and biases in their knowledge and understanding, etc. However, after some quick browsing through topics here over the last couple of days, I have to admit I am astounded at some of the things I have seen. And some definitely goes well beyond the bounds of anything I would call science into mysticism, without any means of independent verification outside a very (surprisingly) narrow paradigmatic perspective. I have to admit that I am simply shocked.
User avatar
Forest_Dump
Resident Member
 
Posts: 8721
Joined: 31 Mar 2005
Location: Great Lakes Region


Re: Turing Complete

Postby Braininvat on January 5th, 2017, 12:43 pm 

Forest_Dump wrote:
And some definitely goes well beyond the bounds of anything I would call science into mysticism without any means of independent verification outside a very (surprisingly) narrow paradigmatic perspective. I have to admit that I am simply shocked.


Agree. I am going to really start locking threads in Science that drift off into philosophizing or half-baked personal theories, especially if there are indications that posters have only half-absorbed knowledge about the science and/or philosophic issues in question. Or I will simply delete the derailing post. I have a life, too, and don't have time to take everyone aside and explain everything. Novices are certainly welcome, but the forums that are not titled Beginner Science are places where you are expected to have some education (or go and get some) (YouTube videos are not sufficient, in case you're wondering) on the topic at hand. And, for bleep's sake, everyone PLEASE learn the technical terminology before you start expounding at great length. If you are unsure, ask questions. And IF YOU WANT TO QUESTION THE BASIC STRUCTURE OF SCIENCE, or question how it defines certain terms, would you please take it to the Philosophy of Science thread. That's what it is there for.
User avatar
Braininvat
Resident Member
 
Posts: 5839
Joined: 21 Jan 2014
Location: Black Hills


Re: Turing Complete

Postby Forest_Dump on January 5th, 2017, 12:59 pm 

As a foreword, I have no problem if this gets deleted or relocated, BUT:

I am curious as to how or why some of this stuff is done, other than as pure flights of fancy with mathematical algorithms created, it seems to me, purely for some kind of aesthetic appeal. I think of math as being done in order to solve some problem or serve some purpose, but it seems to me that some of this work, done by apparently well-respected authorities, has no clear application; it is uncertain what kind of values could be plugged into the formulas; there is no idea, then, of what something spit out might mean if they did work; and authorities in the field aren't even sure at times if the formulas are complete and functional. What is it other than abstract art?
User avatar
Forest_Dump
Resident Member
 
Posts: 8721
Joined: 31 Mar 2005
Location: Great Lakes Region


Re: Turing Complete

Postby hyksos on January 5th, 2017, 2:51 pm 

I hope that my words are not minced by the regulars here. My post about Kurt Godel is nothing other than profound respect for his genius. I bow before him in the same way I would bow before Aristotle. Many others do.

We mortals can hope to rise to the level of doing exercises in Proof Theory in a textbook that has the material laid out for us in a neat digestible format. We may arrive there on our best days. Godel and Zermelo not only "wrote the textbook", but invented the material in it. As always, they stood on the shoulders of giants. One of those giants was David Hilbert.
Last edited by hyksos on January 5th, 2017, 3:56 pm, edited 1 time in total.
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014


Re: Turing Complete

Postby hyksos on January 5th, 2017, 3:19 pm 

Godel's famous theorem is inextricably intertwined with Hilbert's program. These mathematicians were absolutely concerned with the ability to demonstrate the soundness of deductive systems.
Last edited by hyksos on January 5th, 2017, 3:57 pm, edited 1 time in total.
User avatar
hyksos
Active Member
 
Posts: 1056
Joined: 28 Nov 2014

