
Chomsky Vs Norvig And The Missing Debate About The Nature Of Intelligence



I was reading this article the other day and it got me thinking about a topic that's been touched on in the True Ai, Are We Getting Close Yet? thread: a lot of the heated rhetoric in the community concerning AI really has everything to do with a lack of a common understanding of the word "Intelligence", and maybe even the word "Artificial".

 

Two of the biggest names in AI/Learning, Noam Chomsky--who needs no introduction--and Peter Norvig--Director of Research at Google and well-known author of AI textbooks--have been debating about their favored approaches with increasing acrimony, as is detailed in the linked article ("Norvig vs. Chomsky and the Fight for the Future of AI" by Kevin Gold, tor.com, 11Jun2011). To oversimplify, their positions are:

 

  • Chomsky has spent half a century building an ever more elaborate universal grammar for how languages are put together, as a mechanism for "understanding" language, and argues that humans (at least) use such a grammar to enable language and learning by mapping the parts of language onto concepts and information.
  • Norvig has--quite successfully--shown that you can ignore grammar and, using neural network/statistical learning technology along with massive amounts of data (like what Google hoovers up on the internet), map anything in one language into any other. This approach has been used not only for language translation, but to create the quite impressive Jeopardy-playing contraption "Watson" at IBM and many other technologies that seek to allow machine "understanding".

The crux of this debate and analysis of "who's winning" has really come down to the apparent success of the Neural Network/Statistical Learning approaches versus the general lack thereof from grammar-based approaches, which have been in development for a lot longer. To quote the article:

 

Chomsky, one of the old guard, wishes for an elegant theory of intelligence and language that looks past human fallibility to try to see simple structure underneath. Norvig, meanwhile, represents the new philosophy: truth by statistics, and simplicity be damned. Disillusioned with simple models, or even Chomsky’s relatively complex models, Norvig has of late been arguing that with enough data, attempting to fit any simple model at all is pointless. The disagreement between the two men points to how the rise of the Internet poses the same challenge to artificial intelligence that it has to human intelligence: why learn anything when you can look it up?

 

What occurred to me in reading this is that the two sides are really talking right past one another, mainly because they lack agreement on what the *goal* of all this is. Chomsky is closer to understanding this (which is no surprise to me because he's a cognitive scientist, not just a computer scientist):

 

Chomsky started the current argument with some remarks made at a symposium commemorating MIT’s 150th birthday. According to MIT’s Technology Review,

Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

Norvig is obviously arguing for "going with what works":

 

...with enough data from the internet, you can reason statistically about what the next word in a sentence will be, right down to its conjugation, without necessarily knowing any grammatical rules or word meanings at all. The limited understanding employed in this approach is why machine translation occasionally delivers amusingly bad results. But the Google approach to this problem is not to develop a more sophisticated understanding of language; it is to try to get more data, and build bigger lookup tables. Perhaps somewhere on the internet, somebody has said exactly what you are saying right now, and all we need to do is go find it. AIs attempting to use language in this way are like elementary school children googling the answers to their math homework: they might find the answer, but one can’t help but feel it doesn’t serve them well in the long term.

 

In his essay, Norvig argues that there are ways of doing statistical reasoning that are more sophisticated than looking at just the previous one or two words, even if they aren’t applied as often in practice. But his fundamental stance, which he calls the “algorithmic modeling culture,” is to believe that “nature’s black box cannot necessarily be described by a simple model.” He likens Chomsky’s quest for a more beautiful model to Platonic mysticism, and he compares Chomsky to Bill O’Reilly in his lack of satisfaction with answers that work. (emphasis Buffy)
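To make the "predict the next word from counts alone" idea concrete, here is a minimal sketch of a bigram predictor--my own toy illustration, not Google's actual machinery, which uses far higher-order models, smoothing, and web-scale data:

from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then predict the most frequent successor. No grammar, no meanings --
# just counts, which is exactly the point being argued about above.
corpus = "the cat sat on the mat and the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> 'cat', purely from co-occurrence counts

Scale the corpus up from a dozen words to a trillion and you have the flavor of the "bigger lookup tables" strategy the article describes.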

 

When I think about how to translate these two points of view, I see a huge difference in the two combatants' goals:

 

  • Chomsky is looking for a way to get a base of code to logically describe knowledge about the world: something that the software can actually learn in an abstract way and then apply *precisely* because it is described in an abstract manner. That is, the software actually "understands" how things work and can "use" that knowledge in new and creative ways.
  • Norvig is looking for a way to gather together enough data that programs produce "correct results" while "understanding" is really irrelevant: no matter what you want the software to do, there's an analogue out there that's close enough that you'll get the right result a percentage of the time proportional to the amount of modelling data you can get your hands on.

That sounds to me like two guys who are living on different planets. As a fellow computer scientist, I certainly appreciate what Norvig and others are doing so successfully in getting computers to perform useful jobs that do indeed seem "intelligent". But while Chomsky is derided as "old guard" promoting solutions that "don't work", in my mind his critics are completely missing the point of what Chomsky is trying to do: to understand the nature of "intelligence" in the sense of being able to make leaps in adaptation--leaps that might indeed be achievable by munching enough data, but that he wants to achieve in a way that is fundamentally more efficient.

 

When I think about Norvig's argument I can't help but think of the old saw that if you have enough monkeys typing on enough typewriters, eventually one of them will type the entire works of Shakespeare--the problem being the definition of the word "enough". Norvig's logic really depends on there being--to paraphrase Shakespeare--(almost) nothing new under the sun. Unless you have something "close" to a desired result in your learning set, your neural network is unlikely to produce that result.

 

This really points at the more limited definition of "intelligence" that we get from Turing: if an observer cannot distinguish between a human and a computer, then we can call it "intelligent". Watson playing Jeopardy was an excellent example of this, and we all marveled at how human Watson could be *in real time*. But given enough time spent observing, enough of those hilariously off answers would creep in that the observer would eventually fail Watson. All that Norvig's approach of "just get more data to learn from" does is increase the amount of observing time needed before you hit the hilarious failure.

 

Moreover it cannot be overemphasized that projects like Watson require huge amounts of time and resources to solve an *extremely limited problem set*. Yes, Ken Jennings no longer even has to think because of all the money he won on Jeopardy, but just knowing how to play Jeopardy would not allow him to write his autobiography or be the producer of another game show. It is exactly that inability to *use* all that "knowledge" that is the limitation of the statistical approach.

 

Now the other important point here is that Norvig's preferred technology is not any more of a dead end than Chomsky's: remember that the brain is a huge neural network, and it *does indeed* implement the more sophisticated form of intelligence that Chomsky is seeking to harness. But the point is that in the case of the brain, the neural network is used to *implement* a conceptual framework for that intelligence. Once you start to think of silicon-ware vs. wet-ware as a *platform issue*, you realize the substrate has nothing to do with the program implemented on top of that hardware (in either silicon or neurons) to produce "generalized intelligence." Norvig is using the neuron model directly to sift through data, simply to ensure that within some well-defined problem set, "reasonable" answers are obtained. Going outside the problem set runs into the same problems that the grammar/logic folks ran into decades ago: the recognition that "world knowledge" is needed to achieve "generalized intelligence".

 

That is to say, having neural networks is not sufficient to achieve such generalized intelligence; there's got to be something programmed into the network that actually implements a system for dealing with abstract concepts. Since a brain could do it, it could indeed be all neural nets, but it might take 200 million years of trials to develop, just as it did to get to our brains. Chomsky is in essence arguing that if we can figure out what that "system for dealing with abstract concepts" is, we could short-circuit the process and maybe get it done in our lifetimes, and deal with a nagging hole in the "pure neural network" approach--one that is coming not from the computer or cognitive science fields, but from that other favorite topic of Chomsky's: public policy, which I will get to in a second.

 

Having spent quite a bit of time with both technologies, I have to say I get really tired of the debate, because as I see it, any true generalized intelligence is going to require BOTH approaches. Neural networks are excellent for tuning and optimizing behavior using real-world feedback loops to implement solutions for limited problem sets. But if you're going to put the big pieces together, you absolutely are going to need logical/semantic programming.

 

The Public Policy issue that has flared recently has to do with how we deal with "robots" that are autonomous. The two most notable examples are self-driving cars and military drones. With both there is an increasing desire to have them operate without human intervention, either because the human cannot be trusted (e.g. a driver who's had too many to drink) or because human intervention is increasingly impractical (military drones needing to operate without human input due to communications delays). The question becomes: legally, morally, and in terms of outcomes, when can we ENTIRELY trust the computer to "do the right thing?" Isaac Asimov famously posited that we needed to logically program in his Laws of Robotics, but it's not entirely clear how it's possible to merge such logic into a black-box neural network with any assurance that the logic would be obeyed, when the network could hit some data that simply avoids all the tests for adherence to that logic.
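To illustrate the problem (not solve it), here is a minimal sketch of the obvious "bolt the logic on afterwards" approach: let an opaque learned policy propose an action and let hand-written rules veto it. Everything here--the state fields, thresholds, and rules--is invented for illustration; the weakness is exactly the one described above, because the guard can only test the conditions somebody thought to write down.

from typing import Callable, Dict, List

def neural_policy(state: Dict) -> str:
    # Stand-in for an opaque trained network mapping state -> proposed action.
    return "accelerate" if state.get("gap_m", 0.0) > 10.0 else "brake"

# Explicit, human-readable constraints applied to the network's proposal.
SAFETY_RULES: List[Callable[[Dict, str], bool]] = [
    lambda s, a: not (a == "accelerate" and s.get("pedestrian_ahead", False)),
    lambda s, a: not (a == "accelerate" and s.get("gap_m", 0.0) < 5.0),
]

def act(state: Dict) -> str:
    proposal = neural_policy(state)
    if all(rule(state, proposal) for rule in SAFETY_RULES):
        return proposal
    return "brake"   # conservative fallback when any explicit rule vetoes

print(act({"gap_m": 50.0, "pedestrian_ahead": True}))   # vetoed -> 'brake'

Any situation not captured by those two lambdas sails straight through, which is the nagging hole: the rules constrain only what they can see.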

 

Unfortunately, it seems to me that that breakthrough of merging logical/conceptual frameworks with statistically based modules is what we really need before we have "real artificial intelligence."

 

People locked into such scientific battles like to hear "you're both right" even less than "he's right and you're wrong." But let's hope for (and lobby for!) just that sort of change in thinking.

 

Opinions?

 

Colorless green ideas sleep furiously, :phones:

Buffy


my opinion is that without a human-equivalent body, machine intelligence must remain a mimic. hoovering up data, no matter how much time is allotted, and putting it in a pre-made container is building from the middle up. human intelligence, as well as human bodies, are made from the bottom up. as humans, when we are born our feedback loops are rather exclusively bodily sensations and even though our neural networks are hardwired for intelligence, -and language- that intelligence only starts operating even at turing's level after quite a few years. (and not always then.) our brains evolved in bodies already equipped for smell, sight, hearing, taste & touch, and that is an altogether different thing than trying to stick senses onto a neural net, even if that is achieved. before machines can think as we do, they will have to first hunger as we do, pain as we do, laugh as we do, and love as we do.

 

there is nothing either good or bad, but thinking makes it so. :phones:


my opinion is that without a human-equivalent body, machine intelligence must remain a mimic.

I'm going to try to get you to explain that further. Norvig would look at this statement and ask "where the heck did that come from?" because I think you're after a goal that is at least one or two levels of abstraction above Chomsky (even though Chomsky would probably get what you were trying to say!).

 

Let me break a few things out:

 

...hoovering up data, no matter how much time is allotted, and putting it in a pre-made container is building from the middle up. human intelligence, as well as human bodies, are made from the bottom up.

 

My interpretation of what you're trying to get at with bottom, middle, and top is this:

 

  • Bottom = sensory input
  • Middle = neural net that processes the input
  • Top = conceptual understanding of the meaning of the inputs

 

If that's what you mean, then I think I agree: that in fact maps onto how the brain actually developed over the last 250 million years. One of the things I'm arguing here is that neural nets are actually really good signal-processing devices, and if you look at the self-driving cars, all of the "sensory inputs" are indeed processed by neural net software, and they are actually pretty good. It took lots of training to get to the point where the camera and radar and other inputs can simply say "yah, the road goes that'a way", but the most recent versions are really quite amazing (although it's notable that the designers took the shortcut of using not just stereo vision cameras but also non-visible wavelengths and radar, because the simple nets in common usage learn faster when they have more orthogonal data inputs).

 

I argue that the statistical/neural net approach is really limited by the difficulty of applying the current level of technology to more general tasks that deal with that top cognitive level. Watson's Jeopardy skill, when you think about it, was really straightforward: use statistics to find your way from the words in the question to an answer. (Notably, Watson still had to use grammar knowledge to put together answer responses that were actual English!) Mostly the statistics found the web page with the answer on it. But it did not "parse" the question to try to "understand" it beyond keys like "What" (say what it is) or "When" (say when it happened)--and that's about the semantics of the response more than about finding the answer.
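As a caricature of "the statistics found the web page with the answer on it", here is a toy word-overlap retriever. It is nothing like Watson's actual architecture--just an illustration of scoring candidate passages by words shared with the clue, with no parsing and no understanding:

# Score each candidate passage by raw word overlap with the clue and
# return the best-scoring one. The passages here are invented stand-ins
# for "web pages" a statistical system might have hoovered up.
passages = {
    "Chicago": "O'Hare airport is named for World War II hero Butch O'Hare "
               "and Midway airport is named for the Battle of Midway",
    "Toronto": "Pearson airport is named for Lester Pearson and Billy Bishop "
               "airport sits on the Toronto Islands",
}

def best_match(clue: str) -> str:
    clue_words = set(clue.lower().split())
    return max(passages,
               key=lambda k: len(clue_words & set(passages[k].lower().split())))

clue = ("Its largest airport is named for a World War II hero; "
        "its second largest for a World War II battle")
print(best_match(clue))   # -> 'Chicago', found by overlap, not by understanding

Starve the retriever of a close-enough passage, or phrase the clue obliquely, and you get the famous "What is Toronto?" class of failure.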

 

With the car, though, "how to drive" is an interaction between the sensory input/feedback (how *much* to turn the wheel, which has a net in it) and the logical rules needed to keep from crashing ("don't go down a road that's thinner than the car", "keep the car between the lines")--rules that are all stated as logical/semantic constraints and that interact, on input and output, with the nets that "see" and "do". A cartoon of that split is sketched below.
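Here's the cartoon, with every name and threshold invented purely for illustration: a "perceive" step standing in for the trained nets that turn raw sensor frames into estimates, and a "decide" step applying the explicit rules to those estimates.

from dataclasses import dataclass

@dataclass
class Perception:
    lane_offset_m: float    # how far the car sits from the lane center
    road_width_m: float     # estimated width of the road ahead
    car_width_m: float = 2.0

def perceive(sensor_frame: dict) -> Perception:
    # Stand-in for the neural "see" layer: sensors -> scene estimates.
    return Perception(lane_offset_m=sensor_frame["offset"],
                      road_width_m=sensor_frame["width"])

def decide(p: Perception) -> str:
    # Explicit "what to do" rules operating on the net's outputs.
    if p.road_width_m < p.car_width_m:
        return "stop"                           # don't go down a road thinner than the car
    if abs(p.lane_offset_m) > 0.5:
        return "steer back toward lane center"  # keep the car between the lines
    return "continue"

print(decide(perceive({"offset": 0.8, "width": 6.0})))   # -> 'steer back toward lane center'

The perceive layer can be tuned statistically all day long; the decide layer is where the semantic rules live, and in a real system the two feed back into each other rather than running in a single pass like this.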

 

So the thing to understand is that the top level--as defined by the AI folk--is a set of rules about "what" to do.

 

I think when you say:

 

...as humans, when we are born our feedback loops are rather exclusively bodily sensations and even though our neural networks are hardwired for intelligence, -and language- that intelligence only starts operating even at turing's level after quite a few years. (and not always then.) our brains evolved in bodies already equipped for smell, sight, hearing, taste & touch,

 

...you're describing the stack as I've clarified it above, just within the context of learning, and expressing an opinion about nature/nurture that isn't really relevant to the Turing Test, since the test isn't temporal and you can adapt it to any stage of human development, for example: "here's a real baby or a baby robot: based on its ability to crawl/squeal/poop, can you tell if it's human or robot?"

 

And while it's of course true that the human input/output sensory mechanisms are very efficient because they're highly evolved, I think it's arguable that with things like cochlear implants and artificial eyes we are on our way to cybernetic replacements of exactly those systems that you've made the base of the human intelligence pyramid.

 

So, when you say:

 

...and that is an altogether different thing than trying to stick senses onto a neural net, even if that is achieved. before machines can think as we do, they will have to first hunger as we do, pain as we do, laugh as we do, and love as we do.

 

I'd disagree that the "natural" versions of those can't be replaced: I've seen my own brain rewire itself to artificial contacts and hearing aids in amazingly rapid fashion (a matter of months), and pushing the non-native hardware a little further up the processing stream is not that hard to extrapolate. But the next level up--"what do I do with the inputs and what outputs do I send", no matter whether it's meat-ware or silicon-ware--is still the higher-level cognitive/conceptual "top" level.

 

But when you mention "hunger...pain...laugh(ter)," I think you're talking about something even higher than that, which is "why we do what we do". And at that level I think you're exactly on point with a KEY element of intelligence that we simply have no way to pick apart. One that Chomsky would say "I'm not trying to deal with that," and Norvig would say "huh? who cares?"

 

So you've hit on what AI folk have really punted on for ages, in spite of the fact that it's one of the MOST popular themes in science fiction!

 

Can I program a computer to "feel" not in the sensory meaning of the word, but to evoke an emotion? What the heck IS an emotion anyway? I can come up with all sorts of arguments why we evolved them and why they're useful, but they are the MOST abstract cognitive concept we know of. They might be seen as motivations that drive meta-meta-rules. There may be other interpretations. But figuring out how they interact with all the levels of cognition is essential to figuring out how to create what we'd call in this household "real real intelligence."

 

What do you think?

 

there is nothing either good or bad, but thinking makes it so. :phones:

To me it is a prison! :phones:

Buffy


my opinion is that without a human-equivalent body, machine intelligence must remain a mimic.

I'm going to try to get you to explain that further. Norvig would look at this statement and ask "where the heck did that come from?" because I think you're after a goal that is at least one or two levels of abstraction above Chomsky (even though Chomsky would probably get what you were trying to say!).

 

our intelligence is rooted not in what we answer, but in what we ask.

 

Let me break a few things out: ...

 

give me a bit to try and put your things back together. just woke and have to get my juices flowing with a pot o' joe and the sunday morning political news. :coffee_n_pc:

 

As soon as questions of will or decision or reason or choice of action arise, human science is at a loss. :phones:


...

Let me break a few things out:

 

 

So, when you say:

...and that is an altogether different thing than trying to stick senses onto a neural net, even if that is achieved. before machines can think as we do, they will have to first hunger as we do, pain as we do, laugh as we do, and love as we do.

 

 

I'd disagree that the "natural" versions of those can't be replaced: I've seen my own brain rewire itself to artificial contacts and hearing aids in amazingly rapid fashion (a matter of months), and pushing the non-native hardware a little further up the processing stream is not that hard to extrapolate. But the next level up--"what do I do with the inputs and what outputs do I send", no matter whether it's meat-ware or silicon-ware--is still the higher-level cognitive/conceptual "top" level.

 

But when you mention "hunger...pain...laugh(ter)," I think you're talking about something even higher than that, which is "why we do what we do". And at that level I think you're exactly on point with a KEY element of intelligence that we simply have no way to pick apart. One that Chomsky would say "I'm not trying to deal with that," and Norvig would say "huh? who cares?"

...

To me it is a prison! :phones:

Buffy

 

i am talking not only about something higher, but something lower as well. i am talking -i think- about the whole of it. i'm talking about -i think- something akin to hofstadter's strange loops. minding of course that dougie is pretty cagey about affirming or denying whether or not machines can achieve a huneker score on a par with humans, he is dealing with it and showing that he -and per se we- care.

 

I would like to understand things better, but I don’t want to understand them perfectly. :phones:


hello? well, just plopping in with some afterthoughts. have you read i am a strange loop buffy? i gave my copy to a friend not long after i read it but your questions got me hunekering to review it. i have put in the call for its return, but -to apply hofstadter's law- it's probably going to take longer than i expect even when i consider hofstadter's law.

 

while chomsky & norvig may well be 2 of the biggest names in AI, hofstadter may well be 1 of the biggerest. his quotes online are few and far between and passages of his books fewer and fartherer betweener than that. short of each of us having his text at hand, discussing it is a near exercise in futility.

 

anywho, that may be just as dougie would have it, so i'll leave you with a couple of -hopefully- on topic quotes from the lean pickings.

 

Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not. ~ Douglas R. Hofstadter

 

Replying to following question by Deborah Solomon in Questions for Douglas Hofstadter: "Your entry in Wikipedia says that your work has inspired many students to begin careers in computing and artificial intelligence." He replied "I have no interest in computers. The entry is filled with inaccuracies, and it kind of depresses me." When asked why he didn't fix it, he replied, "The next day someone will fix it back." Douglas Hofstadter @ wikiquote

Edited by Turtle

I was reading this article the other day and it got me thinking about a topic that's been touched on in the True Ai, Are We Getting Close Yet? thread: a lot of the heated rhetoric in the community concerning AI really has everything to do with a lack of a common understanding of the word "Intelligence", and maybe even the word "Artificial".....

 

 

 

People locked into such scientific battles like to hear "you're both right" even less than "he's right and you're wrong." But lets hope for (and lobby for!) just that sort of change in thinking.

 

Opinions?

 

My feeling is that human intelligence is a hodgepodge of data-processing stratagems aimed at survival/procreation in the peculiar, intensely social & intellectually competitive environment that humans find themselves evolving in. It appears that, arbitrarily, the sum of these neurological survival tricks may be applied - perhaps clumsily - to mathematics & other abstractions, but there’s no reason to think that the human is an ideal thinking machine from an engineering perspective.

 

And I agree with Turtle that the study of human intelligence must be a study of the human body as well as the brain. It's turning out - I've come to understand - that the mind-body distinction in a human is not nearly so neat and clean as the blood-brain barrier.

 

Seems to me there are two branches of AI emerging out of computer science that are each acting under different philosophies, with different goals, and each - as you observe - talking right past each other in their efforts to argue.

 

One aspires to emulate human intelligence, in all its sluggish glory. This is a noble pursuit, a tool to help us understand ourselves. For, the more we know of the human condition, the better can we actuate quality of life for everyone.

 

The other aspires to build idealized data-processing systems regardless of how the humans pull it off. This, again, is noble. For, such systems could augment our own endowments by homing in on our many inherited blind-spots and prejudices.

 

An engineered intelligence, not encumbered with ego or zeal, should immediately see how these two philosophies need one another more than they need to argue.

 

 

Colorless green ideas sleep furiously, :phones:

Buffy

 

Great topic! Thanks, Buffy. I follow these fields and intend to read your links with more careful detail when I have more time. Fun stuff.


Hi Buffy, long time no see.

 

Opinions?

Why bother with the structural aspects of analytical proofs when the status quo is based on axiomatic proofs--one-offs in isolation, with only the structural aspects you require to make your point being considered? Axiomatic proofs just reveal a tissue-thin structure with many holes and conflicts that appear when you attempt to integrate them into a combined structured form.

 

http://en.wikipedia.org/wiki/Euclidian_geometry

 

Axiomatic formulations

Geometry is the science of correct reasoning on incorrect figures.

—George Pólya, How to Solve It, p. 208

 

Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time.[48] It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear as it was discovered that the parallel postulate was not necessarily valid and its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean.

Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate.

...

 

Constructive approaches and pedagogy

 

The process of abstract axiomatization as exemplified by Hilbert's axioms reduces geometry to theorem proving or predicate logic. In contrast, the Greeks used construction postulates, and emphasized problem solving.[56] For the Greeks, constructions are more primitive than existence propositions, and can be used to prove existence propositions, but not vice versa. To describe problem solving adequately requires a richer system of logical concepts.[56] The contrast in approach may be summarized:[57]

Axiomatic proof: Proofs are deductive derivations of propositions from primitive premises that are ‘true’ in some sense. The aim is to justify the proposition.

Analytic proof: Proofs are non-deductive derivations of hypothesis from problems. The aim is to find hypotheses capable of giving a solution to the problem. One can argue that Euclid's axioms were arrived upon in this manner. In particular, it is thought that Euclid felt the parallel postulate was forced upon him, as indicated by his reluctance to make use of it,[58] and his arrival upon it by the method of contradiction.[59]

 

Andrei Nikolaevich Kolmogorov proposed a problem solving basis for geometry.[60][61] This work was a precursor of a modern formulation in terms of constructive type theory.[62] This development has implications for pedagogy as well.[63]

 

If proof simply follows conviction of truth rather than contributing to its construction and is only experienced as a demonstration of something already known to be true, it is likely to remain meaningless and purposeless in the eyes of students.

Edited by LaurieAG

  • 3 weeks later...

hello? well, just plopping in with some afterthoughts. have you read i am a strange loop buffy? i gave my copy to a friend not long after i read it but your questions got me hunekering to review it. i have put in the call for its return, but -to apply hofstadter's law- it's probably going to take longer than i expect even when i consider hofstadter's law.

Yep!

 

Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not. ~ Douglas R. Hofstadter

 

That's kind of my point. I admire Chomsky's modesty in trying to say that he's really only trying to solve a very specific problem, whereas Norvig's fans (even if he might not agree with this more extreme restatement) would froth that by definition, neural networks are all there is of AI.

 

I would argue that they do a good job of mimicking solutions to very limited problems, and with simulated multi-million-year evolution might provide a broader style of intelligence, but that's as impractical as waiting for the monkeys to produce Hamlet, and in all likelihood would produce something different from what you intended at the start.

 

What other aspects of intelligence do you espy?

 

When you're not looking at it, this sentence is in Spanish, :phones:

Buffy


Why bother with the structural aspects of analytical proofs when the status quo is based on axiomatic proofs--one-offs in isolation, with only the structural aspects you require to make your point being considered? Axiomatic proofs just reveal a tissue-thin structure with many holes and conflicts that appear when you attempt to integrate them into a combined structured form.

 

I would say that the status quo--which is in turmoil because of this debate--is having problems exactly because of the conflict between the analytical and axiomatic, with one side attempting to make your point that the axiomatic is "a tissue thin structure", completely missing the point that the axiomatic is about understanding, and--I might add--essential to critical thinking.

 

But I agree that rather than giving demerits for style, sticking to the substance of--and what we can learn from--the debate is far more interesting.

 

Relying on words to lead you to the truth is like relying on an incomplete formal system to lead you to the truth. A formal system will give you some truths, but as we shall soon see, a formal system, no matter how powerful—cannot lead to all truths, :phones:

Buffy


My feeling is that human intelligence is a hodgepodge of data-processing stratagems aimed at survival/procreation in the peculiar, intensely social & intellectually competitive environment that humans find themselves evolving in. It appears that, arbitrarily, the sum of these neurological survival tricks may be applied - perhaps clumsily - to mathematics & other abstractions, but there’s no reason to think that the human is an ideal thinking machine from an engineering perspective.

I like this analysis, because AI advocates do seem to proceed from the axiom that our *current* intelligence is somehow superior, or an end point. In certain respects, it's probably fair to say that human intelligence was at its zenith just past the end of the last ice age. :blink:

 

This is probably one of the biggest problems in AI, having caused endless problems with Chomsky's linguistics--aspects of language that are highly evolved are quite logical, but with so many non-deterministic twists that make major elements completely illogical--as well as learning systems with their spectacular mimicking fails:

Alex Trebek: "U.S. Cities: Its largest airport is named for a World War II hero; its second largest for a World War II battle."

Watson: "What is Toronto?"

 

Don't you wish there was a knob on the TV to turn up the intelligence? There's one marked 'Brightness,' but it doesn't work, :phones:

Buffy


A formal system will give you some truths, but as we shall soon see, a formal system, no matter how powerful—cannot lead to all truths, :phones:

Unfortunately that's where axiomatic proofs work best: it just requires one identified untruth to bring the foundations of any formal system, no matter how powerful, into question.


  • 1 month later...

Read Data:

 

“Buffy” I read your OP.

 

If (Provocation) > (apathy) then Gosub Be Annoying

 

 

 

:Be Annoying

 

 

Despite that, I will offer:

 

I thought the Turing Test should be the Gold Standard, until I realised that many humans wouldn’t pass the Turing test.

 

I think that the creation of AI would be a Game Changer. But will it happen?

 

The path to it? Brute force or child-learning? I think it will be like fusion power. Always 20 years away. And Fusion power is much simpler than AI. And fusion power is, in 2012, twenty years away.

 

The path to it? No one has a clue.

 

Technology has a way of being unpredictable. Things that you thought would take twenty years happen next year (like a bipedal robot). Things that you thought would happen next year never happen.

 

I realise I’ve just said nothing at great length. Damn that Turing test!

 

(end if)


i would like ai if it was friendly, and also listened to your requests

 

Unfortunately, usually when it's friendly it is insincere, and when it listens, it rarely hears....

 

Life is made up of constant calls to action, and we seldom have time for more than hastily contrived answers, :phones:

Buffy


  • 5 months later...

I like this analysis, because AI advocates do seem to proceed from the axiom that our *current* intelligence is somehow superior, or an end point. In certain respects, it's probably fair to say that human intelligence was at its zenith just past the end of the last ice age. :blink:

 

This is probably one of the biggest problems in AI, having caused endless problems with Chomsky's linguistics--aspects of language that are highly evolved are quite logical, but with so many non-deterministic twists that make major elements completely illogical--as well as learning systems with their spectacular mimicking fails:

 

 

Don't you wish there was a knob on the TV to turn up the intelligence? There's one marked 'Brightness,' but it doesn't work, :phones:

Buffy

 

Your bit about the Norvig / Chomsky debate reminded me of a quote by Ashleigh Brilliant: 'Life is the only game whose sole purpose is to discover the rules of the game.'

 

I think Chomsky's position is simply the end result of what intelligence started out as advocated by Norvig - namely simple, single-celled life forms versus highly developed multi-celled ones, as you so rightly point out. (See my approach to language for instance, which returns to simplicity - is there anybody out there in AI land that might find this approach helpful apart from Norvig? See also the work of Rex Jung, Simone Ritter et al on creativity and problem solving, which has found that hard work (deep concentration) ensures new discoveries are ignored but a playful attitude ensures new data is discovered.) paigetheoracle/logic-lists-english and I hope this link works! (otherwise manually seeking out the address will get you there anyway, via Pinterest.com). :o

 

My point, if you don't see it immediately, is that Norvig is the small child standing on the shore of the sea of discovery and Chomsky the old man who has crossed this sea a thousand times, and so has forgotten that we all start as explorers (children) and only end up knowing everything (all the rules) at the end of the journey. Chomsky is like a Sunday Driver - so obsessed with his vehicle's appearance that he forgets it's just meant to get him from A to B. Norvig, and indeed the inventor of the World Wide Web, Tim Berners-Lee, both understand that language is for the spreading of memes (ideas) and is not sacred in itself, any more than the body is in relation to genes: rough and ready, quickly spread, rather than smooth, slowly spread ones (quantity over quality). B)

Edited by paigetheoracle
