
Chomsky Vs Norvig And The Missing Debate About The Nature Of Intelligence



Hi Buffy and Everybody ;)

This thread is rather complicated, not to mention its issues, and I'm happy that it exists... Thank you, Buffy.

 

I'm at a loss for where to start and what to say, so I'll just begin somewhere:

 

The real problem is that “the problem” is actually unbelievably complex;

I believe that is a good approximation!

 

Chomsky's goal is based upon the idea that he can understand reality! Whether or not that is a fair description, understanding is possible, and that is an undeniable but poorly understood part of Reality.

 

Norvig's goal is based upon the idea that the expectation of “future circumstances” can be statistically calculated as a consequence of present circumstances.
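To make that stance concrete, here is a minimal sketch of the kind of statistical prediction involved: estimate the most likely "next circumstance" purely from counts of past ones. The corpus and names are invented for illustration; this is not Norvig's actual code.

```python
from collections import Counter, defaultdict

# Invented toy corpus; the "circumstances" here are just words.
corpus = "the dog barks the dog sleeps the cat sleeps".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most expected successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> 'dog' (seen twice, vs. 'cat' once)
```

No understanding in sight anywhere, and yet the expectations it produces are perfectly serviceable, which is exactly the point of contention.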

Hi Norvig! Please answer truthfully to both my following questions in their order:

1 What is the answer to my second question?

2 What was not the answer to my first question?

 

I have proved that “What is” is “what is” is the only correct explanation of reality.

It is a static view. (x=x)

Other people claim a dynamic view is necessary. (x="x")

 

I belong to neither camp...

I belong to both!

Roughly:

1 x=x

2 x="x"

If a=a then a has static existence.

If a="a" then a has dynamic existence.

If a=a AND a="a" then a has continuous dynamic existence.

(One of these days I'll work out what it means.)

 

Life is “success”: a collection of circumstances (macroscopic collections of events) which provide valid statistics for the behavior that yields survival.

And survival of life is continuous and dynamic.

 

As I have said, explanations are procedures (think computer programs) which produce survival-enhancing behavior. Understanding is an unnecessary figment of one's imagination.

You unnecessarily imagine it to be.


Annsi: Thank you for joining in!

 

There's lots of huffing and puffing about this topic, so putting names on the positions gets a little dangerous, and because the participants don't agree on what they're arguing about, silly bickering ensues.

 

Just wanted to clarify some issues you bring up:

 

Chomsky got into intelligence because he's a linguist, and since he's still primarily trying to solve issues in linguistics, it's a little unfair to say he's "approaching intelligence as if it were solely about language."

 

But more than that, I think you're missing the reason he got into "intelligence" at all: one of the most fundamental realizations was that "natural language understanding," which relied heavily on Chomsky's grammars, could not really be achieved without two things:

 

  1. World knowledge in the form of mappings of words onto concepts
  2. An integral adaptive ability to perform conceptual reasoning via inference and deduction (see the sketch just below)
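To make the two requirements concrete, here is a toy sketch of both; every word, concept, and fact in it is invented for illustration:

```python
# (1) "World knowledge": a mapping of words onto concepts.
word_to_concept = {
    "dog": "canine",
    "poodle": "canine",
    "cat": "feline",
}

# (2) Conceptual reasoning: a few is-a rules to infer and deduce with.
is_a = {
    "canine": "mammal",
    "feline": "mammal",
    "mammal": "animal",
}

def infer_categories(word):
    """Deduce everything a word's concept is, by walking the is-a chain."""
    concept = word_to_concept[word]
    chain = [concept]
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

print(infer_categories("poodle"))  # -> ['canine', 'mammal', 'animal']
```

Scaling both pieces up to cover real-world knowledge is precisely what proved out of reach at the time.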

 

In the early years of AI/NL, both of these proved insurmountable with existing technology, and of course the latter is a central theme of your (very well thought out!) definition of general intelligence. So, in pursuing his primary goal of coming up with a universal model for language, he *had* to get into the much broader subject of intelligence.

 

The folks who insist that language is *necessary* for intelligence are just speciesist (and usually racist) idiots who have other agendas, but it's an interesting issue to think through *where* any innate language ability in humans came from, and how far back its precursors evidenced themselves. Bee dances and bird calls certainly qualify as "languages" in the most rudimentary sense, and while the grammars are simplistic, they certainly have them.

 

This brings up the recursion debate, and I think Chomsky got off into the weeds with any insistence that it was *necessary* for language, but I think a lot of that came from goading by the skeptics. When you look at language as an evolutionary product, it's impossible to delineate some magic line in the progression from bee dances where we get "true language". That's the sort of classification hokum that is fake science in my mind.
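For readers wondering what "recursion" buys a grammar, here is a toy sketch: a rule that can re-invoke itself licenses unboundedly nested sentences. The grammar and vocabulary are invented, and this is a cartoon of the idea, not Chomsky's formalism:

```python
import random

random.seed(1)  # make the toy output repeatable

def sentence(depth=0):
    # Base case: a flat clause, like a fixed call in a repertoire.
    if depth >= 2 or random.random() < 0.5:
        return "the dog barks"
    # Recursive case: a clause embedded inside another clause,
    # which is the property at stake in the recursion debate.
    return "the cat knows that " + sentence(depth + 1)

for _ in range(3):
    print(sentence())
```

A bee dance or a bird call repertoire has no rule like the recursive case, which is roughly why it gets treated as the dividing line, arbitrary as that line may be.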

 

I agree that Norvig has not done anything to address your general intelligence definition, but it's the mindless fanboy belief of his followers that has driven things off into the notion that pure statistical approaches are *good* solutions to any AI problem, when practical application of "universal grammar" (using it as a symbol for all logically designed structures) almost always saves time.

 

I still agree with Dick though that such "grammars" are unnecessary and in fact biological evolution is proof of it. And while I think that Chomsky has some justification for saying that universal grammars are innate (albeit some features being unused like recursion), it's obvious that they evolved statistically! :cheer:

 

 

Though I am not naturally honest, I am so sometimes by chance, :phones:

Buffy


Annsi: Thank you for joining in!

 

Here btw is an interesting language phenomenon. In an unfamiliar word - like my name - people tend not to be conscious of where the double consonant is. They are almost invariably aware there is a double consonant, but then my name is written with almost even distribution as "Annsi", "Anssi", or "Ansii" :D

 

This brings up the recursion debate, and I think Chomsky got off into the weeds with any insistence that it was *necessary* for language, but I think a lot of that came from goading by the skeptics. When you look at language as an evolutionary product, it's impossible to delineate some magic line in the progression from bee dances where we get "true language". That's the sort of classification hokum that is fake science in my mind.

 

Exactly. Even forgetting the perspective of evolutionary product and viewing language as a method with which we interact with the world, I can't really define where gesturing/whistling/performing etc stops, and "language" begins.

 

Have you ever heard Chomsky even try to define what he means by "language"? It seems to me that he doesn't see a reason to define it, and instead just chose to see "language" as whatever looks like "language" to him intuitively, and started to generalize it. Which basically means that "anything that can be described by Universal Grammar" is to be taken as "language". Which just means it's a trivial tautology to claim "any language can be described by Universal Grammar". That's pretty silly circular reasoning, but I'm afraid it is terribly common in many fields, and we are all guilty of it from time to time.

 

But so, you commented that Chomsky got into intelligence because he realized it was necessary for solving "natural language understanding". I am left wondering, what does he hope to gain from generalizing language in the form of Universal Grammar?

 

Let's consider the fact that his "universal" rules apply only to some sub-set of "semantical communication methods", and let's consider that we can simply choose to call that sub-set "human languages" (sorry, Pirahãs, let's call your thing some sort of "weird whistling" instead :), so now what have we learned about semantical understanding in a broader sense?

 

And if we can establish that the answer is "nothing", why isn't Universal Grammar viewed simply as an analysis of (most of) the currently existing human languages, rather than as something "universal" to semantical understanding at all? This just brings me back to wondering what exactly Chomsky hopes to gain...

 

I agree that Norvig has not done anything to address your general intelligence definition, but it's the mindless fanboy belief of his followers that has driven things off into the notion that pure statistical approaches are *good* solutions to any AI problem, when practical application of "universal grammar" (using it as a symbol for all logically designed structures) almost always saves time.

 

Well, like you imply in the OP, I think even from the practical perspective we are finally at the tipping point. Statistical approaches certainly can lead to general solutions; I think this is a philosophically sound stance because it is pretty clear all of our understanding must be fundamentally based on inductive ideas about our "past". All other types of reasoning necessarily stand on top of inductively arrived ideas. But of course it is important to understand that Norvig is not really after a general solution; his statistics are based on a number of pre-defined notions (as far as I know, he defines what constitutes "words" and "sentences" and so on).
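As a toy illustration of how such pre-defined notions shape the statistics before any counting starts, compare two invented tokenizers; neither is Norvig's actual choice, but each bakes in a different answer to "what is a word":

```python
import re

text = "Mr. Smith can't wait. He left at 3:30."

# Pre-defined notion #1: a word is whatever whitespace separates.
print(text.split())
# ['Mr.', 'Smith', "can't", 'wait.', 'He', 'left', 'at', '3:30.']

# Pre-defined notion #2: a word is a maximal run of letters.
print(re.findall(r"[A-Za-z]+", text))
# ['Mr', 'Smith', 'can', 't', 'wait', 'He', 'left', 'at']
```

Every statistic computed downstream inherits whichever of these choices was made, so the approach is general only relative to those notions.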

 

In a funny way though, if his algorithms were general enough, it would be entirely possible for the system to effectively form a "semantical internal model" of the data. But that model would be completely different from how any of us understands languages or the meanings of words. It's rather as if words themselves were the elemental entities of its universe.
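A toy version of what such an internal model might look like, with an invented corpus: each word is represented only by the words that occur next to it, so the model's whole "universe" really is just words:

```python
from collections import Counter, defaultdict

corpus = "the dog barks . the cat meows . the dog sleeps".split()

# Represent each word purely by its immediate neighbors.
neighbors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            neighbors[word][corpus[j]] += 1

# "dog" and "cat" end up with similar neighbor profiles (both follow
# "the") without the model ever knowing what a dog or a cat *is*.
print(neighbors["dog"])  # Counter({'the': 2, 'barks': 1, 'sleeps': 1})
print(neighbors["cat"])  # Counter({'the': 1, 'meows': 1})
```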

 

The countdown to someone suggesting that we are also really living inside a huge word processor in t - 4, 3, 2, 1...

 

I still agree with Dick though that such "grammars" are unnecessary and in fact biological evolution is proof of it. And while I think that Chomsky has some justification for saying that universal grammars are innate (albeit some features being unused like recursion), it's obvious that they evolved statistically! :cheer:

 

I must wonder, though, what the purpose of Chomsky's hypothesizing about that question is. It seems this would be of more interest to the field of biology, since from the point of view of intelligence it makes no difference whether this ability is innate or learned. I mean, even if we don't have any biological short-cuts hard-wired in us, clearly it would be equally possible to adopt the concept via general learning mechanisms. And I would go so far as to say that the fact that it takes years for us to pick up correct grammar is excellent evidence towards the idea that we do in fact have to simply learn it, like everything else.

 

Unless by "innate" he means simply that we have enough learning capacity to learn something that abstract, in which case I think the question of innate vs. learned is just semantics, no pun intended.

 

-Anssi


Tell me the name of a color that rhymes with 'bed'?

 

OK, I smell the gears turning....

 

Bingo, you are correct.

 

==

 

Is not the essence of intelligence proper selection from a set of possibilities (either real or abstract) that leads to long-term survival and reproduction? For a non-living entity, the process of 'selection' can be replaced by the process of mathematical 'step-functions'; see here:

 

http://artint.info/html/ArtInt_180.html
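A minimal sketch of "selection by step function" as I read the linked page: a linear-threshold unit whose hard threshold picks one of two outcomes from its weighted inputs. The weights below are invented to implement a simple AND, purely for illustration:

```python
def step(x):
    """Hard threshold: the mathematical 'selection' between two outcomes."""
    return 1 if x >= 0 else 0

def unit(inputs, weights, bias):
    """A single linear-threshold unit: weighted sum, then a step."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

# With these invented weights the unit "selects" 1 only when both
# inputs are 1 (logical AND); all other inputs fall below threshold.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", unit([a, b], weights=[1.0, 1.0], bias=-1.5))
```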

Edited by Rade

  • 3 months later...

Humans are "hard-wired" for language, as Chomsky suggests; hence the ability of children to pick up language so easily, regardless of the language involved, be it Welsh, Chinese, or English.

At least in terms of spoken language: even the dumbest illiterate can at least somewhat verbally communicate in their native language.

Without a time machine to go back to developing primitive cultures we can't be certain, but almost assuredly the tribes that formed "languages" were more apt to get along and survive, thus passing on those traits...

 

The thing is, most communication between humans in personal contact with one another is non-verbal body language, followed by the inflection and tone of the verbal communication.

 

As this applies to AI: humans of course have to initially program the AI, usually as algorithms and computer code; from there the AI can develop its own.

Even when computers master all written and spoken language, it will still be difficult for AI to understand inflection of voice, sarcasm, slang, etc...

 

example:

"Get out of here" - sarcastic tone meaning 'you must be joking'

"Get out of here" - serious tone, you are to leave

 

So this brings the "nature" of intelligent language full circle: one must not only translate and understand the words themselves, but also interpret the tone, circumstance, and body language. A tough task for any AI to "learn".
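A toy illustration of why the words alone underdetermine the intent: the same string resolves only once some tone feature is supplied alongside it. The labels below are invented:

```python
# The string by itself cannot decide between the two readings above;
# only the (words, tone) pair can.
intent = {
    ("get out of here", "sarcastic"): "you must be joking",
    ("get out of here", "serious"): "you are to leave",
}

print(intent[("get out of here", "sarcastic")])  # -> you must be joking
print(intent[("get out of here", "serious")])    # -> you are to leave
```

Of course a real system would have to *infer* the tone from audio or context, which is where the difficulty actually lives.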

 

Great thread, but it seems people tend to over-complicate it.

