
News from the world of maths:

Monday, July 28, 2008

Disney Pixar have just released the movie, WALL-E. A bleak, post-apocalyptic tour-de-force, the movie depicts the gentle romance between two robots of the future: WALL-E, the not-so-bright and not-so-attractive "guy" with the big heart and sweet personality, and EVE, the sleek, sexy, totally out-of-his-league babe.

Pixar designed these robots so that we see them as human. But what exactly is WALL-E? Is he pure fantasy and speculative fiction? Or is he — is artificial intelligence — simply the way of the future?



posted by westius @ 11:45 AM

3 Comments:

At 10:57 AM, Anonymous Patrick Andrews said...

"Instead of programming a computer to abide by the traditional step-by-step rules approach, we model it like the neurons in the human brain where the results of the program depend on the "strengths" of each particular neuron."

Neural computing runs on a computer; it too is algorithmic.

It seems to me that people are algorithmic as well, but that each of our internal programs can achieve intuitive leaps, insights and creative ideas by accepting data in many different forms as input: for example, a certain spreadsheet of numerical data can, from across the room, be interpreted as a portrait of Einstein.
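The neuron "strengths" quoted above can be sketched in a few lines of Python (an editorial illustration, not from the article): a single artificial neuron just computes a weighted sum of its inputs and fires if the sum clears a threshold. The weights play the role of the "strengths", and, as the commenter notes, the whole procedure is still an ordinary step-by-step algorithm.

```python
def neuron(inputs, weights, threshold=0.5):
    """Return 1 if the weighted sum of inputs exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two inputs, with the second "connection" much stronger than the first.
print(neuron([1, 1], [0.2, 0.9]))  # weighted sum 1.1 > 0.5, so it fires: 1
print(neuron([1, 0], [0.2, 0.9]))  # weighted sum 0.2 <= 0.5, so it stays off: 0
```

Learning, in this picture, is nothing more than an algorithm that adjusts the weights in response to examples.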

 
At 2:54 PM, Anonymous Anonymous said...

The reason computers will never be 'human' is that human reasoning, morality and intellect are not optimal. If a machine is to have the computational power to rival human thinking, pressing it into the mould of a human brain would be diametrically opposed to the aims of such a machine. We like to consider ourselves the pinnacle of evolution, but we are forgetting that i) evolution is an ongoing process, and ii) humans are not as clever as all that. In fact, looking around you, from Big Brother to the Iraq war, wouldn't a computer with a vast intelligence feel nothing but derision for our species?

 
At 5:09 PM, Blogger Terry said...

I agree with many of the comments left by Patrick Andrews.
It seems to me that the advancement of artificial intelligence is handicapped to a considerable extent by the intellectual capacity of the computer technocrat.
To illustrate with a trivial example: while working for the now-defunct computer manufacturer Sperry Univac, I asked a visiting VP what the company's position was following IBM's introduction of its (then) 'Personal Computer'. He replied that the company regarded the PC as a fad and that the future was in mainframe computers! It is this level of intellect that presents the handicap mentioned earlier.
Some years later, I had discussions with Professor Donald Michie of Edinburgh University about what he considered to be a problem in "pattern matching". Basically, the problem was to take a large amount of information about daily trading in a retail environment and establish through analysis what represented 'normal' trading. By this means, what represented 'abnormal' trading could be established. It was abnormal trading that was the holy grail being sought in this particular exercise. Sadly, we did not get very much further than stating the problem. The overall solution, though, was believed to lie in the computer's ability to learn from key factors in the environment of the problem. It was this ability, residing in the human counterpart (given the right degrees of application, intellect and experience), that could provide answers for discrete cases, albeit at the expense of considerable time and effort.
Terry Schooling
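The pattern-matching problem Terry describes can be given a minimal stand-in sketch (an editorial illustration, not Professor Michie's method, which the comment does not detail): characterise 'normal' daily takings by their mean and spread, then flag as 'abnormal' any day that deviates by more than a few standard deviations.

```python
import statistics

def abnormal_days(daily_takings, n_sigmas=2):
    """Return the indices of days whose takings lie more than
    n_sigmas standard deviations from the mean."""
    mean = statistics.mean(daily_takings)
    sd = statistics.stdev(daily_takings)
    return [i for i, x in enumerate(daily_takings)
            if abs(x - mean) > n_sigmas * sd]

takings = [100, 98, 103, 101, 99, 250, 102]  # day 5 looks suspicious
print(abnormal_days(takings))  # flags day 5 only
```

The hard part Terry points to is exactly what this toy version dodges: deciding, from a rich and changing trading environment, which factors to measure and what 'normal' should mean, a task that still fell to the experienced human counterpart.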

 
