The biggest physics experiment ever, the Large Hadron Collider (LHC), is due to start on September the 10th. The LHC is a particle accelerator: a 27km underground tunnel located near Geneva at the European Organisation for Nuclear Research (CERN). Protons will be sent racing through the tunnel on a collision course with each
other, and scientists hope that the debris of these collisions will answer some of the biggest questions about the Universe.
Watch this space for more information. Meanwhile, you can find out more in the Plus articles
The Channel 4 programme Countdown has started the search to find an arithmetician for the new series in 2009.
If you are a number-cruncher yourself, you could be just what they're looking for. Find out how to apply on their website. You don't have to be female and you don't have to have been on telly before, but you do need to have a way with numbers and lots of charisma. The closing date for applications is the 19th of
Zipf's law arose out of an analysis of language by linguist George Kingsley Zipf, who theorised that given a large body of language (that is, a long book — or every word uttered by Plus employees during the day), the frequency of each word is close to inversely proportional to its rank in the frequency table. We thought we
would test this out on Plus. What does this imply about how we use language and how it evolved?
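One quick way to test this on any body of text is to count word frequencies, rank them, and check whether rank × frequency stays roughly constant, as Zipf's law predicts. A minimal sketch in Python (the sample text is purely illustrative):

```python
from collections import Counter

def zipf_table(text, top=10):
    """Rank words by frequency. Under Zipf's law, frequency is roughly
    inversely proportional to rank, so the product rank * frequency
    should be roughly constant down the table."""
    counts = Counter(text.lower().split())
    ranked = counts.most_common(top)
    return [(rank, word, freq, rank * freq)
            for rank, (word, freq) in enumerate(ranked, start=1)]

sample = "the cat sat on the mat and the dog sat on the cat"
for row in zipf_table(sample, top=4):
    print(row)
```

On a text this short the fit is of course very rough; the law only emerges over large bodies of language.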
Is it really a mystery? I have at least one idea off the top of my head.
One of the ways you can construct power-law-distributed networks (scale-free networks) is through growth/decay rules, e.g. each newly added link has the highest probability of connecting to the node with the highest degree, i.e. the most existing connections. Thinking a little about how language evolves by adopting and abandoning words, it seems likely that word frequencies could follow a
power law because words are added and removed over time under a similar set of rules (at some level).
The only question is what exactly do such network nodes and their degrees map to?
Nodes seem to map to words, or perhaps to the ideas represented by the words (word-sounds or word-ideas). If the nodes map to ideas, then there is also a link to memes and various mind-external scale-free structures.
Nodal degree seems to relate to usage of the word: either simply the frequency of usage, or something deeper that results in that frequency.
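The growth rule sketched above is usually called preferential attachment, and it is easy to simulate: each new node links to an existing node chosen with probability proportional to its current degree. A minimal illustration (not a model of any real language data):

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network one node at a time. Each new node attaches to an
    existing node chosen with probability proportional to its degree,
    a rule known to yield a power-law degree distribution."""
    rng = random.Random(seed)
    degrees = [1, 1]          # start from two connected nodes
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # pick an attachment target weighted by current degree
        target = rng.choices(range(len(degrees)), weights=degrees)[0]
        edges.append((new, target))
        degrees[target] += 1
        degrees.append(1)     # the newcomer has one link so far
    return degrees, edges

degrees, edges = preferential_attachment(1000)
print(max(degrees), min(degrees))  # a few well-connected hubs, many degree-1 nodes
```

Sorting the resulting degrees and plotting them against rank on log-log axes gives the familiar roughly straight line of a power law, mirroring the Zipf plot for word frequencies.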
Mathematics is often used to study evolution, whether that be the evolution of animal species, the evolution of viruses or the evolution of language. A recent study has taken this one step further by modelling the evolution of national cuisine, finding that even though there are wall-to-wall celebrity chefs on television these days trying to broaden our culinary horizons, our cultural
cuisines are largely the same as they were almost 100 years ago.
After every Olympics, there is speculation about which country performed best. Should we really be surprised when China, with its huge population, and the US, with its combination of high GDP and population, top the medal table? Can we take a look at the medal tables and see which countries did indeed perform better than expected?
The model provided by the fancy pants bureau thingy is stupid.
It has A(Log[X]) + B(Log[Y/X]), which is the same as A(Log[X]) + B(Log[Y]) - B(Log[X]), which is the same as (A-B)(Log[X]) + B(Log[Y]). And as A and B are just constants, A-B can be anything; say C. They've added in a completely unnecessary element.
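The algebra is easy to spot-check numerically; the parameter values below are arbitrary, chosen only to illustrate that the two forms agree:

```python
import math

def model_original(A, B, X, Y):
    # the published form: A*Log[X] + B*Log[Y/X]
    return A * math.log(X) + B * math.log(Y / X)

def model_simplified(A, B, X, Y):
    # the same model with the redundant parameter folded in: C = A - B
    C = A - B
    return C * math.log(X) + B * math.log(Y)

print(math.isclose(model_original(2.0, 0.5, 3.0, 7.0),
                   model_simplified(2.0, 0.5, 3.0, 7.0)))  # True
```

So fitting A and B separately gains nothing over fitting C and B directly.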
Well, statistics and damn statistics. The problem is that your raw data is wrong, and, as with all statistics, the way you look at it is partial. Is there a relation between GDP and medals, and should there be? Difficult to say; I would have thought it is about social structure and expectations as well. GB did well in various areas because expectation, resources and talent came together, and it
did badly in others because one or more of these was missing.
Disney Pixar have just released the movie, WALL-E. A bleak, post-apocalyptic tour-de-force, the movie depicts the gentle romance between two robots of the future: WALL-E, the not-so-bright and not-so-attractive "guy" with the big heart and sweet personality, and EVE, the sleek, sexy, totally
Pixar designed these robots so that we see them as human. But what exactly is WALL-E? Is he pure fantasy and speculative fiction? Or is he — is artificial intelligence — simply the way of the future?
"Instead of programming a computer to abide by the traditional step-by-step rules approach, we model it like the neurons in the human brain where the results of the program depend on the "strengths" of each particular neuron."
Neural computing runs on a computer...it too is algorithmic.
It seems to me that people are algorithmic as well but that each of our internal programs can achieve intuitive leaps, insights and creative ideas by accepting as input, data in many different forms...eg a certain spreadsheet of numerical data can be interpreted, from across the room as a portrait of Einstein.
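Both points above, that the "strengths" of the neurons drive the result and that the whole thing is still algorithmic, can be seen in a single-neuron sketch (the weights below are illustrative only):

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a step activation. The 'strengths' are the weights."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# with these illustrative weights the neuron computes logical AND
print(neuron([1, 1], [1.0, 1.0], -1.5))  # prints 1: fires only when both inputs are 1
```

Change the weights and the same step-by-step procedure computes a different function, which is exactly what "training" a network does; but the procedure itself remains an ordinary algorithm.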
The reason why computers will never be 'human' is that human reasoning, morality and intellect are not optimal. If a machine had the computational power to rival human thinking, pressing it into the mould of a human brain would be diametrically opposed to the aims of such a machine. We like to consider ourselves the pinnacle of evolution, but we are forgetting that i) evolution
is an on-going process, and ii) humans are not as clever as all that. In fact, looking around you, from Big Brother to the Iraq war, wouldn't a computer with a vast intelligence feel nothing but derision for our species?
I agree with many of the comments left by Patrick Andrews.
It seems to me that the advancement of artificial intelligence is handicapped to a considerable extent by the intellectual capacity of the computer technocrat.
Using a trivial example to illustrate: while working for the now-defunct computer manufacturer Sperry Univac, I asked a visiting VP what the company's position was following IBM's introduction of its (then) 'Personal Computer'. He replied that the company regarded the PC as a fad and the future was in mainframe computers! It is this level of intellect that presents the handicap mentioned above.
Some years later, I was to have discussions with Professor Donald Michie of Edinburgh University about what he considered to be a problem in "pattern matching". Basically, the problem was to take a large amount of information about daily trading in a retail environment and establish through analysis what represented 'normal' trading. By this means, what represented 'abnormal' trading could be
established. It was abnormal trading that was the holy grail being sought in this particular exercise. Sadly, we did not get very much further than stating the problem. The overall solution, though, was believed to lie in the computer's ability to learn from key factors in the environment of the problem. It was this ability, residing in the human counterpart (given the right degrees of
application, intellect and experience), that could provide answers for discrete cases, albeit at the expense of considerable time and effort.
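One crude modern stand-in for the 'normal versus abnormal trading' problem described above is a simple outlier test: learn the mean and spread of daily takings, then flag any day that deviates too far from them. This is only an illustrative sketch, not the approach Professor Michie had in mind:

```python
import statistics

def flag_abnormal(daily_totals, threshold=3.0):
    """Return the indices of days whose takings lie more than
    `threshold` standard deviations from the mean: a crude notion of
    'abnormal' trading relative to learned 'normal' trading."""
    mean = statistics.mean(daily_totals)
    sd = statistics.stdev(daily_totals)
    return [i for i, t in enumerate(daily_totals)
            if abs(t - mean) > threshold * sd]

takings = [100] * 19 + [1000]   # nineteen ordinary days, one strange one
print(flag_abnormal(takings))   # prints [19]
```

Real retail data would of course need seasonality, day-of-week effects and trend stripped out first; the hard part Michie identified, learning what 'normal' means from the environment itself, is exactly what this toy version glosses over.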