EPR paradox: signals faster than light?

A group of physicists at the University of Geneva (UNIGE), led by Nicolas Gisin, one of the pioneers of quantum teleportation, has just set bounds on the speed of a hypothetical faster-than-light signal that has been proposed to explain the EPR paradox in classical terms.

Quantum mechanics features a very famous effect known as the Einstein-Podolsky-Rosen paradox, or EPR paradox. In 1935, Einstein and his two younger colleagues published a paper attempting to prove that quantum mechanics could not be the ultimate description of a quantum of light or of matter, because it led to phenomena that violated at least the spirit of special relativity.

In that picture, two particles, such as photons produced by the decay of another particle, appear as an indivisible whole: any measurement on one of them, by modifying its state, instantly modifies the state of the other, even if the two are separated by millions of light-years. This conclusion seemed hard to reconcile with Einstein's theory of relativity, which implies that no signal can travel faster than light in the Universe.

To describe the peculiar state of such particle pairs, quantum mechanics speaks of entangled pairs, and there is a mathematical theory that defines precisely what entanglement means for physical systems.

Spooky actions at a distance

In fact, a careful analysis of the phenomenon shows, as Niels Bohr argued, that it is possible to keep both Einstein's theory and the laws of quantum mechanics if one accepts a kind of "non-locality". Objects in the Universe would not fundamentally exist in space and time; it would only be through a kind of perspective effect that we split a reality made of a single block, fundamentally beyond space and time, into a collection of particles and/or waves in space and time.

This does not mean that space and time are illusions, only that the pictures of reality we build with these concepts are misleading approximations, even though they remain valid within a certain domain of our experience. It is a conclusion already reached by Plato, Kant and the Hindu philosophers with the notion of "Maya".

This conclusion is rejected by physicists who follow in the footsteps of John Bell.


On the left, John Bell; on the right, the Nobel laureate Martinus Veltman.
Credit: CERN, AIP Emilio Segre Visual Archives.

Recall that it was Bell who, in the 1960s, discovered a series of mathematical inequalities making it possible to decide who was right: Einstein, or the proponents of the standard interpretation of quantum mechanics, the Copenhagen interpretation. In 1982, the French physicist Alain Aspect showed experimentally that non-locality, in agreement with the laws of orthodox quantum mechanics, was indeed real.

In the 1950s, Einstein's original thought experiment, based on measurements of the positions and velocities of material particles, had been recast theoretically by David Bohm in terms of experiments on photon polarization. It was this kind of experiment that Aspect and his colleagues carried out. By violating the famous Bell inequalities, they showed that the strange "spooky actions at a distance" (in Einstein's words) implied by quantum entanglement were indeed there.
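For readers who want the inequality itself, here is a minimal sketch of the CHSH form of Bell's inequality, the version actually tested in Aspect-type experiments; the notation is standard textbook material and is not taken from the article. If E(a, b) denotes the correlation between polarization measurements made along directions a and b on the two photons, then

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2
\]

for any local hidden-variable theory, whereas quantum mechanics predicts values up to |S| = 2\sqrt{2} \approx 2.83 for suitably chosen polarizer angles. It is this violation of the classical bound of 2 that Aspect's experiments observed.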

The matter seemed settled, but John Bell and others would not let go. In their intuition, quantum mechanics, with Heisenberg's inequalities, Bohr's complementarity principle and all the probability amplitudes it relies on, could not be the ultimate expression of reality.

Bell then turned to an approach that was particularly iconoclastic for a defender of Einstein's ideas.

An iconoclastic hypothesis

What if not only quantum mechanics but also special relativity were wrong, in the same sense that Newton's theory is wrong relative to them?

Could there not exist, deep down, a kind of absolute reference frame, somewhat as in pre-relativistic Newtonian physics, in which a sub-quantum dynamics would take place, with some interactions genuinely able to travel faster than light?

In that case, the thoroughly classical pictures of waves and particles in space and time, and determinism itself, could be restored.

Needless to say, such a possibility seems rather unnatural, and the latest tests of Einstein's special relativity show that the theory is remarkably robust. But in the end, what do we really know?


The geographical locations of the Swiss physicists' experiments. Credit: Nature.

It is in this context that the work of Nicolas Gisin's group at the University of Geneva should be placed. Using optical fibres of the Swisscom network running 18 km between Satigny and Jussy in the Geneva region, the physicists carried out an EPR-type experiment with pairs of entangled photons. By taking advantage of the Earth's rotation over a 24-hour period, it becomes possible to test theories that rely on the existence of a kind of absolute reference frame, an ether of sorts, relative to which the Earth would move at no more than one thousandth of the speed of light.

The researchers' conclusion, as they explain in Nature, is the following: if such an absolute reference frame existed, the speed of the interactions between entangled particles would have to be at least 10,000 times the speed of light to account for the strange quantum correlations manifested in the observed non-locality.
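To see roughly where a figure like 10,000 c comes from (a simplified sketch; the 6 ns timing window below is an assumed, illustrative value, not one quoted in the article), suppose the two detection events, separated by a distance D, are found to be correlated even though they occur within a time window Δt of each other in the hypothetical preferred frame. Any influence linking them would then have to travel at

\[
v \;\ge\; \frac{D}{\Delta t} \;\approx\; \frac{18\ \text{km}}{6\ \text{ns}} \;\approx\; 3 \times 10^{12}\ \text{m/s} \;\approx\; 10^{4}\, c .
\]

The actual analysis in the Nature paper is more involved: the measurements are repeated over 24 hours so that, as the Earth rotates, the bound holds whatever the orientation of the hypothetical preferred frame.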


The opening of the Einstein-Podolsky-Rosen paper

Source: Futura Sciences

Are We Spiritual Machines?

The twenty-first century will see a blurring of the line between human and machine as neural implants become more prevalent. Eventually, machines will become “spiritual”–or as Kurzweil means it, “conscious.”


Ray Kurzweil vs. the Critics of Strong A.I.

In the closing session of the 1998 Telecosm conference, hosted by Gilder Publishing and Forbes at Lake Tahoe, inventor and author Ray Kurzweil engaged a number of critics. He advocated “Strong Artificial Intelligence” (AI), the claim that a computational process sufficiently capable of altering or organizing itself can produce “consciousness.” The session had an unexpectedly profound impact, not least because so many important questions, from technology to philosophy, converge on this one issue. This volume reproduces and expands upon that initial discussion.

Esteemed AI advocate Ray Kurzweil opens the volume arguing that by 2019, a personal computer will rival the processing power of the human brain. He is convinced that artificial intelligence–with the capability to “feel” and think like a human–will necessarily emerge. The twenty-first century will see a blurring of the line between human and machine as neural implants become more prevalent. Eventually, machines will become “spiritual”–or as Kurzweil means it, “conscious.”

Kurzweil also sees an analogy between technological evolution and traditional accounts of Darwinian evolution. Under Darwinism, life-forms took billions of years to develop but then exploded in a short burst of diversification. Kurzweil calls this the “law of accelerating returns”: technological innovation in the 20th century surpassed that of all previous centuries combined, and at this rate computational power doubles every year. By 2050, a personal computer will have the computing power of all the human brains on Earth. Kurzweil believes the human mind can be reproduced simply by reverse-engineering it. Eventually, human minds will be downloaded and “cloned.” He then envisions software-based “humans” which can effectively live forever, or at least as long as their hardware lasts. Humans can become like God.
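As a back-of-the-envelope illustration of the arithmetic behind such projections (a sketch only: the starting figures below are common order-of-magnitude estimates, not numbers taken from the book), a few lines of Python show how annual doubling compounds:

# Illustrative "accelerating returns" arithmetic; all constants are assumptions.
PC_OPS_2000 = 1e9               # ops/second of a ~2000-era personal computer (assumed)
ONE_BRAIN = 1e16                # often-quoted estimate for one human brain (assumed)
ALL_BRAINS = ONE_BRAIN * 1e10   # roughly ten billion people (assumed)

def year_reached(target, ops=PC_OPS_2000, year=2000):
    """First year in which annual doubling from 'ops' reaches 'target'."""
    while ops < target:
        ops *= 2
        year += 1
    return year

print(year_reached(ONE_BRAIN))    # ~2024 under these assumptions
print(year_reached(ALL_BRAINS))   # ~2057, in the ballpark of Kurzweil's 2050 claim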

Skeptics of Kurzweil then have their chance to respond.

Senior Discovery Institute Fellows George Gilder and Jay Richards write that Kurzweil’s vision of the future “seems to be a substitute vision for those who have lost faith in the traditional object of religious belief.” Gilder and Richards observe that in the debate over A.I., one’s premises have a large effect upon one’s conclusions. For example, if humans are only “a carbon-based, complex, computational, collection of atoms” then of course we can truly replicate human behavior through machines. But if we’re not simply matter-machines, then perhaps A.I. will never truly be able to reproduce a human mind.

Senior Discovery Institute Fellow William Dembski takes aim at Kurzweil’s arguments, saying they lead to an “impoverished spirituality.” Since materialism predicts that mind is reducible to matter, Dembski argues that “[m]achine spirituality neglects much that has traditionally been classified under spirituality” for “[t]he spiritual experience of a machine is necessarily poorer than the spiritual experience of a being that communes with God.” “How,” Dembski asks, “can a machine be aware of God’s presence?” There is something vastly deficient about Kurzweil’s conceptions of spirituality.

John Searle explains that when Deep Blue beat Garry Kasparov, Deep Blue wasn’t really “thinking” about chess, while Kasparov fully understood the game he was playing. Deep Blue could duplicate the playing of chess, but wasn’t really “playing chess.” In short, Kurzweil would say “if it looks the same and feels the same, then it really is the same.” Searle pushes us to ask the deeper question: “Is it really the same?”

Michael Denton makes a similar argument: he concedes that if living organisms really are in all respects analogous to inorganic machines, then Kurzweil may very well be right. But Denton has doubts. Living organisms undergo complete reproduction of both “hardware” and “software”–something that no machine can do. Moreover, living organisms cannot be reduced to genes–meaning that something more than “software” is necessary for life.

This debate is unlikely to end any time soon, but how technology advances in this century may eventually settle it. For a preview of debates to come, the prophecies of Kurzweil–and the responses from critics–are well worth reading. Fifty years from now, one side will be able to say “We were right.”

Contributors not associated with Discovery include Thomas Ray, Michael Denton, Ray Kurzweil, and John Searle.

By: Richards, Jay W.
Free Press
June 1, 2002

Can we make software that comes to life?

Is evolution about to enter a new phase? Today, 300 biologists, computer scientists, physicists, mathematicians, philosophers and social scientists from around the world are gathering in Winchester. Their aim is to address one of the greatest challenges in modern science: how to create a genuine artificial life form.


Intelligent design: self-aware computers such as Pixar’s Wall-E are surprisingly tricky to put together

The idea that life owes its existence to some “vital essence” or “animating spark” has long been discredited in scientific circles. Instead, it is believed that the first living thing emerged after a chemical reaction crossed the watershed that divides inanimate objects from the kind of self-replicating “organic” reactions that run our cells.

Researchers into artificial life, or “ALife”, take two basic approaches. In “wet” ALife, scientists either tinker with microbes and other forms of simple life, or try to cook up cocktails of chemicals in water (hence “wet”) that have the capacity to extract energy and raw materials from the environment, to grow and reproduce, and ultimately to evolve. Meanwhile, “in silico” ALifers use silicon chips to try to kindle the spark of life in the heart of a computer.

In the latter field, a celebrated experiment was carried out almost two decades ago by Dr Thomas Ray, at the University of Delaware. It was the first successful demonstration of Darwinian evolution inside a computer: organisms – scraps of computer code – fought for memory (space) and processor power (energy) within a cordoned-off “nature reserve” inside the machine.

His evocative experiment was called “Tierra”, after the Spanish for “Earth”. Back in 1993, when I met him in Oxford, it seemed to be a vital tool in helping us understand why the world is seething with diversity, from rainforest to coral reef.

For evolution to occur, Dr Ray had to allow his programs to mutate. The “Tierran” programming language he devised was robust enough that it could often work after mutations. He also had natural selection: a program called the reaper killed off old and faulty software, enabling more successful organisms to monopolise resources.

On January 3 1990, he started with a program some 80 instructions long, Tierra’s equivalent of a single-celled sexless organism, analogous to the entities some believe paved the way towards life. The “creature” – a set of instructions that also formed its body – would identify the beginning and end of itself, calculate its size, copy itself into a free region of memory, and then divide.

Before long, Dr Ray saw a mutant. Slightly shorter than its ancestor, it was able to make more efficient use of the available resources, so its descendants grew in number until they outnumbered the original strain. Subsequent mutants needed even fewer instructions, so could carry out their tasks more quickly, grazing on more and more of the available computer space.

A creature appeared with about half the original number of instructions, too few to reproduce in the conventional way. Being a parasite, it was dependent on others to multiply. Tierra even went on to develop hyper-parasites – creatures which forced other parasites to help them multiply. “I got all this ecological diversity on the very first shot,” Dr Ray told me.
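A minimal sketch of the kind of mutate-copy-reap loop described above, written in Python (a toy illustration under assumed parameters, not Ray’s actual Tierran instruction set or code):

import random

# Toy digital-evolution loop in the spirit of Tierra (illustrative only).
# Each creature is a list of integer "instructions"; shorter genomes
# replicate faster, so selection favours them, echoing the shrinking
# creatures Ray observed.

ALPHABET = list(range(32))   # 32 possible instructions (assumed)
SOUP_CAPACITY = 500          # how many creatures the memory "soup" can hold
MUTATION_RATE = 0.01         # per-instruction copy-error probability

def replicate(genome):
    """Copy a genome with occasional point mutations and rare deletions."""
    child = [random.choice(ALPHABET) if random.random() < MUTATION_RATE else g
             for g in genome]
    if len(child) > 2 and random.random() < MUTATION_RATE:
        del child[random.randrange(len(child))]   # genome length can shrink
    return child

ancestor = [random.choice(ALPHABET) for _ in range(80)]   # ~80 instructions, like Ray's first creature
soup = [ancestor]                                         # oldest creatures sit at the front

for _ in range(5000):
    # shorter creatures are cheaper to copy, so parent choice is biased towards them
    parent = min(random.sample(soup, k=min(3, len(soup))), key=len)
    soup.append(replicate(parent))
    if len(soup) > SOUP_CAPACITY:
        soup.pop(0)   # the "reaper" removes the oldest creature

print("shortest genome after the run:", min(len(g) for g in soup))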

Other versions of computer evolution followed. Researchers thought that with more computer power, they could create more complex creatures – the richer the computer’s environment, the richer the ALife that could go forth and multiply.

But these virtual landscapes have turned out to be surprisingly barren. Prof Mark Bedau of Reed College in Portland, Oregon, will argue at this week’s meeting – the 11th International Conference on Artificial Life – that despite the promise that organisms could one day breed in a computer, such systems quickly run out of steam, as genetic possibilities are not open-ended but predefined. Unlike the real world, the outcome of computer evolution is built into its programming.

His conclusion? Although natural selection is necessary for life, something is missing in our understanding of how evolution produced complex creatures. By this, he doesn’t mean intelligent design – the claim that only God can light the blue touch paper of life – but some other concept. “I don’t know what it is, nor do I think anyone else does, contrary to the claims you hear asserted,” he says. But he believes ALife will be crucial in discovering the missing mechanism.

Dr Richard Watson of Southampton University, the co-organiser of the conference, echoes his concerns. “Although Darwin gave us an essential component for the evolution of complexity, it is not a sufficient theory,” he says. “There are other essential components that are missing.”

One of these may be “self-organisation”, which occurs when simpler units – molecules, microbes or creatures – work together using simple rules to create complex patterns and behaviour.

Heat up a saucer of oil and it will self-organise to form a honeycomb pattern, with adjacent “cells” forming as the oil circulates by convection. In the correct conditions, water molecules will self-organise into beautiful six-sided snowflakes. Add together the correct chemicals in something called a BZ reaction, and one can create a “clock” that periodically changes colour.

At the Winchester conference, Prof Takashi Ikegami, from the University of Tokyo, will explain the ways that self-organisation operates among birds, to help them form flocks, and in robots, children, flies and cells, too. Another keynote speaker will be Prof Peter Schuster of the University of Vienna.

With the Nobel laureate Manfred Eigen, he came up with the idea of the “hypercycle” – different components “feeding on each other’s waste” while maintaining an (often fragile) overall stability. This scheme was used to show how simple chemicals could have co-operated to create the first living things billions of years ago.
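For the mathematically inclined, the elementary hypercycle is usually written as a set of coupled replicator equations (a standard textbook sketch, not reproduced from the article):

\[
\dot{x}_i = x_i \left( k_i\, x_{i-1} - \sum_{j=1}^{n} k_j\, x_j\, x_{j-1} \right), \qquad i = 1, \dots, n \ \ (\text{indices taken mod } n),
\]

where x_i is the concentration of the i-th replicator, k_i is the rate at which species i-1 catalyses the replication of species i, and the subtracted term keeps the total concentration constant. Each member grows only with the help of its predecessor, which is what makes the cycle cooperative and its stability fragile.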

“Evolution on its own doesn’t look like it can make the creative leaps that have occurred in the history of life,” says Dr Seth Bullock, another of the conference’s organisers. “It’s a great process for refining, tinkering, and so on. But self-organisation is the process that is needed alongside natural selection before you get the kind of creative power that we see around us.

“Understanding how those two processes combine is the biggest challenge in biology.”

ARTIFICIAL LIFE LOG

A simple single-celled amoeba has been turned into a computer by Drs Masashi Aono and Masahiko Hara at the Japanese research institute Riken. They harnessed the way that the creature responds to light to allow it to solve a famous puzzle called the travelling salesman problem.

The set-up is this: a sales rep has, say, six cities to visit. To minimise his travel costs, he must find the shortest route between them, one that visits every city just once. Because the number of possible routes grows factorially with the number of cities, once the cities number in the hundreds an exact answer is beyond even the cleverest computers.
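A brute-force sketch in Python makes the combinatorial explosion concrete (the six city coordinates are invented for illustration):

import math
from itertools import permutations

# Brute-force travelling salesman: try every possible ordering of the cities.
# Feasible for a handful of cities, hopeless much beyond that, because the
# number of routes grows as (n-1)!, faster than any polynomial.

cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 2), "F": (4, 5)}

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    legs = zip(order, order[1:] + order[:1])
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

start = "A"
rest = [c for c in cities if c != start]
best = min(((start,) + p for p in permutations(rest)), key=tour_length)
print("best tour:", " -> ".join(best), "| length:", round(tour_length(best), 2))
print("routes examined:", math.factorial(len(rest)))   # 5! = 120 for six cities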

In this case, the slime mould Physarum polycephalum solved the problem for four cities. The team harnessed two facts: the creature wants to have the biggest body area, but dislikes light. Thus they forced it to search for nutrients down specific branches – the routes between the cities – while it tried to minimise its exposure to illumination.

Hugo Marques of Essex University will discuss a pioneering attempt to give computers some imagination – which he believes “may be a significant step towards building a robot with a mental life”. He will try to mimic the relationship between the human brain and body by giving a robotic consciousness a skeleton to inhabit.

Scientists are also hoping to enhance our ability to study pollution and climate change by using “smart dust” – wireless chips with their own power supply and sensors that link to each other via radio. The Winchester conference will hear a proposal by Prof Davide Anguita and Dr Davide Brizzolara at the University of Genoa for a marine equivalent called smart plankton, which will provide “shoal intelligence”.

A new way to fight junk emails has been developed by Alaa Abi-Haidar of Indiana University and Luis Rocha of the Instituto Gulbenkian de Ciência, in Portugal. It is inspired by the way the body’s immune system fights off invading diseases and, according to Dr Abi-Haidar, promises greater resilience than existing systems to changes in the ratio of spam received compared to normal email (“ham”).

Roger Highfield

Source: telegraph.co.uk

Web pages have ‘come alive and started breeding’

Living web sites that grow, develop and evolve to suit the taste of the people that read them are now finding their way on to the internet.

For two decades, computer scientists have played around with evolutionary software that can gradually evolve and mutate to carry out a task efficiently, or hone the design of a wing, robot or whatever, without the need for a programmer to get involved.


A grouping of some of the sites with human-controlled design properties or genetic design evolution

Now these techniques are being used to allow web sites to keep themselves up to date and to adapt to the latest fads and fashion, reports New Scientist.

Not only do they evolve more quickly than would be possible with human intervention, they also offer the chance to come up with new ways of organising material on the web that work best for users.

Matthew Hockenberry and Ernesto Arroyo of Creative Synthesis, a non-profit organisation in Cambridge, Massachusetts, have created evolutionary software that alters colours, fonts and hyperlinks of pages in response to what seems to grab the attention of the people who click on the site. See www.creativesynthesis.net for more.

To start, Hockenberry used mouse-tracking software, developed by Arroyo while at the Massachusetts Institute of Technology, on 24 people who were asked to use a basic web site template for a blog.

Once the blog went live, control of the design was out of their hands.

The software treated each feature as a “gene” that was randomly changed as a page was refreshed.

After evaluating what seemed to work, it killed the genes associated with lower-scoring features – say, a link in an Arial font that was being ignored – and replaced them with those from higher-scoring ones – say, Helvetica.
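A sketch of the kind of evolutionary loop being described, in Python (the feature names and the scoring stand-in below are invented for illustration; this is not Creative Synthesis’s actual code):

import random

# Toy genetic algorithm over page-style "genes", in the spirit of the
# Creative Synthesis experiment. attention_score stands in for real
# mouse-tracking data and is purely illustrative.

GENE_POOL = {
    "font": ["Arial", "Helvetica", "Georgia", "Verdana"],
    "link_colour": ["blue", "green", "red", "purple"],
    "heading_size": ["small", "medium", "large"],
}

def random_page():
    return {gene: random.choice(options) for gene, options in GENE_POOL.items()}

def attention_score(page):
    """Placeholder for measured user attention (clicks, mouse hovers, ...)."""
    return random.random()

population = [random_page() for _ in range(8)]

for _ in range(100):
    scored = sorted(population, key=attention_score, reverse=True)
    survivors = scored[: len(scored) // 2]          # kill the low-scoring half
    offspring = []
    for parent in survivors:
        child = dict(parent)
        gene = random.choice(list(GENE_POOL))       # mutate one gene per child
        child[gene] = random.choice(GENE_POOL[gene])
        offspring.append(child)
    population = survivors + offspring              # replace losers with variants

print("best page after 100 generations:", max(population, key=attention_score))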

“We see a lot of terrible designs for the first 100 or so generations,” Hockenberry tells New Scientist.

But the pages gradually morph to be more pleasing. Interestingly, they do not simply reflect a consensus of what people want to see, since the random element means the exercise is truly creative.

“The mutations will always occur and while they are responsive to human attention, they are not bound by them. It is possible to develop unique mutations that may actually influence human goals (rather than the other way around).”

Prof Gregg Vanderheiden of the University of Wisconsin-Madison says sites that cater to people with disabilities would particularly benefit from evolving pages.

And evolutionary computing researcher Charles Ofria of Michigan State University in East Lansing says the idea might remove the need to constantly test websites on users in the way that companies like Amazon, Google and Facebook now do.

The work is reminiscent of the way that evolutionary methods were used to create organic art by the American Karl Sims – at an exhibition, art was continually evolving by breeding the images that people liked to look at, and killing those that were unpopular.

“A lot of the work done in genetic / organic art certainly serves as a significant intellectual inspiration,” says Hockenberry.

“The most significant difference is the goal of targeting the real public in a process. We want to add a sense of responsibility to this genetic growth. Does the process make sense? Does it do something useful? How do people work within this process and support it?

“Most of the examples of using genetic algorithms are about making something – and then showing the result for interaction. We want human creativity to be a driving force within a process of computer genetic evolution. So while pages might be growing – it still matters if humans take care of them and they can still influence the growth in very significant ways.”

Roger Highfield

Source: telegraph.co.uk

‘Virophage’ suggests viruses are alive

Evidence of illness enhances case for life.


Giant mamavirus particles (red) and satellite viruses
of mamavirus called Sputnik (green).

The discovery of a giant virus that falls ill through infection by another virus [1] is fuelling the debate about whether viruses are alive.

“There’s no doubt this is a living organism,” says Jean-Michel Claverie, a virologist at the CNRS UPR laboratories in Marseilles, part of France’s basic-research agency. “The fact that it can get sick makes it more alive.”

Giant viruses have been captivating virologists since 2003, when a team led by Claverie and Didier Raoult at CNRS UMR, also in Marseilles, reported the discovery of the first monster [2]. The virus had been isolated more than a decade earlier in amoebae from a cooling tower in Bradford, UK, but was initially mistaken for a bacterium because of its size, and was relegated to the freezer.

Closer inspection showed the microbe to be a huge virus with, as later work revealed, a genome harbouring more than 900 protein-coding genes [3] — at least three times more than that of the biggest previously known viruses and bigger than that of some bacteria. It was named Acanthamoeba polyphaga mimivirus (for mimicking microbe), and is thought to be part of a much larger family. “It was the cause of great excitement in virology,” says Eugene Koonin at the National Center for Biotechnology Information in Bethesda, Maryland. “It crossed the imaginary boundary between viruses and cellular organisms.”

Now Raoult, Koonin and their colleagues report the isolation of a new strain of giant virus from a cooling tower in Paris, which they have named mamavirus because it seemed slightly larger than mimivirus. Their electron microscopy studies also revealed a second, small virus closely associated with mamavirus that has earned the name Sputnik, after the first man-made satellite.

With just 21 genes, Sputnik is tiny compared with its mama — but insidious. When the giant mamavirus infects an amoeba, it uses its large array of genes to build a ‘viral factory’, a hub where new viral particles are made. Sputnik infects this viral factory and seems to hijack its machinery in order to replicate. The team found that cells co-infected with Sputnik produce fewer and often deformed mamavirus particles, making the virus less infective. This suggests that Sputnik is effectively a viral parasite that sickens its host — seemingly the first such example.

The team suggests that Sputnik is a ‘virophage’, much like the bacteriophage viruses that infect and sicken bacteria. “It infects this factory like a phage infects a bacterium,” Koonin says. “It’s doing what every parasite can — exploiting its host for its own replication.”

Sputnik’s genome reveals further insight into its biology. Although 13 of its genes show little similarity to any other known genes, three are closely related to mimivirus and mamavirus genes, perhaps cannibalized by the tiny virus as it packaged up particles sometime in its history. This suggests that the satellite virus could perform horizontal gene transfer between viruses — paralleling the way that bacteriophages ferry genes between bacteria.


Virophages may be common in plankton blooms.
J. SCHMALTZ/NASA

The findings may have global implications, according to some virologists. A metagenomic study of ocean water [4] has revealed an abundance of genetic sequences closely related to giant viruses, leading to a suspicion that they are a common parasite of plankton. These viruses had been missed for many years, Claverie says, because the filters used to remove bacteria screened out giant viruses as well. Raoult’s team also found genes related to Sputnik’s in an ocean-sampling data set, so this could be the first of a new, common family of viruses. “It suggests there are other representatives of this viral family out there in the environment,” Koonin says.

By regulating the growth and death of plankton, giant viruses — and satellite viruses such as Sputnik — could be having major effects on ocean nutrient cycles and climate. “These viruses could be major players in global systems,” says Curtis Suttle, an expert in marine viruses at the University of British Columbia in Vancouver.

“I think ultimately we will find a huge number of novel viruses in the ocean and other places,” Suttle says — 70% of viral genes identified in ocean surveys have never been seen before. “It emphasizes how little is known about these organisms — and I use that term deliberately.”

Helen Pearson.

Source: Nature

References

[1] La Scola, B. et al. Nature doi:10.1038/nature07218 (2008).

[2] La Scola, B. et al. Science 299, 2033 (2003).

[3] Raoult, D. et al. Science 306, 1344–1350 (2004).

[4] Monier, A., Claverie, J.-M. & Ogata, H. Genome Biol. 9, R106 (2008).