“Just because a cell responds to a specific category of image doesn’t mean you really understand what it wants,” says Livingstone. So why not ask the neurons what they want to see? That was the idea behind XDREAM, an algorithm dreamed up by a Harvard student named Will Xiao. Xiao had previously trained XDREAM using 1.4 million real-world photos so that it would generate synthetic images with the properties of natural ones. Sets of those gray, formless images, 40 in all, were shown to watching monkeys, and the algorithm tweaked and shuffled those that provoked the strongest responses in chosen neurons to create a new generation of pics. Over 250 such generations, the synthetic images became more and more effective, until they were exciting their target neurons far more intensely than any natural image. “It was exciting to finally let a cell tell us what it’s encoding instead of having to guess,” says Ponce, who is now at Washington University in St. Louis. There’s a risk, of course, that XDREAM could become a glorified Rorschach test, in which researchers see what they want to see.
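The loop described above is essentially a genetic algorithm: show a population of 40 generated images, score each by how strongly the target neuron fires, then breed the strongest responders into the next generation, 250 times over. Here is a minimal, hypothetical sketch of that loop in Python; `generator` and `neuron_response` are stand-ins (the real system used a deep generative network and a live neuron’s firing rate), and every dimension and rate below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64     # toy code size; the real generator's input was far larger
POP_SIZE = 40       # 40 images per generation, as in the article
GENERATIONS = 250   # 250 generations, as in the article

def generator(code):
    """Stand-in for the trained image generator: latent code -> image."""
    return code  # placeholder: identity instead of a deep network

def neuron_response(image, preferred):
    """Stand-in fitness: a simulated neuron that fires hardest near its preferred stimulus."""
    return -np.linalg.norm(image - preferred)

def evolve(preferred, sigma=0.1):
    # Start from random codes -> gray, formless images.
    pop = rng.normal(size=(POP_SIZE, LATENT_DIM))
    for _ in range(GENERATIONS):
        scores = np.array([neuron_response(generator(c), preferred) for c in pop])
        survivors = pop[np.argsort(scores)[-(POP_SIZE // 2):]]  # strongest responders
        # "Tweak and shuffle": recombine random pairs of survivors, then mutate.
        pairs = survivors[rng.integers(len(survivors), size=(POP_SIZE, 2))]
        mask = rng.random((POP_SIZE, LATENT_DIM)) < 0.5
        pop = np.where(mask, pairs[:, 0], pairs[:, 1])
        pop += rng.normal(scale=sigma, size=pop.shape)
    return pop

preferred = rng.normal(size=LATENT_DIM)  # the neuron's (unknown) ideal stimulus
final_pop = evolve(preferred)
best_final = max(neuron_response(generator(c), preferred) for c in final_pop)
```

In this toy version, the surviving codes end up far closer to the simulated neuron’s preferred stimulus than any random starting image, mirroring how the synthetic images came to excite their target neurons more than natural photos did.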
You really wouldn’t want to hang them on your wall. But each one is close to the ideal stimulus for a particular neuron. “If cells are dreaming, these images are what the cells are dreaming about,” says Ponce. “It exposes the visual vocabulary of the brain, in a way that’s unbiased by our anthropomorphic perspective.” And collectively, they tell us something interesting about how our brain makes sense of the world, and how much we still don’t understand about that process. The first hints of that vocabulary emerged in 1962, when Torsten Wiesel and David Hubel showed that specific neurons in the brain’s visual centers are tuned to specific stimuli: lights moving in particular directions, or lines aligned in particular ways. Since then, other neuroscientists have identified neurons that respond to colors, curvatures, faces, hands, and outdoor scenes. But here’s the catch: Those scientists always chose which kinds of shapes to test, and their intuition might not reflect the actual stimuli to which the neurons are attuned. (The team’s findings were posted to bioRxiv in a January 2019 preprint, “Evolving super stimuli for real neurons using deep generative networks.”)
XDREAM’s images look like glitchy Kandinsky paintings viewed during a bad trip. “We all looked at it and said, ‘Oh, that’s Anthony,’” says Margaret Livingstone, a neuroscientist at Harvard Medical School. “And then, a few days later, we evolved Diane,” she adds. When the team hooked XDREAM up to another of the monkey’s visual neurons, it produced a distorted image of a face in a white mask. Diane is one of the monkeys’ caretakers, who feeds them while wearing blue scrubs and a white face mask.
Soon a red patch appeared next to it, which reminded the watching researchers of the red collar worn by a monkey who lives in the cage opposite Ringo’s.
In April 2018, a monkey named Ringo sat in a Harvard lab, sipping juice, while strange images flickered in front of his eyes. The pictures were created by an artificial-intelligence algorithm called XDREAM, which gradually tweaked them to stimulate one particular neuron in Ringo’s brain, in a region that’s supposedly specialized for recognizing faces. As the images evolved, the neuron fired away, and the team behind XDREAM watched from a nearby room. At first, the pictures were gray and formless. But as time passed, “from this haze, something started staring back at us,” says the neuroscientist Carlos Ponce. Two black dots with a black line beneath them, all against a pale oval.