Artificial Intelligence (AI) can read your mind… if you are hooked up to an fMRI machine and it is trained on how you process visual information.
Researchers at Osaka University in Japan have found that AI can be trained to reconstruct high-resolution images from human brain activity gathered in fMRI scans, producing reconstructions that bear a striking resemblance to the source images shown to participants.
A preprint of the study, by Yu Takagi and Shinji Nishimoto, describes how the researchers used the image-generating deep learning model Stable Diffusion to translate the images in people's heads into AI reconstructions, using data from the fMRI scans.
“The study uses functional magnetic resonance imaging (fMRI) to map brain activity, looking at tiny changes in blood flow that indicate when certain parts of the brain are working,” Joseph Early, a doctoral student in AI at the Alan Turing Institute, who was not involved in the study, told Newsweek.
“By showing people pictures while they’re undergoing fMRI scans, the parts of the brain that ‘light up’ in response to different images can be identified,” Early said. “These different responses are mapped to a format that is familiar to the existing generative image models, and can then be used to generate new images.”
To train the AI, each participant was shown 10,000 images while inside an fMRI scanner. This was repeated three times, and the resulting fMRI data was fed to a computer so that it could learn how each participant’s brain processed the images.
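To give a rough sense of how such a voxel-to-image-model mapping might work, here is a minimal Python sketch. It is not the authors' actual pipeline: the data, dimensions, and variable names are invented placeholders, and a simple ridge regression stands in for whatever mapping the study used to predict a generative model's latent features from brain activity.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: real work would use measured fMRI voxel responses and
# the latent features a diffusion model assigns to each viewed image.
rng = np.random.default_rng(0)
n_images, n_voxels, latent_dim = 1000, 500, 64
voxel_responses = rng.normal(size=(n_images, n_voxels))
true_weights = rng.normal(size=(n_voxels, latent_dim))
image_latents = voxel_responses @ true_weights + 0.1 * rng.normal(size=(n_images, latent_dim))

# Per-participant mapping: learn to predict image latents from brain activity.
X_train, X_test, y_train, y_test = train_test_split(
    voxel_responses, image_latents, test_size=0.2, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)

print("held-out R^2 of voxel-to-latent mapping:", decoder.score(X_test, y_test))
# In the real pipeline, predicted latents for new brain scans would be handed
# to the generative model to render the reconstructed image.

Because every brain responds differently, as Early notes later in the article, a separate mapping of this kind would have to be fitted for each participant.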
Interestingly, the AI was better at “reading” some people’s brain activity than others.
Despite these differences, in most cases there were clear similarities in objects, color schemes, and composition between each image shown to the participants and the AI’s reconstruction of it.
Takagi, an assistant professor at Osaka University and co-author of the paper, said that the researchers were very surprised by the results of their study.
“The most interesting part of our research is that the diffusion model—so-called image-generating AI which [...] was not created to understand the brain—predicts brain activity well and can be used to reconstruct visual experiences from the brain,” he told Newsweek.
In the study, the AI reconstructed what the participants were seeing by analyzing their brain activity. But Takagi said that this technique could theoretically be used to assemble images straight from a person’s imagination.
“When we see things, visual information captured by the retina is processed in a brain region called the visual cortex located in the occipital lobe,” he said. “When we imagine an image, similar brain regions are activated. It is [therefore] possible to apply our technique to brain activity during imagination, but it is currently unclear how accurately we can decode such activity.”
Takagi said that this technology could potentially be used to develop brain-machine interfaces in clinical and creative contexts.
“What is unique about this tool is that it does not require physical manipulation of a device,” Laura Herman, a doctoral student at the Oxford Internet Institute, told Newsweek. “Therefore, there are exciting possibilities for creatives with physical disabilities who have historically been excluded from using creative tools that may require certain motor abilities.
“That said, the dangers of this tool are enormous—and any risk is exacerbated for vulnerable communities, such as those with disabilities. It is difficult to overstate the privacy and security risks that come with allowing access to one’s fMRI data.
“As we’re seeing here, this data can be used to literally reconstruct internal, private thoughts; in the hands of the wrong actors, this data would enable an unprecedented level of surveillance by monitoring the very thoughts in one’s brain. Though the technological outputs may be enticing, it is difficult to imagine that they would be worth sharing your intimate fMRI data.”
AI doctoral student Early said that deploying such a technology outside of a lab environment would be very difficult, so we can hold off worrying about AI-powered mind readers for now.
“Firstly, fMRI scans are needed to measure the brain activity that is used to generate images, and the machines that perform these scans often cost over $1 million,” he said.
“Secondly, in its current state, the method needs to learn to map an individual’s brain activity: everyone will have different responses when shown the same image, so the method needs to be personalized to each user,” Early added.
At present, generating images from a person’s brain activity is both costly and time-intensive, and work in this area remains heavily research-focused. But Takagi said that the study was an interesting demonstration of the similarities and differences between how an AI and the human brain interpret the world.
“We believe that our work demonstrates the potential of the integration of AI and neuroscience research communities, and provides some implications for how the two fields might interact in the future,” he said.