The future of AI music is about to get a whole lot better. Imagine being able to recreate a song simply by thinking about it! Thanks to artificial intelligence, this futuristic concept is becoming a reality. Scientists have achieved a groundbreaking feat by using AI to replicate music based on brain activity patterns, ushering in a new era of understanding the human mind’s interaction with music.
TL;DR: Researchers at UC Berkeley used AI to reconstruct recognizable audio of Pink Floyd’s “Another Brick in the Wall, Part 1” from electrical activity recorded directly from listeners’ brains — a step toward decoding music, and eventually speech, from neural signals.
In a recent study published in PLOS Biology, researchers from the University of California, Berkeley successfully used AI to recreate recognizable audio of Pink Floyd’s iconic song “Another Brick in the Wall, Part 1” by analyzing brain activity. The research involved recording electrical signals directly from the brains of epilepsy patients undergoing seizure treatment. As these patients listened to the song, electrodes placed on the surface of their brains captured activity in the auditory processing regions.
The recorded brain data was then fed into machine learning algorithms. These algorithms deciphered patterns in how the brain’s auditory cortex responds to musical components like pitch, tempo, vocals, and instruments, so the AI models learned to associate specific neural activity with corresponding acoustic features. To put it simply: electrodes recorded brain data while patients listened to the song; that data was fed into machine learning algorithms; and the resulting models linked the neural activity to musical features. A minimal sketch of what such a decoding model might look like is shown below.
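To make the idea concrete, here is a minimal sketch of a decoding model in this spirit: a regularized linear (ridge) regression that maps neural features to the frames of an audio spectrogram. The array shapes, the use of random placeholder data, the choice of ridge regression, and all hyperparameters are illustrative assumptions, not the authors’ actual pipeline.

```python
# Sketch: decode a spectrogram from neural activity with ridge regression.
# All data here is random placeholder data; shapes and parameters are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical dimensions: time frames, electrodes (neural features), spectrogram bins.
n_frames, n_electrodes, n_freq_bins = 5000, 128, 64
rng = np.random.default_rng(0)
neural_activity = rng.standard_normal((n_frames, n_electrodes))  # X: brain data per frame
spectrogram = rng.standard_normal((n_frames, n_freq_bins))       # y: acoustic features per frame

# Hold out the last 20% of frames for testing (no shuffling, to respect time order).
X_train, X_test, y_train, y_test = train_test_split(
    neural_activity, spectrogram, test_size=0.2, shuffle=False
)

# Ridge handles multi-output targets, so this fits one linear map per spectrogram bin.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)

# Predict a spectrogram from held-out brain activity alone.
predicted_spectrogram = decoder.predict(X_test)
print("Decoded spectrogram shape:", predicted_spectrogram.shape)
```

With real recordings, the placeholder arrays would be replaced by preprocessed electrode features (e.g., band-limited power over time) aligned to the spectrogram of the song the patient heard.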
Now, the trained AI models can generate new ‘spectrographic’ representations from brain data alone. These representations can then be converted back into audio waveforms; one way to do that conversion is sketched below. The resulting audio, while not perfect, is recognizably the song the person was listening to — in this study, it bore a clear resemblance to “Another Brick in the Wall, Part 1.”
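For illustration, here is a minimal sketch of that final step — turning a decoded magnitude spectrogram back into an audio waveform. It uses librosa’s Griffin–Lim phase reconstruction as a stand-in method; the `decoded_spectrogram` placeholder, the FFT size, hop length, and output filename are all assumptions for the example, not details from the study.

```python
# Sketch: convert a decoded magnitude spectrogram into audible waveform data.
# The spectrogram here is a random placeholder standing in for a model's output.
import numpy as np
import librosa
import soundfile as sf

sr, n_fft, hop_length = 22050, 1024, 256

# Placeholder magnitude spectrogram (frequency bins x time frames), e.g. the
# output of a decoding model like the one sketched earlier.
decoded_spectrogram = np.abs(np.random.randn(n_fft // 2 + 1, 400))

# Griffin-Lim iteratively estimates the missing phase and inverts the STFT.
waveform = librosa.griffinlim(
    decoded_spectrogram, n_iter=32, hop_length=hop_length, n_fft=n_fft
)

# Write the reconstructed audio to disk.
sf.write("reconstructed.wav", waveform, sr)
```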
So, what does this all mean? Well, this achievement marks a significant advancement in decoding complex musical stimuli based solely on brain processing. If confirmed through further research, it could revolutionize thought decoding, which has previously been limited to individual words or letters.
Dr. Robert Knight, a UC Berkeley neuroscientist and the study’s senior author, explained that the chosen Pink Floyd song’s intricate instrumentation made it a suitable test case. He added that the approach holds potential for any genre of music, and even for capturing the melodic attributes of natural speech.
Moreover, the researchers envision applications beyond music recreation. The technology could eventually help severely paralyzed patients or stroke victims regain the ability to speak through thought alone. Brain-computer interfaces that decode text from noninvasive brain scans are already in development, and adding melody and prosody could enable more comprehensive thought reconstruction: thought-to-speech interfaces could give voice to the speech-impaired. Beyond clinical applications, these decoding techniques offer opportunities to study memory, learning, and creativity by reading thoughts, bringing us closer to understanding what happens within the mind.
As Knight aptly put it, “Today we reconstructed a song; maybe tomorrow we can reconstruct the entire Pink Floyd album.” This breakthrough bridges the gap between the intricacies of music, the human brain, and the potential of AI.
Written by: Soft FM Radio Staff