The power of AI technology cannot be denied, and with each passing day we hear about new developments.
A recent breakthrough translates brain activity into a continuous stream of text. It's a massive development: for the first time, a person's thoughts can be read in an entirely non-invasive manner.
The innovative decoder can reconstruct speech with reasonable accuracy while people listen to stories, or silently imagine them, using only fMRI scan data. Decoding systems of the past required surgical implants, so this advance raises the prospect of new ways to restore speech to people who struggle to communicate because of strokes or motor neuron conditions.
One neuroscientist described the development as shocking in how well it works, adding that after working toward something like this for 15 years, finally seeing it happen before your eyes is really something.
The achievement overcomes an inherent limitation of fMRI: while the method can map brain activity to a specific area at high spatial resolution, there is a built-in time lag that makes tracking activity in real time difficult.
The lag exists because fMRI measures the blood-flow response to neural activity, which peaks and returns to baseline over a period of roughly 10 seconds. Even the most powerful scanner cannot improve on this: the signal is not only noisy but sluggish relative to the underlying neural activity.
That limitation makes it much harder to evaluate brain activity during natural speech, because each measurement reflects a mishmash of responses to everything heard over a span of several seconds.
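The smearing described above can be illustrated with a toy simulation (a minimal sketch, not the study's actual model): if each word is treated as an instantaneous neural event and convolved with a simple gamma-shaped hemodynamic response function (HRF), words spoken a couple of seconds apart blur into one overlapping bump.

```python
import numpy as np

# Illustrative sketch: fMRI's BOLD signal smears each neural event over
# ~10 seconds. We model the signal as word onsets convolved with a
# simple gamma-shaped HRF (a common approximation, not the study's code).

dt = 0.1                      # time step in seconds
t = np.arange(0, 30, dt)      # 30-second window

# Gamma-shaped HRF: rises, peaks around 5 s, decays back toward baseline.
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.max()              # normalize peak to 1

# Two words spoken 2 seconds apart, modeled as impulses.
events = np.zeros_like(t)
events[int(0 / dt)] = 1.0
events[int(2 / dt)] = 1.0

# The measured signal is the event train convolved with the HRF.
bold = np.convolve(events, hrf)[: len(t)] * dt

print(f"HRF peaks ~{t[np.argmax(hrf)]:.1f} s after the event")
# The two words' responses overlap almost entirely, so the scanner sees
# one blurred bump rather than two separate events.
```

This is why the decoder cannot simply read out activity word by word: several seconds of speech land inside a single blurred measurement.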
But the development of large language models, which represent the meaning of speech, gave scientists a way forward: they could look for neural patterns corresponding to words with a certain meaning, instead of attempting to read activity out one word at a time.
It was not easy or straightforward. The study required an intensive training process: three volunteers each spent 16 hours inside a scanner listening to podcasts. The decoder was then trained to match their brain activity to language with the help of a large language model, GPT-1, a precursor to ChatGPT.
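The decoding loop this implies can be sketched roughly as follows (a hypothetical toy, not the study's code; the function names, the candidate words, and the hash-based "encoding model" are all illustrative stand-ins): a language model proposes candidate continuations of the text so far, an encoding model predicts the brain activity each candidate would evoke, and the candidate whose prediction best matches the measured scan is kept.

```python
import hashlib
import numpy as np

DIM = 16  # toy "brain activity" dimensionality

def predict_activity(text):
    """Toy encoding model: a deterministic pseudo-random unit vector per
    text, standing in for a model that predicts evoked brain activity."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)

def propose_continuations(prefix):
    """Toy stand-in for a language model's next-word candidates."""
    return [prefix + word for word in (" story", " sound", " secret")]

def decode_step(prefix, measured):
    """Keep the candidate whose predicted activity best matches the scan."""
    candidates = propose_continuations(prefix)
    scores = [float(measured @ predict_activity(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Simulate a noisy scan evoked by the "true" continuation.
rng = np.random.default_rng(0)
truth = "she told a story"
measured = predict_activity(truth) + 0.05 * rng.standard_normal(DIM)
print(decode_step("she told a", measured))
```

The key design choice, as the article describes, is that the language model supplies meaning-level candidates so the decoder never has to resolve individual words directly from the sluggish fMRI signal.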
Experts are calling this a huge step forward for non-invasive technology compared with what came before.
by Dr. Hura Anwar via Digital Information World