Our machines slow down with time. It's a known fact (and a source of frustration), and we generally accept it, albeit grudgingly. The hardware in our machines struggles to handle the latest software, and the strain inevitably leads to wear and tear.
But Google has been looking into this and might have come up with an answer. In its latest research paper, the company has proposed a deep learning solution that might make our machines better with age!
There is a well-known bottleneck in computing that prefetching is meant to address: our processors handle information far faster than it can be fetched from memory. To avoid long stalls, the computer pulls information well in advance by predicting what might be needed next. But as technology has become more and more advanced, making these predictions has become a rather tricky task.
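To make the idea concrete, here is a toy sketch of rule-based prefetching, the kind of simple heuristic that learned approaches aim to improve on. The class name and the constant-stride heuristic are illustrative assumptions for this sketch, not how a real hardware prefetcher is implemented:

```python
# Toy sketch of prediction-based prefetching (illustrative only).
from collections import deque
from typing import Optional


class StridePrefetcher:
    """Predicts the next memory address by assuming a constant stride
    (gap) between consecutive accesses, a classic rule-based heuristic."""

    def __init__(self, history_size: int = 2):
        self.history = deque(maxlen=history_size)

    def observe(self, address: int) -> Optional[int]:
        """Record an access and return a predicted next address, if any."""
        self.history.append(address)
        if len(self.history) < 2:
            return None
        stride = self.history[-1] - self.history[-2]
        return address + stride  # fetch this address ahead of time


prefetcher = StridePrefetcher()
for addr in [0x1000, 0x1040, 0x1080, 0x10C0]:
    prediction = prefetcher.observe(addr)
    if prediction is not None:
        print(f"accessed {addr:#x}, prefetching {prediction:#x}")
```

A heuristic like this works well for regular access patterns (say, scanning an array) but breaks down on the irregular patterns of modern software, which is exactly the gap a learned model tries to close.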
This is what the research team at Google is aiming to solve: its deep learning model, built around a large simulated neural network, is designed to revamp the prefetching process. That said, Google has not yet published figures in its research paper showing the actual improvement in speed.
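The core idea is to frame prefetching as a sequence-prediction problem, much like predicting the next word in a sentence. Below is a minimal sketch of that framing in PyTorch: an LSTM that predicts the next address delta (the gap between consecutive memory accesses) as a classification over a fixed vocabulary of common deltas. The layer sizes, vocabulary size, and names here are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of prefetching as sequence prediction
# (hyperparameters are simplifying assumptions, not from the paper).
import torch
import torch.nn as nn


class DeltaLSTM(nn.Module):
    """Predicts the next address delta from a history of recent deltas,
    treated as classification over a fixed vocabulary of common deltas."""

    def __init__(self, vocab_size: int = 1024,
                 embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, delta_ids: torch.Tensor) -> torch.Tensor:
        # delta_ids: (batch, sequence_length) of integer delta IDs
        x = self.embed(delta_ids)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # logits for the next delta


model = DeltaLSTM()
history = torch.randint(0, 1024, (1, 16))  # dummy history of 16 delta IDs
logits = model(history)
predicted_delta = logits.argmax(dim=-1)  # most likely next delta
print(predicted_delta.shape)  # torch.Size([1])
```

The appeal of this framing is that the model can, in principle, pick up irregular access patterns that a fixed stride rule would miss.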
You can read Google’s official research paper here.
I personally feel this is just the tip of the iceberg. The researchers are confident that their deep learning model, once improved further, could potentially be applied to all components of a machine, from chip design to the operating system itself.
The model, or approach, could also be applied to machine learning algorithms themselves. Imagine if a model designed for marketing data could also be applied to financial datasets! But expectations at this point should be tempered: building these models is a computationally expensive task, and a lot of data is required to even begin to see improvements.
MIT professor Tim Kraska is also working on a similar problem: using deep learning to improve computer systems. These are exciting times in the machine learning community!