Google scientists have modelled a fragment of the human brain at nanoscale resolution, revealing cells with previously undiscovered features.
Brain cable management be like
It’s all just cables now. Someone took the server out years ago and it just kept working out of habit.
Electricians will deny this is true but then just make up a new word for it (inductance)
This is one of the most incredible things I’ve ever seen. The complexity of life just continues to astound me.
Yes! That this thing could evolve into existence is practically a miracle
then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D.
Why AI for that?
ML is pretty common when working with a ton of data. From another article:
To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”
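Conceptually, the stitching part is image registration at an absurd scale: estimate how each slice is offset relative to its neighbour, warp it into place, and stack the results into a volume (labelling the neurons is the harder ML problem). Here is a toy sketch of the alignment idea in Python, just rigid shifts on fake data, nothing like their actual pipeline:

```python
# Toy illustration of slice-to-slice alignment, not Google's pipeline.
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_stack(slices):
    """Rigidly align each 2D slice to the previous one and stack into a 3D volume."""
    aligned = [slices[0]]
    for current in slices[1:]:
        # Estimate the (row, col) offset between this slice and the last aligned one.
        offset, _, _ = phase_cross_correlation(aligned[-1], current)
        # Shift the slice so the shared structures line up.
        aligned.append(shift(current, offset))
    return np.stack(aligned)  # shape: (num_slices, height, width)

# Fake data: 10 slices of 256x256 "tissue".
volume = align_stack([np.random.rand(256, 256) for _ in range(10)])
print(volume.shape)  # (10, 256, 256)
```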
Unfortunately techbros have poisoned the term AI 🥲
Source: Google helped make an exquisitely detailed map of a tiny piece of the human brain
That is amazing.
Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D.
The map is so large that most of it has yet to be manually checked, and it could still contain errors created by the process of stitching so many images together. “Hundreds of cells have been ‘proofread’, but that’s obviously a few per cent of the 50,000 cells in there,” says Jain.
Ah so it’s not a real model, just an AI approximation.
It still seems like a real model to me. Just because they used a fancy computer to turn a sequence of 2D slices into a 3D representation doesn’t mean it’s not real.
Google can do this, but can’t maintain Google Assistant features we’ve had for years?
Fortunately, the people working on brain research aren’t the same people programming the Assistant.
Why is Google doing this research?!?
Harvard has been partnering with Google’s research labs for the last decade to get access to hardware and algorithms they wouldn’t have on their own.