Google scientists have modelled a fragment of the human brain at nanoscale resolution, revealing cells with previously undiscovered features.

    • Echo Dot@feddit.uk · 7 months ago

      It’s all just cables now. Someone took the server out years ago and it just kept working out of habit.

  • Onii-Chan@kbin.social · 7 months ago

    This is one of the most incredible things I’ve ever seen. The complexity of life just continues to astound me.

    • Gamma@beehaw.org (OP) · 7 months ago

      Yes! That this thing could evolve into existence is practically a miracle

  • MonkderDritte@feddit.de · 7 months ago

    then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D.

    Why AI for that?

    • Gamma@beehaw.org (OP) · 7 months ago

      ML is pretty common when working with a ton of data. From another article:

      To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”

      Unfortunately techbros have poisoned the term AI 🥲

      Source: Google helped make an exquisitely detailed map of a tiny piece of the human brain
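      To make the "stitching" step less mysterious: the core idea is classic slice-to-slice registration. A minimal sketch (not the team's actual pipeline — function names and the translation-only assumption are mine), using numpy, that aligns each 2D slice to the previous one by phase correlation and stacks the result into a 3D volume:

      ```python
      import numpy as np

      def estimate_shift(a, b):
          """Estimate the integer (dy, dx) translation that aligns slice b to slice a,
          via phase correlation (normalized FFT cross-correlation)."""
          F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          F /= np.abs(F) + 1e-12            # keep only the phase information
          corr = np.fft.ifft2(F).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # wrap shifts larger than half the image into negative offsets
          if dy > a.shape[0] // 2: dy -= a.shape[0]
          if dx > a.shape[1] // 2: dx -= a.shape[1]
          return dy, dx

      def stack_slices(slices):
          """Shift each slice to match the previous aligned one; return a 3D array."""
          aligned = [slices[0]]
          for s in slices[1:]:
              dy, dx = estimate_shift(aligned[-1], s)
              aligned.append(np.roll(s, (dy, dx), axis=(0, 1)))
          return np.stack(aligned)          # shape: (n_slices, height, width)
      ```

      Real electron-microscopy reconstruction also has to handle rotation, tissue warping, and staining variation across thousands of slices, which is where the learned models earn their keep.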

  • Hazelnoot [she/her]@beehaw.org · 6 months ago

    Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D.

    The map is so large that most of it has yet to be manually checked, and it could still contain errors created by the process of stitching so many images together. “Hundreds of cells have been ‘proofread’, but that’s obviously a few per cent of the 50,000 cells in there,” says Jain.

    Ah so it’s not a real model, just an AI approximation.

    • I_am_10_squirrels@beehaw.org · 6 months ago

      It still seems like a real model to me. Just because they used a fancy computer to turn a sequence of 2d slices into a 3d representation doesn’t mean it’s not real.

    • Gamma@beehaw.org (OP) · 7 months ago

      Fortunately the people working on brain research aren’t the same people programming assistants.

    • Gamma@beehaw.org (OP) · 6 months ago

      Harvard has been partnering with their research labs for the last decade to gain access to hardware and algorithms they wouldn’t have themselves.