• Deestan@lemmy.world · 77 points · 1 year ago

    A full 100% sounds weird. It means complete overlap with the ASD assessment, which itself isn’t bulletproof. Weird in the way that suggests mistakes in the data: e.g. all ASD pictures taken on the same day and picking up a telltale date timestamp, “ASD” written in the metadata or filename, or different lighting in different labs.
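    A quick sanity check for that kind of leakage is to see whether the labels can be predicted from metadata alone, without looking at a single pixel. A minimal sketch, assuming a hypothetical data/asd and data/control folder layout:

    ```python
    # Sanity check: if capture metadata alone predicts the label,
    # the dataset leaks and any pixel-level result is suspect.
    from pathlib import Path

    from PIL import Image
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rows, labels = [], []
    for label, folder in [(1, "data/asd"), (0, "data/control")]:
        for path in Path(folder).glob("*.jpg"):
            exif = Image.open(path).getexif()
            # EXIF tag 306 = DateTime, "YYYY:MM:DD HH:MM:SS"
            stamp = str(exif.get(306, "0000:00:00 00:00:00"))
            year, month, day = (int(x) for x in stamp.split(" ")[0].split(":"))
            rows.append([year, month, day])
            labels.append(label)

    # High accuracy here means the classes were photographed on
    # different days -- a shortcut any CNN will happily exploit.
    scores = cross_val_score(DecisionTreeClassifier(), rows, labels, cv=5)
    print(f"label-from-timestamp accuracy: {scores.mean():.2f}")
    ```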

    I didn’t see any immediate problems in the published paper, but if these were my results I’d be too worried to publish them.

    • sosodev@lemmy.world · 55 points · 1 year ago

      It sounds like the model is overfitting its data. They say it scored 100% on the test set, which almost always means the model has learned quirks of that particular dataset and will flop in the real world.
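      One cheap check before trusting a perfect test score: make sure no image, or near-duplicate of one, appears in both splits. A sketch using a crude average hash; the train/test folder layout is an assumption:

      ```python
      # Look for near-duplicate images shared between train and test --
      # a common cause of "100%" test accuracy.
      from pathlib import Path

      import numpy as np
      from PIL import Image

      def average_hash(path, size=8):
          """Downscale to grayscale size x size; bit = pixel above mean."""
          img = Image.open(path).convert("L").resize((size, size))
          pixels = np.asarray(img, dtype=np.float32)
          return tuple((pixels > pixels.mean()).flatten())

      train_hashes = {average_hash(p) for p in Path("data/train").rglob("*.jpg")}
      dupes = [p for p in Path("data/test").rglob("*.jpg")
               if average_hash(p) in train_hashes]

      print(f"{len(dupes)} test images also appear (near-)identically in train")
      ```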

      I think we shouldn’t put much weight on this news article. This is just more overblown hype for the sake of clicks.

    • dave@feddit.uk · 15 points · 1 year ago

      The paper mentions how the images were processed (cropping 10% off some to remove name, age, etc.), but all were from the same centre and only pixel data was used. Given the other work referenced on retinal thinning in ASD, maybe it is a relatively simple task for this kind of model. But they do say using multi-centre images will be an important part of the validation, and it’s quite possible the performance would drop away once differences in camera, lighting, etc. are factored in.
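      Holding out whole centres is the honest way to test that: train on some imaging sites and evaluate on one the model has never seen, so it can’t score by recognising a site’s camera. A sketch with scikit-learn’s grouped splitting; the features, labels and centre assignments below are random stand-ins:

      ```python
      # Leave-one-centre-out validation: test on an unseen imaging site
      # so camera/site artifacts can't inflate the score.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 32))          # stand-in image features
      y = rng.integers(0, 2, size=600)        # stand-in ASD/control labels
      centres = rng.integers(0, 4, size=600)  # which of 4 sites each scan is from

      scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                               groups=centres, cv=LeaveOneGroupOut())
      # On real data, a big drop from the within-centre score is the
      # telltale sign of site leakage.
      print(scores)
      ```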

  • sosodev@lemmy.world · 49 points · 1 year ago (edited)

    We need to be very careful with news outlets that focus on science hype. Oftentimes they jump to conclusions based on poorly written papers that have yet to be peer reviewed and reproduced.

    Just take a look at the homepage of this website. They post several times a day, and much of it is obvious clickbait backed by very little journalistic integrity.

  • vzq · 38 points · 3 months ago (edited)

    deleted by creator

      • Krzak@discuss.online · 3 points · 1 year ago

        It’s easier to reason with a doctor than with a computer. I can imagine you’d be in the system for good after such an “evaluation”, so it could mean slim chances of retesting.

  • Bouchtroubouli@lemmy.world · 32 points · 1 year ago

    Well, 100% accuracy after removing all the noise in the dataset…

    At least it proves that their method can separate the two extremes. But what about real life, where 90% of people sit somewhere in between?
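    That intuition is easy to demonstrate: a threshold that cleanly splits two far-apart groups falls apart once the populations overlap. A toy simulation, with every number made up for illustration:

    ```python
    # Perfect separation of extremes says little about the overlapping
    # middle of the spectrum, where most real cases sit.
    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(gap):
        """Two groups of a 1-D 'biomarker' `gap` sd apart, midpoint threshold."""
        a = rng.normal(0.0, 1.0, 10_000)   # controls
        b = rng.normal(gap, 1.0, 10_000)   # cases
        thresh = gap / 2
        return ((a < thresh).mean() + (b >= thresh).mean()) / 2

    print(f"extremes only (gap = 8 sd):     {accuracy(8.0):.3f}")  # ~1.000
    print(f"realistic overlap (gap = 1 sd): {accuracy(1.0):.3f}")  # ~0.69
    ```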

  • echo@lemmings.world · 23 points · 1 year ago

    I am highly skeptical. My guess is that both the article and the research are deeply flawed.

    • RGB3x3@lemmy.world · 16 points · 1 year ago

      Nothing in science is 100%. You could survey 100,000 people about what color the sky is and you wouldn’t get 100% saying it’s blue.

  • bionicjoey@lemmy.ca · 19 points · 1 year ago

    It worries me that this research came out of South Korea, a country which I’ve heard is particularly stigmatizing of neurodivergence.

  • SeeMinusMinus@lemmy.world · 6 points · 1 year ago

    What I want to see is how the test would go if it included people with other conditions as well. There’s a good chance it would easily misdiagnose people if used outside the context of just NTs and autistic people. Countless other conditions could also cause whatever the AI is seeing.
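    One concrete way to test that concern: run the trained classifier on cohorts with other diagnoses and count how often it wrongly flags them. A sketch where the model and image batches are stand-ins for the real ones:

    ```python
    # Confound check: feed the ASD-vs-control model retinal images from
    # people with *other* diagnoses and measure the false-positive rate.
    import numpy as np

    rng = np.random.default_rng(0)

    class StubModel:
        """Stand-in for the trained classifier; replace with the real one."""
        def predict(self, images):
            return rng.integers(0, 2, size=len(images))  # 1 = flagged as ASD

    def false_positive_rate(model, images):
        """Fraction of non-ASD images the model still flags as ASD."""
        return float(np.mean(model.predict(images) == 1))

    model = StubModel()
    for condition in ["ADHD", "epilepsy", "diabetic retinopathy"]:
        images = rng.normal(size=(200, 64, 64))  # stand-in image batch
        print(f"{condition}: flagged as ASD in "
              f"{false_positive_rate(model, images):.0%} of cases")
    # High rates would mean the model detects something broader than
    # autism -- exactly the misdiagnosis risk described above.
    ```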