The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
That’s one company and one model referring only to material discovery. There are other models and companies.
Yes, it’s an example of how there are claims being made that don’t hold up.
You can find that kind of example for literally every segment of science and society. Showing a single example out of many and then saying “see? The claims are false” is disingenuous at best.
https://www.artsci.utoronto.ca/news/researchers-build-breakthrough-ai-technology-probe-structure-proteins-tools-life
https://www.broadinstitute.org/news/researchers-use-ai-identify-new-class-antibiotic-candidates
I think you’re not seeing the nuance in my statements and instead are extrapolating inappropriately, perhaps even disingenuously.
I’m not missing the nuance of what you said. It’s just irrelevant for the discussion in this thread.
My comment that you initially replied to was talking about much more than just LLMs, but you singled out the one point about LLMs and offered a single article talking about DeepMind’s results on material discoveries. A very specific example.
It’s about the relevance of AI as a tool for profit, stemming from the top-level comment implying an AI winter is coming.
But to go back to your point about the article you shared, I wonder if you’ve actually read it. The whole discussion is about what is effectively a proof-of-concept by Google, and not a full effort to truly find new materials. They said that they “selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is ‘credible,’ ‘useful,’ and ‘novel.’”
And in the actual analysis, which the article is about, they wrote: “we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound.”
Ultimately, everyone involved in analysing the results agreed the concept is sound and will likely lead to breakthroughs in the future, but this specific result (and a similar one from another group) has not produced any significant or noteworthy new materials.
I’m not reading that because you clearly would rather argue than have a conversation. Enjoy the rest of your day.
Sure, just like you didn’t read the article you linked to.
I did read it btw, since you shared it.