• 4 Posts
  • 31 Comments
Joined 1 month ago
Cake day: July 18th, 2025

  • Historically, Firefox has had fewer security measures than Chrome. For example, full tab isolation was only implemented in Firefox recently, many years after Chrome. MV3-only extensions in Chrome also reduce the attack surface from that perspective.

    The counterpoint to this is that there are far fewer users of Firefox, so it is a less attractive target to exploit. Finding a vulnerability in Chrome is much more lucrative, since it has the potential to reach more targets.


  • a CoT means externally iterating an LLM

    Not necessarily. Yes, a chain of thought can be provided externally, for example through user prompting or another source, which can even be another LLM. One of the key observations behind the models commonly referred to as reasoning models is that, if an external LLM can be used to provide “thoughts”, the model could also generate those steps itself, without depending on external sources.

    To do this, it generates “thoughts” around the user’s prompt, essentially exploring the space around it and trying different options. These generated steps are added to the context window and are usually much larger than the prompt itself, which is why these models are sometimes referred to as long chain-of-thought models. Some frontends will show a summary of the long CoT, although this is normally not the raw context itself, but rather a summarised and re-formatted version. A rough sketch of the difference is below.
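    For illustration only, here is a minimal Python sketch of the difference, assuming a hypothetical `generate()` function that stands in for any LLM call: in the external version the caller appends each “thought” back into the prompt, while a reasoning model appends its own thought tokens to its context within a single call.

    ```python
    # Illustrative sketch only; `generate` is a hypothetical stand-in for an LLM call.
    def generate(prompt: str) -> str:
        """Placeholder for a real LLM API/client call."""
        return f"<model output for: {prompt[:40]}...>"

    # (a) Externally iterated chain of thought: the *caller* appends each thought.
    def external_cot(question: str, steps: int = 3) -> str:
        context = question
        for i in range(steps):
            thought = generate(f"Think step {i + 1} about:\n{context}")
            context += "\n" + thought
        return generate(f"Using the reasoning above, answer:\n{context}")

    # (b) A "reasoning" model does the appending internally: one call emits thought
    #     tokens into its own context and then the answer. Frontends usually show
    #     only a summarised version of those (often very long) thoughts.
    def reasoning_model(question: str) -> str:
        return generate(f"<think step by step, then answer>: {question}")

    print(external_cot("Why is the sky blue?"))
    print(reasoning_model("Why is the sky blue?"))
    ```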






  • RoadTrain@lemdro.id to Technology@lemmy.world · No bias, no bull AI
    What if AI didn’t just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: “A 2024 MIT study funded by the National Science Foundation…” or “How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers…”. Even this basic sourcing adds essential context.

    Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can’t be certain that what follows are actual findings of the study or source referenced, the value of the citation is still relatively low. You would still have to look up the sources manually to confirm the information. And this paragraph a bit further up shows why that is a problem:

    But for me, the real concern isn’t whether AI skews left or right, it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.

    This is also the biggest concern for me, and not only for teenagers. Yes, showing sources is good. But if people rarely check them, this alone isn’t enough to improve the quality of the information people obtain and retain from LLMs.




  • I use GroundNews. Their biggest value to me is that I can see the headlines for the same coverage from different sources before I read the text. A lot of times this alone is enough to tell me if there is actual content there or just speculation/alarmism. If I do decide to read the content, it’s a very easy way to get a few different perspectives on the same matter, and over time I start to recognise patterns in the reporting styles even when I’m not reading through GroundNews.

    Another useful feature is that you can paste an article link or headline and it will show you alternative sources for the same coverage. This doesn’t always find useful alternatives, but it’s a simple, easy way to do basic fact-checking.

    And while most people here might not appreciate it, when they aggregate multiple sources, they also have an LLM-written summary of the content of the articles. The (somewhat ironic) thing about these summaries is that they’re often the least biased, most factual interpretation of the news compared to all the sources covering it. This is because the summaries are generated from all of the content, so when the LLM finds weak or contrasting information, it won’t report it as a fact; when most of the sources agree, it will summarise the conclusion. This is an excellent use of LLMs in my opinion, but you can use GroundNews perfectly fine without it. A rough sketch of the idea is below.
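    For illustration, here is a minimal sketch of that aggregation idea, not GroundNews’s actual pipeline, again assuming a hypothetical `generate()` LLM call: every source covering the same story goes into one prompt, and the instructions ask the model to assert only what the sources agree on and to flag contradictions instead of picking a side.

    ```python
    # Illustrative sketch of multi-source summarisation; NOT GroundNews's real
    # pipeline. `generate` is a hypothetical stand-in for any LLM call.
    def generate(prompt: str) -> str:
        """Placeholder for a real LLM API/client call."""
        return f"<summary generated from a {len(prompt)}-character prompt>"

    def summarise_coverage(articles: list[str]) -> str:
        # Put every source into one prompt so the model can compare them directly.
        joined = "\n\n---\n\n".join(articles)
        prompt = (
            "Summarise the following articles, which cover the same story.\n"
            "State as fact only what most of the sources agree on; where the "
            "sources contradict each other, report the disagreement rather "
            "than picking a side.\n\n" + joined
        )
        return generate(prompt)

    # Usage with three (hypothetical) article texts:
    print(summarise_coverage(["article A ...", "article B ...", "article C ..."]))
    ```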