Who would’ve thought? This isn’t going to fly with the EU.

Article 5.3 of the Digital Markets Act (DMA): “The gatekeeper shall not prevent business users from offering the same products or services to end users through third-party online intermediation services or through their own direct online sales channel at prices or conditions that are different from those offered through the online intermediation services of the gatekeeper.”

Friendly reminder that you can sideload apps without jailbreaking or paying for a developer account by using TrollStore, which exploits CoreTrust bugs to bypass/spoof certain app-validation signatures. It works on an iPhone XR or newer running iOS 14.0 through 16.6.1 (and on ANY iOS version for the iPhone X and older).

Install guide: TrollStore

  • OsrsNeedsF2P@lemmy.ml · 11 months ago (edited)

    In practice, most common open source software is used and contributed to by hundreds of people, so it naturally gets audited through that process. Closed-source software can't be confirmed not to be malicious, so it can't be confirmed to be secure; which brings me back to my original point: it can't be private.

    I didn’t go into that much detail in my original comment, but this is what I meant when I first wrote it. As for “does everyone audit the software they use”, the answer is obviously no. But the software I use is mostly FOSS and contributed to by dozens of users, sometimes including myself, so when alarms are rung over even the smallest things, you get a better idea of the attack vectors and privacy implications.

    • BorgDrone@lemmy.one · 11 months ago

      In practice, most common open source software is used and contributed to by hundreds of people. So it naturally does get audited by that process.

      Just working on software is not the same as actively looking for exploits. Software security auditing requires a specialised set of skills. Open source also makes it easier for black-hat hackers to find exploits.

      Hundreds of people working on something is a double-edged sword: it also makes it easy for someone to sneak in an exploit. A single-character mistake in code can cause an exploitable bug, and if you deliberately introduce such an issue it can be very hard to spot; even if it is caught, it can be explained away as an honest-to-god mistake.

      By contrast, lots of software companies screen their employees, especially if they are working on critical code.

      • OsrsNeedsF2P@lemmy.ml · 11 months ago (edited)

        I don’t know if you really believe what you’re saying, but I’ll continue answering anyway. I worked at Manulife, the largest private insurance company in Canada. Our security team was mostly focused on pen testing (which, as you know, in contrast to audits tells you nothing about whether a system is secure), and the audits we did run were infrequent and limited in scope. Most corporations don’t do audits at all (and hire the cheapest engineers for the job), and as a consumer there’s no easy way to tell which audits covered the security aspects you care about.

        If you want to talk about the security of open source more, besides what is already mentioned above: not only are Google, Canonical and Red Hat growing their open source security teams (together employing close to 1,000 people whose job is to audit and patch popular open source software), but open source projects can also pay for audits themselves (see Mullvad or Monero as examples).

        I will concede that it is possible for proprietary software to be secure. But in practice it simply isn’t, and it’s too hard to verify. It’s certainly not secure compared to similar open source offerings.