Safe Streets Rebel’s protest comes after autonomous vehicles were blamed for incidents including crashing into a bus and running over a dog. City officials in June said…

    • rtxn@lemmy.world · 48 points · 1 year ago

It’s not a strawman argument, it is a fact. Without the ability to audit the entire codebase of self-driving cars, there’s no way to know whether a manufacturer has knowingly hidden something in the code that causes accidents and fatalities linked to faults in self-driving technology, failures too numerous to recount but too important to ignore.

      I was actually trying to find an article I’d read about Tesla’s self-driving software reverting to manual control moments before impact, but I was literally flooded by fatality reports.

      • HobbitFoot @thelemmy.club · 21 points · 1 year ago

        We can’t audit the code for humans, but we still let them drive.

If the crash rate for computer drivers is lower than for humans, and the computer designers are held as financially liable for car crashes as human drivers are, why shouldn’t we let computers drive?
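The comparison this comment proposes can be sketched in a few lines. This is purely illustrative: the crash counts and mileage below are made-up numbers, not real statistics, and the acceptance rule is just the commenter's stated condition.

```python
# Sketch of the proposed rule: allow computer drivers only if their
# crash rate per mile is below the human baseline.
# All numbers are hypothetical, chosen only for illustration.

def crashes_per_million_miles(crashes, miles):
    """Normalize a raw crash count to a per-million-mile rate."""
    return crashes / (miles / 1_000_000)

human_rate = crashes_per_million_miles(crashes=480, miles=100_000_000)     # assumed baseline
computer_rate = crashes_per_million_miles(crashes=210, miles=100_000_000)  # assumed measurement

allow_computers = computer_rate < human_rate
print(human_rate, computer_rate, allow_computers)  # 4.8 2.1 True
```

Under these invented numbers the rule says yes; the debate below is about whether the measured numbers can be trusted at all.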

        • Shayreelz@sh.itjust.works · 18 points · 1 year ago

I’m not fully in either camp in this debate, but FWIW, the humans we let drive generally suffer consequences if an accident is due to their own negligence.

          • Obi@sopuli.xyz · 12 points · 1 year ago

            Also we do audit them, it’s called a license. I know it’s super easy to get one in the US but in other countries they can be quite stringent.

          • HobbitFoot @thelemmy.club · 2 points · 1 year ago

            And I’m not denying it. However, it takes a very high bar to get someone convicted of vehicular manslaughter and that usually requires evidence that the driver was grossly negligent.

            If you can show that a computer can drive as well as a sober human, where is the gross negligence?

        • rambaroo@lemmy.world · 10 points · 1 year ago

Because there’s no valid excuse to prevent us from auditing their software, and it could save lives. Why the hell should we allow them to use the road if they won’t even let us inspect the engine?

          A car isn’t a human. It’s a machine, and it can and should be inspected. Anything less than that is pure recklessness.

          • HobbitFoot @thelemmy.club · 1 point · 1 year ago

Why the hell should we allow them to use the road if they won’t even let us inspect the engine?

            How do you think a car gets approved right now? Do we take it apart? Do we ask for the design calculations of how they designed each piece?

That isn’t what happens. There is no “audit” of the parts or the whole. Instead, there is a series of tests to determine road worthiness that everything in a car has to pass. We’ve already accepted a black box for the electronics of a car. You don’t need to get approval of your code to show that pressing the brake pedal causes the brake lights to turn on; they just test it to make sure that it works.

We already don’t audit the code for life-critical software. It is all liability taken on by the manufacturers and verified via government testing of the finished product. What is an audit going to accomplish when we don’t do it already?
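The brake-light point above is essentially black-box conformance testing. A minimal sketch of that idea, with a hypothetical stand-in for the sealed module (no real regulator's procedure or API is being described here):

```python
# Black-box testing sketch: we never read the module's source code,
# we only check that required inputs produce the required observable
# outputs, as with the brake-pedal / brake-light example above.

def black_box_test(controller, cases):
    """Run each (input, expected) pair against the opaque controller.

    Returns a list of (input, expected, observed) tuples for failures;
    an empty list means the module passed every check.
    """
    failures = []
    for stimulus, expected in cases:
        observed = controller(stimulus)
        if observed != expected:
            failures.append((stimulus, expected, observed))
    return failures

# Hypothetical stand-in for a sealed brake-light module; in reality
# this would be the vehicle electronics under test, not inspectable code.
def sealed_brake_module(pedal_pressed):
    return "lights_on" if pedal_pressed else "lights_off"

cases = [(True, "lights_on"), (False, "lights_off")]
print(black_box_test(sealed_brake_module, cases))  # [] -> passes this check
```

The design point mirrors the comment: the tester never sees inside `sealed_brake_module`; road-worthiness is judged entirely from observed behavior.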

      • kep@lemmy.world · 16 points · 1 year ago

        Strawman arguments can be factual. The entire point is that you’re responding to something that wasn’t the argument. You’re putting words in their mouth to defeat them instead of addressing their words at face value. It is the definition of a strawman argument.

      • donalonzo@lemmy.world · 11 points · 1 year ago

It is most definitely a strawman to frame my comment as considering the companies “infinitely altruistic”, no matter what lies behind the strawman. It doesn’t refute my statistics but rather tries to make me look like I’m making an extremely silly argument I’m not making, which is the definition of a strawman argument.

        • rambaroo@lemmy.world · 6 points · 1 year ago

          The data you cited comes straight from manufacturers, who’ve repeatedly been shown to lie and cherry-pick their data to intentionally mislead people about driverless car safety.

So no, it’s not a strawman argument at all to claim that you’re putting inordinate faith in manufacturers, because that’s exactly what you did. It’s actually incredible to me how many of you are so irresponsible that you’re not even willing to do basic cross-checking against an industry that is known for blatantly lying about safety issues.

      • vinnymac@lemmy.world · 4 points · 1 year ago

It may be the case that not every line of code in every self-driving vehicle is available for public audit. But neither is the “instruction set” of any human who was taught to drive and is on the road today.

I would hope that through protests and new legislation we will see the industry become safer over time, something we will simply never be able to achieve with human drivers.