Tesla knew Autopilot caused death, but didn’t fix it: Software’s alleged inability to handle cross traffic central to court battle after two road deaths

  • gamer@lemm.ee · 13 points · 1 year ago

    I remember reading about the ethical thought experiment about a hypothetical self-driving car that loses control and can either turn left and kill a child, turn right and kill a crowd of old people, or do nothing and hit a wall, killing the driver. It’s a question with no right answer, but anybody implementing a self-driving car has to answer it.

    I non-sarcastically feel like Tesla would implement this system by checking which option kills the fewest paying Xitter subscribers.
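
    Mechanically, that dilemma is just a cost function a planner would minimize. Here’s a toy sketch (every action name, outcome, and weight is invented for illustration; no real autonomy stack looks like this):

    ```python
    # Toy version of the dilemma as cost minimization. Every action
    # name, outcome, and weight here is invented for illustration;
    # a real autonomy stack is nothing like this simple.
    ACTIONS = {
        "swerve_left":  {"fatalities": 1, "who": "child"},
        "swerve_right": {"fatalities": 5, "who": "crowd of old people"},
        "stay_course":  {"fatalities": 1, "who": "driver"},
    }

    def cost(outcome):
        # The entire ethical debate hides in this one function:
        # whose deaths count, and for how much?
        return outcome["fatalities"]

    best = min(ACTIONS, key=lambda a: cost(ACTIONS[a]))
    print(best)  # "swerve_left": ties resolve by dict order,
                 # which is itself a moral choice nobody wrote down
    ```

    The entire argument is about what belongs in cost(), which is exactly where my Xitter-subscriber cynicism would get implemented.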

    • Ocelot@lemmies.world · 9 points · 1 year ago

      Meanwhile, more than a hundred people are killed in auto accidents every single day in the US. Even if a self-driving car is 1000x safer than a human driver, there will still be accidents as long as humans are sharing the same roads.
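
      To put rough numbers on that (assuming NHTSA’s ballpark of ~40,000 US road deaths per year, and taking the 1000x figure as the hypothetical it is):

      ```python
      # Back-of-the-envelope only: ~40,000/year is a rough NHTSA
      # figure, and the safety multipliers are hypothetical.
      us_road_deaths_per_year = 40_000
      print(f"human drivers: ~{us_road_deaths_per_year / 365:.0f} deaths/day")  # ~110

      for safety_factor in (10, 100, 1000):
          deaths = us_road_deaths_per_year / safety_factor
          print(f"{safety_factor}x safer: ~{deaths:,.0f} deaths/year")
      # 10x -> ~4,000, 100x -> ~400, 1000x -> ~40 per year:
      # far fewer, but never zero while humans share the road.
      ```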

      • Oderus@lemmy.world · 10 points · 1 year ago

        When a human is found to be at fault, you can punish them.

        With automated driving, who is there to punish? The company? Great. They pay a small fine and keep making millions while your loved one is gone and you get no justice.

        • CmdrShepard@lemmy.one · 8 points · 1 year ago

          People generally aren’t punished for an accident unless they did it intentionally or negligently. The better and more prevalent these systems get, the fewer the families with lost loved ones. Are you really arguing that this is a bad thing because it isn’t absolutely perfect and you can’t take vengeance on it?

          • Oderus@lemmy.world · 1 point · 1 year ago

            Generally, people are punished for causing an accident, purposefully or not. Their insurance will either raise their rates or drop them, leaving them unable to drive. That is a form of punishment you don’t get with automated driving.

            • int3ro@lemm.ee · 3 points · 1 year ago

              Of course you get the same with automated driving. Accidents will either cause the whole company’s insurance rates to rise, or the company will have to pay out of pocket. Either way, accidents carry a direct financial “punishment”, and if a car company is seen as “unsafe” (see Cruise right now), it isn’t allowed to drive (or is allowed to drive less). I don’t see a big difference from normal people. In my opinion, it’s even better after a while, because “safer” companies will push out “less safe” companies… assuming, of course, that the government regulates things properly so that a minimum level of safety is required.

            • CmdrShepard@lemmy.one · 2 points · 1 year ago

              Increased rates aren’t a punishment; they’re a risk calculation, and insurance (outside of maybe property insurance, in case a tree falls on the car, for example) may not even be needed someday if everything is handled automatically without driver input. Why are you so stuck on the punishment aspect when these systems are already preventing needless death?
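
              “Risk calculation” just means expected loss plus a margin. A minimal sketch, with invented numbers:

              ```python
              # Expected-loss pricing sketch; the probabilities and
              # claim sizes are invented, and real actuarial models
              # use far more inputs than this.
              def annual_premium(crash_prob, avg_claim, load=1.3):
                  # expected payout per policy-year, plus overhead/margin
                  return crash_prob * avg_claim * load

              print(f"human-driven:    ${annual_premium(0.05, 20_000):,.0f}/yr")   # $1,300
              print(f"automated fleet: ${annual_premium(0.005, 20_000):,.0f}/yr")  # $130
              # A post-crash rate hike is the crash_prob estimate being
              # updated, not vengeance; the same math prices a robotaxi fleet.
              ```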

    • Liz@midwest.social · 8 points · 1 year ago

      At the very least, they would prioritize the driver, because a driver who survives is likely to buy another Tesla.

    • CmdrShepard@lemmy.one · 5 points · 1 year ago

      I think the whole premise is flawed, because the car would have had to suffer numerous failures before ever reaching a point where it needs to make this decision. The dilemma applies to humans because we have free will; a computer doesn’t, it just follows its programming.