Microsoft has raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting, and the company's Copilot AI can pull answers from your emails, Teams chats, and files, a potential productivity boon. But those same processes can also be abused by hackers.

At the Black Hat security conference in Las Vegas, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs in Microsoft 365 apps such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate some private data, and dodge Microsoft's security protections.

Arguably the most alarming demonstration is Bargury's ability to turn the AI into an automatic spear-phishing machine. Dubbed LOLCopilot, the red-teaming code Bargury created requires, crucially, that a hacker already have access to someone's work email. From there, it can use Copilot to see who you email regularly, draft a message mimicking your writing style (including your emoji use), and send a personalized blast that can include a malicious link or attached malware.

  • N0body@lemmy.dbzer0.com · 3 months ago

    The schadenfreude is still palpable with every “AI turns out to be billion-dollar snake oil” story, especially the extra spicy ones like this.

    The wealthy people who run the world are sociopathic morons. Tell them you have a miracle way to fire all their human workers, and they will give you unlimited money and trust.

    • LadyMeow · 3 months ago

      Except it's not really snake oil, it's more like a dystopia machine.