AI's Jurassic Park moment

Speaking about AI tools like ChatGPT, DALL-E, and Lensa:

It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.

Why?

  • these systems are inherently unreliable, frequently making errors of both reasoning and fact…
  • they can easily be automated to generate misinformation at unprecedented scale…
  • they cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero…

I feel a bit like an old guy yelling at a cloud here, but I am genuinely concerned about tools like this. Given how poorly we as a society have used the tech we’ve created thus far, I’m not sure we’ll do much better with the next round of advancements.

Nation-states and other bad actors…are likely to use large language models as a new class of [weapon]…For them, the…unreliabilities of large language models are not an obstacle, but a virtue…

[they aim to create a] fog of misinformation [that] focuses on volume, and on creating uncertainty…They are aiming to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.