It wasn’t easy talking about the Muah.AI data breach. It’s not just the rampant child sexual abuse material throughout the system (or at least requests for the AI to generate images of it), it’s the reactions of people to it. The tweets justifying it on the basis of there being no “actual” abuse, the characterisation of this as being akin to “merely thoughts in someone’s head”, and following my recording of this video, the backlash from their users about any attempts to curb creating sexual images of young children being “too much”:
Which is making customers unhappy – “any censorship is too much”: pic.twitter.com/fzfrFdKL8w
— Troy Hunt (@troyhunt) October 12, 2024
The law will catch up with this (and anyone in that breach creating this sort of material should feel very bloody nervous right now), and the writing is already on the wall for people generating CSAM via AI:
This bill would expand the scope of certain of these provisions to include matter that is digitally altered or generated by the use of artificial intelligence, as such matter is defined.
The bill can’t pass soon enough.
References
- The Muah.AI data breach revealed an enormous volume of requests for CSAM (you can hear me struggling to even properly explain this, it’s just hard to find the words)
- Internet Archive was breached, defaced and DDoS’d (4 days on from that tweet thread, they’re still offline)
- National Public Data – the service that siphoned up hundreds of millions of social security numbers then exposed them all in a breach – is dead (now, how many more of these are left?)