If AI and deepfakes can listen to video or audio of a person and then convincingly reproduce that person, what does this mean for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony. But soon enough, perfect forgeries could enter the courtroom, just as they are entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using genuine video or audio recordings as evidence? Or are we just doomed?

  • logos@sh.itjust.works · 1 day ago

    Fake evidence, e.g. forged documents, is nothing new. Courts take things like origin, chain of custody, etc. into account.
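    One technical analogue of chain of custody is cryptographic hashing: if a recording's digest is logged when the evidence is first collected, any later edit to the file is detectable. A minimal sketch, assuming a workflow where the digest is recorded at intake (the byte strings and workflow here are purely illustrative):

    ```python
    import hashlib

    def sha256_of(data: bytes) -> str:
        """Return the SHA-256 hex digest of a byte string."""
        return hashlib.sha256(data).hexdigest()

    # Hypothetical intake: hash the recording when it enters custody.
    original = b"raw audio bytes captured at the scene"
    logged_digest = sha256_of(original)

    # At trial, re-hash the file: an unmodified copy matches the log,
    # while any splice or edit produces a different digest.
    tampered = original + b" spliced-in segment"
    assert sha256_of(original) == logged_digest
    assert sha256_of(tampered) != logged_digest
    ```

    This only proves the file hasn't changed since it was logged; it says nothing about whether the original capture was itself genuine, which is where provenance and witness testimony still matter.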

    • ColeSloth@discuss.tchncs.de · 22 hours ago

      Sure, but if you meet up with someone and they later produce an audio recording that is completely fabricated from the real audio, there's no chain of anything. Audio used to be damning evidence, and it was fairly easy to detect when a recording had been spliced together to sound different. If that goes away, audio just becomes useless as evidence.