If AI and deepfakes can listen to video or audio of a person and then successfully reproduce that person’s voice and likeness, what does this entail for trials?

It used to be that an audio or video recording provided strong evidence, often weighing more than witness testimony, but soon enough perfect forgeries could enter the courtroom just as they’re already entering social media (where you’re not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony and evidence.

How will we defend ourselves while still being able to use genuine video or audio as proof? Or are we just doomed?

  • RangerJosie@lemmy.world · 26 days ago

    We’re not. It’s going to upend our already laughably busted “justice” system to new, unknown heights of cartoonish malfeasance.

  • logos@sh.itjust.works · 27 days ago

    Fake evidence, e.g. forged documents, is not a new thing. Courts already take things like origin and chain of custody into account.

    • ColeSloth@discuss.tchncs.de · 26 days ago

      Sure, but if you meet up with someone and they later produce an audio recording of the conversation that is completely fabricated, there’s no chain of custody to speak of. Audio used to be damning evidence, and it was fairly easy to detect when a recording had been spliced together to sound like something else. If that goes away, audio just becomes useless as evidence.

      • GamingChairModel@lemmy.world · 26 days ago

        You can’t just use an audio file by itself. It has to come from somewhere.

        The courts already have a system in place: someone seeking to introduce a screenshot of a text message, a printout of a webpage, a VHS tape with video, or a plain audio file has to get it admitted as evidence through someone who testifies that it is real and accurate, with an opportunity for the other side to question and even investigate where it came from and how it was made, stored, and copied.

        If I just show up to a car accident case with an audio recording that I claim is the other driver admitting he forgot to look before turning, that audio is going to do basically nothing unless and until I can show why I was recording while talking to him, why I didn’t give the recording to the police who wrote the accident report that day, etc. And even then, the other driver can say “that’s not me and I don’t know what you think that recording is,” and we’re back to a credibility problem.

        We didn’t need AI to do impressions of people. This has always been a problem, or a non-problem, in evidence.

      • hypna@lemmy.world · 26 days ago

        It becomes useless as evidence unless you can establish authenticity. That just puts audio recordings in a class with text documents: perfectly fakeable, but admissible with the right supporting information. So I agree it’s a change, but it’s not the end of audio evidence, and it’s a change in a direction in which courts already have experience.

  • Randomgal@lemmy.ca · 26 days ago

    A bit dramatic, IMO. For most of legal history we didn’t have recorded video or audio at all, and while they are great tools at present, they are still not the silver bullet people expect them to be at trial. (Think Trump and his cucks.) Furthermore, most poor people try to avoid being recorded while committing crimes.

    It will probably mean that focus will shift to other kinds of evidence and evidence-gathering methods. But definitely not the end of law as we know it, far from it.

    • ConstipatedWatson@lemmy.world (OP) · 25 days ago

      Right, nobody wants to appear in a video implicating them in a crime, but I was wondering what would happen if fake videos of a person surfaced implicating them in a crime that never actually took place.

  • SirEDCaLot@lemmy.today · edited · 26 days ago

    Eventually, we will just have to accept that photographic proof is no longer proof.

    There are ways you could guarantee an image is valid. You would need a hardware security module inside the camera, which signs a hash of the picture with its own built-in signing key that can’t be extracted, along with a serial number it generates. That can prove an image came from a particular camera, and if you change even one pixel of that image, the signature won’t match anymore. I don’t see this happening anytime soon, not mainstream at least. One or two camera manufacturers offer this as a feature, but it’s not on things like surveillance cameras or cell phones, nor will it be anytime soon.
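
    To make that concrete, here’s a minimal sketch of the sign-and-verify flow in Python, assuming an Ed25519 key as a stand-in for the camera’s non-extractable hardware key (it uses the `cryptography` package; the serial-number scheme is illustrative, not any vendor’s actual design):

    ```python
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for a key burned into the camera's secure element at manufacture;
    # in real hardware the private half would never be readable.
    device_key = Ed25519PrivateKey.generate()
    device_pub = device_key.public_key()
    SERIAL = b"CAM-0001"  # hypothetical per-device serial number

    def sign_image(image_bytes: bytes) -> bytes:
        """Camera side: sign the serial number plus a SHA-256 hash of the pixels."""
        digest = hashlib.sha256(SERIAL + image_bytes).digest()
        return device_key.sign(digest)

    def verify_image(image_bytes: bytes, signature: bytes) -> bool:
        """Verifier side: recompute the digest and check it against the signature."""
        digest = hashlib.sha256(SERIAL + image_bytes).digest()
        try:
            device_pub.verify(signature, digest)
            return True
        except InvalidSignature:
            return False

    photo = b"raw sensor data"
    sig = sign_image(photo)
    print(verify_image(photo, sig))                          # True
    print(verify_image(photo.replace(b"raw", b"fak"), sig))  # False: one change breaks it
    ```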

    • ConstipatedWatson@lemmy.world (OP) · 25 days ago

      True, sooner or later there may be ways to make sure a picture or video is digitally signed, and that would probably be very hard to crack, but theoretically a fake video might still pass as real (though it would take a lot of resources to make that happen).

      • SirEDCaLot@lemmy.today · 25 days ago

        More likely, most of the sources that produce photos and videos would not be using the digital signatures. Professional cameras for journalists probably would have the signature chip. Cheapo Chinese surveillance cameras? Unlikely.

  • tlou3please@lemmy.world · 27 days ago

    As someone who works in the field of criminal law (in Europe, and I would be shocked if it wasn’t the same in the US) - I’m not actually very worried about this. By that I don’t mean to say it’s not a problem, though.

    The risk of evidence being tampered with or outright falsified is something that already exists, and we know how to deal with it. What AI will do is lower the technical barrier to doing it, making the practice more common.

    While it’s pretty easy for most AI images to be spotted by anyone with some familiarity with them, they’re only going to get better and I don’t imagine it will take very long before they’re so good the average person can’t tell.

    In my opinion this will be dealt with via two mechanisms:

    • Automated analysis of all digital evidence for signatures of AI as a standard practice (a rough sketch of what such a pipeline might look like follows this list). Whoever is first to land contracts with police departments to provide bespoke software for quick forensic AI detection is going to make a lot of money.

    • A growth in demand for digital forensics experts who can provide evidence on whether something is AI generated. I wouldn’t expect them to be consulted on all cases with digital evidence, but for it to become standard practice where the defence raises a challenge about a specific piece of evidence during trial.
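
    Purely to illustrate the first mechanism, here’s a minimal Python sketch of what such a scanning pipeline might look like. Everything in it is hypothetical: `score_ai_likelihood` stands in for whatever trained forensic model a vendor would actually ship, and the threshold is arbitrary.

    ```python
    import hashlib
    from pathlib import Path

    AI_FLAG_THRESHOLD = 0.8  # illustrative cutoff, not a calibrated value

    def score_ai_likelihood(data: bytes) -> float:
        """Hypothetical stand-in for a trained detector returning P(AI-generated)."""
        return 0.0  # a real system would load a vendor-supplied forensic model here

    def scan_evidence(evidence_dir: str) -> list[dict]:
        """Hash every file (for the audit trail) and flag likely AI content."""
        report = []
        for path in Path(evidence_dir).rglob("*"):
            if not path.is_file():
                continue
            data = path.read_bytes()
            score = score_ai_likelihood(data)
            report.append({
                "file": str(path),
                "sha256": hashlib.sha256(data).hexdigest(),  # ties the result to one exact file
                "ai_score": score,
                "needs_expert_review": score >= AI_FLAG_THRESHOLD,
            })
        return report
    ```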

    Other than that, I don’t think the current state of affairs when it comes to doctored evidence will particularly change. As I say, it’s not a new phenomenon, so countries already have the legal and procedural framework in place to deal with it. It just needs to be adjusted where needed to accommodate AI.

    What concerns me much more than the issue you raise is the emergence of activities which are uniquely AI-dependent and need legislating for. For example, how does AI-generated porn of real people fit into existing legislation on sex offences? Should it be an offence? Should it be treated differently from porn of someone drawn by hand? Would that include manually created digital images made without AI? If it’s not illegal in general, what about when it depicts a child? Is it the generation of the image that should be regulated, or the distribution?

    That’s just one example. AI-enabled fraud is a whole can of worms in itself, legally speaking. These are questions that, in my opinion, are beyond the remit of the courts and will require direction from central governments and fresh, tailor-made legislation.

    • ryathal@sh.itjust.works · 24 days ago

      My bigger concern is the state using AI-created fake evidence. That is far harder to stop, as false confessions and coerced confessions are already a problem. The process can’t really catch it, because the people in charge of the process are the ones doing it.

  • LesserAbe@lemmy.world · 26 days ago

    I think other answers here are more essential - chain of custody, corroborating evidence, etc.

    That said, Leica has released a camera that digitally signs its images, and other manufacturers are working on similar things. That will allow people to verify whether an image is original or has been edited. From what I understand, Leica also has a scheme where you can sign images when you update them, so there’s a whole chain of documentation. Here’s a brief article.
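
    From what’s been published, Leica’s scheme is based on the C2PA “Content Credentials” standard. As a generic sketch (not Leica’s actual format), a chain of signed edits could look something like this, with each manifest binding the current pixels to a hash of the previous signed manifest:

    ```python
    import hashlib
    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signer = Ed25519PrivateKey.generate()  # stand-in for the camera's or editor's key

    def make_manifest(image_bytes: bytes, action: str, prev: dict | None) -> dict:
        """Bind the current image state to the previous signed manifest, then sign."""
        body = {
            "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
            "action": action,  # e.g. "captured", "cropped", "color-corrected"
            "prev_manifest_sha256": (
                hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
                if prev is not None else None
            ),
        }
        payload = json.dumps(body, sort_keys=True).encode()
        return {"body": body, "signature": signer.sign(payload).hex()}

    original = make_manifest(b"raw capture", "captured", None)
    edited = make_manifest(b"cropped pixels", "cropped", original)
    # A verifier can walk the chain backwards: edited -> original -> camera key.
    ```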

    • ConstipatedWatson@lemmy.world (OP) · 25 days ago

      Cameras with stronger security will become more and more important, though on a theoretical level they could still be cracked or forged. I suppose it’s the usual cat-and-mouse game.

    • calcopiritus@lemmy.world · 24 days ago

      Hardware signing stuff is not a real solution. It’s security through obscurity.

      If someone has access to the hardware, they technically have access to the private key that the hardware uses to sign things.

      A determined malicious actor could take that key and sign whatever they want to.

    • andrew_bidlaw@sh.itjust.works · 26 days ago

      It’s an interesting experiment, but why would we trust everything that Leica supposedly verified? It’s the same with digital signatures and blockchain stuff. We’re at the gates of a world where we have zero trust by default and only intentionally outsource verification to third parties we trust, because the penalties for mistakes grow every day.

      • LesserAbe@lemmy.world · 26 days ago

        Well, as I said, I think there’s a collection of things we already use for judging what’s true; this would just be one more tool.

        A cryptographic signature (in the original sense, not just the Bitcoin sense) means that only someone who possesses a certain digital key is able to sign something. In the case of a digitally signed photo, it verifies “hey I, key holder, am signing this file”. And if the file is edited, the signed document won’t match the tampered version.

        Is it possible someone could hack and steal such a key? Yes. We see this with certificates for websites, where some bad actor is able to impersonate a trusted website. (And of course when NFT holders get their apes stolen)

        But if something like that happened it’s a cause for investigation, and it leaves a trail which authorities could look into. Not perfect, but right now there’s not even a starting point for “did this image come from somewhere real?”

  • Call me Lenny/Leni@lemm.ee · 26 days ago

    A camera can only show us what it sees; it doesn’t objectively dictate a viewer’s interpretation. I remember some of us being called down to the principal’s office (back before the age of footage-based scandals), where they might say “we saw you on the camera doing something against the rules,” only to be answered with “that’s not me, I have an alibi,” or “that’s not me, I wouldn’t wear that jacket,” or “that’s not me, I can’t do that person’s accent” (the serial slander I’ve been subjected to being a prime example of where that would be the case). In terms of the process, you might say camera footage is witness testimony from a machine, and machines have only just started getting into the habit of not being very honest with the humans in the court. I remember my first lie.

  • AbouBenAdhem@lemmy.world · 27 days ago

    Maybe each camera could have a unique private key that it could use to watermark keyframes with a hash of the frames themselves.
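
    As a rough sketch of how that could work (using detached signatures rather than an embedded watermark, and assuming an Ed25519 per-camera key), chaining each keyframe’s hash into the next signature also makes dropped or reordered keyframes detectable, which signing each frame independently would not:

    ```python
    import hashlib

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()  # unique per camera, non-extractable in real hardware

    def sign_keyframes(keyframes: list[bytes]) -> list[bytes]:
        """Sign each keyframe's hash chained with the previous signature."""
        signatures = []
        prev_sig = b""
        for frame in keyframes:
            digest = hashlib.sha256(prev_sig + frame).digest()
            prev_sig = camera_key.sign(digest)
            signatures.append(prev_sig)
        return signatures

    sigs = sign_keyframes([b"keyframe-0", b"keyframe-1", b"keyframe-2"])
    # Verification walks the same chain with the camera's public key; any altered,
    # missing, or reordered keyframe breaks every signature from that point on.
    ```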

    • OsrsNeedsF2P@lemmy.ml · 27 days ago

      Usually when I see non-technical people throw out ideas like this they’re stupid, but I’ve been thinking about this one for a few minutes and it’s actually kinda smart.