misk@sopuli.xyz to Technology@lemmy.world · English · 3 hours ago
Concerns about medical note-taking tool raised after researcher discovers it invents things no one said — Nabla is powered by OpenAI's Whisper (www.tomshardware.com)
cross-posted to: technology@lemmy.zip
RobotToaster@mander.xyz · 2 hours ago
How can it be that bad? I've used Zoom's AI transcriptions for far less mission-critical stuff, and it's generally fine (I still wouldn't trust it for medical purposes).
huginn@feddit.it · 2 hours ago
Zoom AI transcriptions also make things up. That's the point: they're hallucination engines. They pattern-match and fill holes by design. It doesn't matter if the match isn't perfect; it will patch it over with nonsense instead.
ElPussyKangaroo@lemmy.world · 2 hours ago
It's not the transcripts that are the issue here. It's that the transcripts are being interpreted by the model to give information.