outer_spec@lemmy.blahaj.zone to 196@lemmy.blahaj.zone · 11 months ago
Ruletanic (image) · 7 comments · 177 points
Norah - She/They@lemmy.blahaj.zone · 11 months ago: Hope you like 40-second response times unless you use a GPU model.
JDubbleu@programming.dev · 11 months ago: I’ve hosted one on a Raspberry Pi and it took at most a second to process and act on commands. Basic speech-to-text doesn’t require massive models and has become much less compute-intensive in the past decade.
Norah - She/They@lemmy.blahaj.zone · 11 months ago: Okay, well, I was running faster-whisper through Home Assistant.