So JohnL threw out "I'd love to be able to talk to the Eigenharp" at the DevCon, and I was somewhat skeptical, having done stuff with voice-control systems before and found it extremely frustrating. Having said that, I thought I'd give it a go, and I now have a fairly plausible non-Agent based thing listening and spitting out Belcanto.
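For context, the non-Agent prototype is conceptually just: capture audio from the microphone, run it through a recognizer, and map the recognized words onto Belcanto. Something along these lines (a minimal sketch using the Python speech_recognition package; the word map is purely illustrative, not real Belcanto, and this isn't verbatim what I'm running):

```python
# Minimal sketch of a stand-alone "talk to the Eigenharp" loop:
# capture speech, recognize it, translate the words to Belcanto.
import speech_recognition as sr

# Illustrative mapping from spoken English to Belcanto words.
WORD_MAP = {
    "metronome": "metronome",
    "start": "start",
    "stop": "stop",
    "tempo": "tempo",
}

def listen_for_phrase(recognizer, microphone):
    """Capture one utterance and return the recognized text, or None."""
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return None  # speech was unintelligible

def to_belcanto(text):
    """Translate recognized words to a Belcanto phrase, dropping unknowns."""
    words = [WORD_MAP[w] for w in text.lower().split() if w in WORD_MAP]
    return " ".join(words) or None

if __name__ == "__main__":
    r, mic = sr.Recognizer(), sr.Microphone()
    while True:
        text = listen_for_phrase(r, mic)
        if text:
            print("heard:", text, "->", to_belcanto(text))
```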
There are a number of things to discuss about the best way of doing it before I endeavor to make it a real Agent:
- I don't have an Alpha with a microphone, only a Tau & Pico, so I can't test it effectively. Even so, I'd suggest (as a matter of principle) that the Agent should be available to Tau/Pico players too. Is there an Agent that will take the microphone buffer from the host computer and stream it like other audio?
- If it's an Agent, can it send the Belcanto phrase straight to the interpreter via a connection (i.e. does it have an output port)? If so, what's the data structure? Can someone point me in the right direction?
- There need to be at least two other outputs: a recognition status (did it understand the speech?) and a success/fail (did the phrase get recognized?). Are these just statusdata_t again? There's a rough sketch of what I mean just after this list.
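To make that last point concrete, this is roughly the shape I'm imagining for the Agent's outputs. It's pseudo-Python only, not the real EigenD agent API (which is exactly what I'm asking about), and the field names are mine:

```python
# Conceptual shape of the proposed speech Agent's outputs (not the EigenD API).
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechAgentOutput:
    belcanto: Optional[str]  # phrase to send to the interpreter, if any
    understood: bool         # recognition status: did the recognizer get words?
    accepted: bool           # success/fail: did the phrase get recognized?

# A successfully recognized command.
ok = SpeechAgentOutput(belcanto="metronome start", understood=True, accepted=True)

# The recognizer heard something, but it didn't map to a valid phrase.
bad = SpeechAgentOutput(belcanto=None, understood=True, accepted=False)
```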
Started a wiki page for the spec