Author Topic: Beyond the game - VA in the real world  (Read 30935 times)

Mike308

  • Newbie
  • *
  • Posts: 48
Beyond the game - VA in the real world
« on: December 29, 2017, 10:06:16 PM »
This is, in part, a follow-up to foxpur's post on the brilliant use of VA to improve quality of life: http://voiceattack.com/SMF/index.php?topic=1084.0

Like many who read that brief thread, I came away wondering how they did this or that, but more so wondering how far one could take such an effort. It struck me that I should pose the question here in hopes of gathering suggestions, stories of successful or failed experiments, etc.

Some of the ideas that strike me as worth putting on the list:

1 - integrating with existing home automation kits like iHome or other modern modular systems

2 - working with smart IoT devices like a NEST thermostat

3 - working with applications like iTunes or a Mail/Messaging client or Calendar
 
4 - working with a land-line or mobile phone (something I did long ago in my phlink days...). This has a notable impact on the ability to place an emergency call when you are on the floor and unable to reach a phone. TTS or pre-recorded data could be triggered to provide a 911 operator with a wealth of vital static info (blood type, DOB, medical history, etc.)

5 - creating a voice-driven RSS reader for things like news feeds (though presumably any text content would work) - see the rough sketch just below this list
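For number 5, to make the idea concrete: something as small as the script below could be launched from VoiceAttack's "Run an application" action. This is only a sketch, in Python purely for brevity; the feed URL is a placeholder, and the feedparser and pyttsx3 packages are my own assumptions, not anything VA ships with.

Code: [Select]

# read_headlines.py - sketch of a voice-triggered "read me the news" script.
# Placeholder feed URL; requires: pip install feedparser pyttsx3
import feedparser
import pyttsx3

FEED_URL = "https://example.com/news/rss"   # hypothetical feed
MAX_ITEMS = 5                               # how many headlines to read

def read_headlines(url: str = FEED_URL, limit: int = MAX_ITEMS) -> None:
    feed = feedparser.parse(url)   # fetch and parse the RSS/Atom feed
    engine = pyttsx3.init()        # offline TTS (SAPI5 voices on Windows)
    for entry in feed.entries[:limit]:
        engine.say(entry.title)    # queue each headline for speech
    engine.runAndWait()            # speak everything that was queued

if __name__ == "__main__":
    read_headlines()

A VA command like "read me the news" could simply run a script along these lines and let it talk.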

I realize that any single item above might be a huge effort, but I also know that some of these may have already been cracked (I'm hardly the first person to go down this line). I am hoping there is enough interest for this thread, or a prior thread already in the works, to serve as a collecting point for like minds on the subject. The benefits could be transformative, not just for the disabled but also for the elderly (I have a mobility-challenged parent who finds technology fascinating but overwhelming).

I'd love to see where the rabbit hole leads on this topic.
« Last Edit: December 29, 2017, 10:33:26 PM by Mike308 »

Slan

  • Global Moderator
  • Newbie
  • *****
  • Posts: 30
Re: Beyond the game - VA in the real world
« Reply #1 on: January 09, 2018, 11:37:54 AM »
Gary has already demo'd a Hue lights plugin.

It is available for download. I had to hack it a bit, but I have it working via voice commands, for the most part. Sometimes it is very sluggish to respond, but nominally it works! ;)

There's even a YT vid of him somewhere showing it off!

Does that count as a start?
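For anyone who wants to poke at the Hue side directly in the meantime, the bridge exposes a simple local REST API. Here's a rough Python sketch of the idea - this is not the plugin's own code, and the bridge address and API key are placeholders you would swap for your own:

Code: [Select]

# hue_toggle.py - rough sketch of driving a Hue light via the bridge's local REST API (v1).
# Requires: pip install requests. Bridge IP and key below are placeholders.
import requests

BRIDGE_IP = "192.168.1.2"          # hypothetical bridge address on your LAN
API_KEY = "your-bridge-username"   # key created by pressing the bridge's link button

def set_light(light_id: int, on: bool, brightness: int = 254) -> None:
    """Turn a single light on or off, optionally with a brightness level (1-254)."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{light_id}/state"
    payload = {"on": on}
    if on:
        payload["bri"] = brightness
    requests.put(url, json=payload, timeout=5).raise_for_status()

if __name__ == "__main__":
    set_light(1, True, 128)   # e.g. a "lights on, half brightness" voice command

A VoiceAttack command could run a script like this; presumably the plugin does something equivalent internally.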

Mike308

  • Newbie
  • *
  • Posts: 48
Re: Beyond the game - VA in the real world
« Reply #2 on: January 13, 2018, 09:29:44 PM »
It does indeed, I will go give it a look. Many thanks!   :D

Mike308

  • Newbie
  • *
  • Posts: 48
Re: Beyond the game - VA in the real world
« Reply #3 on: February 06, 2018, 11:18:23 AM »
Forgot to post the link to the above vid, in case anybody wants to check it out.

https://www.youtube.com/watch?v=L77Lne93HLI


ralf44

  • Newbie
  • *
  • Posts: 41
Re: Beyond the game - VA in the real world
« Reply #4 on: April 13, 2018, 04:08:48 PM »
Well, the number-one uses for Alexas and the like are "set alarm," "start countdown timer," and "play music," all of which took me about an hour to replicate on my first day using VA!

The command that took the longest to make so far was "Tell me a Zen Story," because I manually found around 40 of them and then tweaked the spelling and punctuation for hours until the Microsoft Text-to-Speech sounded good.

I'm working on a theatrical performance with VA as the other actor on stage - and the actual difficulty is not the scripting or working out how to do anything; that's all simple and elegant. It's trying to bulletproof the system with backups and duplicate hardware, because running several "mission critical" apps on Windows is the kind of thing that's likely to go wrong. :)

If you're fitting a Smart Home, especially for someone disabled or blind, you have got to plan from the outset for contingencies that involve a Microsoft product crashing. I'm sure we've all been to train platforms, ATMs and such where they weren't ready for the Blue Screen of Doom!

In terms of what might be cool to automate in a home, we are all still lagging behind Monsieur Robert-Houdin. His ideas, which he was able to implement with 1800s technology, included having whole sets of lighting change on an imperceptible cue for dramatic effect and being notified of guests while they were still a mile from arriving...

Le Prieuré, organisations mystérieuses pour le confort et l'agrément d'une demeure (1867) - roughly, "The Priory: mysterious arrangements for the comfort and enjoyment of a home." His other books are excellent too; he originally trained as a watchmaker and developed complex automatons and illusions.
« Last Edit: April 13, 2018, 04:25:09 PM by ralf44 »

Mike308

  • Newbie
  • *
  • Posts: 48
Re: Beyond the game - VA in the real world
« Reply #5 on: May 23, 2018, 11:09:18 AM »
Well that intrigues me and gives me all sorts of hope. Here's why:

My desire was to have a consistent, hm, "persona" reflected in my pseudo-AI house system. While a voicepack approach delivers excellent enunciation and emotion, you cannot have pre-recorded messages for every question. I too spent hours doing SSML markup on passages for local, on-the-computer TTS; the results were notably better, but the effort was enough to convince me that I didn't want to do that as an ongoing solution. That leaves the cloud-based Alexa/Siri voices, which now articulate with increasingly human delivery, but then you add time lag to every response and open up that whole privacy thing. But your reply (and thanks, by the way) sparks a thought.

A large body of responses can be canned, which is to say fixed responses to simple questions. I could manually prompt an Alexa/Siri to read those back to me, record them, and save them as insta-playback resources that never again need the cloud.

If I can write code that passes a query to an Alexa/Siri for more open-ended questions (what is the weather, etc.), then I get a couple of benefits (a rough sketch of the whole flow follows the list):

1. Privacy - I don't have an always-on Alexa. If I can selectively pass data to an Alexa engine through VA, I am very happy

2. Consistency - In theory there should be zero appreciable difference in sound between the canned/recorded replies that snap back instantly and the queries actually passed out to the cloud.
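To make that concrete, here is the rough shape of the flow I have in mind - a sketch only, in Python for illustration. The canned-audio folder and the ask_cloud() stub are placeholders; the cloud half is exactly the part I don't have yet:

Code: [Select]

# reply.py - sketch of the canned-first, cloud-fallback idea described above.
import os
import winsound   # Windows standard library; VA's own "Play a sound" action would also work

CANNED_DIR = r"C:\VA\canned"   # hypothetical folder of pre-recorded WAV replies

def respond(phrase: str) -> None:
    # 1) Prefer an instant, pre-recorded reply: no cloud, no lag, consistent voice.
    wav = os.path.join(CANNED_DIR, phrase.lower().replace(" ", "_") + ".wav")
    if os.path.exists(wav):
        winsound.PlaySound(wav, winsound.SND_FILENAME)
        return
    # 2) Otherwise hand the open-ended query to whatever cloud voice you trust.
    answer_wav = ask_cloud(phrase)   # placeholder for the Alexa/Siri round trip
    winsound.PlaySound(answer_wav, winsound.SND_FILENAME)

def ask_cloud(phrase: str) -> str:
    """Stub: send the query to a cloud assistant and return a path to its audio reply."""
    raise NotImplementedError("depends on which cloud service gets wired up")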

Can you share the code you used to make a simple Alexa/Siri query, and how you physically connect to do so?

Many thanks,

Mike

menacslude9

  • Newbie
  • *
  • Posts: 6
Re: Beyond the game - VA in the real world
« Reply #6 on: July 01, 2020, 04:03:55 PM »
Quote from: Mike308 on May 23, 2018, 11:09:18 AM
1. Privacy - I don't have an always-on Alexa. If I can selectively pass data to an Alexa engine through VA, I am very happy

Keep in mind that VA runs on Microsoft's speech recognition engine. I don't think this will help privacy-wise.

Pfeil

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 4792
  • RTFM
Re: Beyond the game - VA in the real world
« Reply #7 on: July 01, 2020, 04:08:24 PM »
VoiceAttack uses the offline Microsoft Speech Recognition engine, not Cortana.

As far as I'm personally aware no data is sent to Microsoft unless you explicitly choose to share it (which is asked at the end of a speech recognition training session, and can be denied by clicking "Don't Send").