Troubleshooting

Supporting a voice interface with a Meya bot is easy and requires no special steps. However, designing a great user experience for voice interfaces such as Google Home requires special consideration compared to text interfaces. This page answers common questions that come up when building an effective voice-enabled Meya bot. Be sure to also read the best practices Google recommends for designing voice apps.

Actions on Google is not picking up speech the way I want it to

Meya aims to provide known input strings automatically to bias user speech recognition in the expected direction. Currently, bias input strings are picked up from cms_nlu (taking strings from CMS data) and from keyword triggers. Additionally, the meya.input_cms component provides dynamic in-dialog phrases. To provide explicit speech bias information within a conversation, use the input_biases property:

states:
    low_or_high:
        component: meya.input_string
        properties:
            text: "Jump low or high?."
            input_biases:
              # Provide input bias from CMS data
              - cms: my_space.jumping_options
              # Provide literal strings for input bias
              - "High"
              - "Low"
    # ... additional states that process the user's reply ...

The conversation closes prematurely or when input is not understood

Text interfaces such as Messenger or Twitter naturally remain open to further input whenever the user wants to provide it. However, since voice interfaces represent an active, short-lived conversation with the user, it is important for interfaces such as Google Home to know when the conversation has ended.

In the Meya platform, the conversation is considered closed unless an input component was used, or the expect_user_action property was set to true on a component that was invoked. To handle input that is not understood, add a catch-all flow such as the following:

states:
    say_try_again:
        component: meya.text
        properties:
            text: "I didn't understand that. Try saying 'book appointment'."
            expect_user_action: true
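
Conversely, using an input component keeps the conversation open while the bot waits for the user's reply, so expect_user_action is not needed. A minimal sketch (the state name and prompt text are illustrative):

states:
    ask_appointment:
        # An input component implicitly keeps the conversation open
        # until the user responds
        component: meya.input_string
        properties:
            text: "What time would you like to book?"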

I want to render a card on a hybrid text/voice integration

Card support on voice integrations such as Actions on Google is still being explored. We are happy to hear comments at [email protected] if this feature is important to you.

I want to provide an alternate speech representation for the displayed text

Meya passes along the text representation of messages for speech synthesis by default. To override this behaviour, you can use the speech field wherever the text field can be provided. This field accepts SSML markup to customize pronunciation.
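
For example, here is a minimal sketch (the state name, wording, and SSML content are illustrative):

states:
    say_order_number:
        component: meya.text
        properties:
            # Displayed on text interfaces
            text: "Your order number is 1234."
            # Spoken on voice interfaces; SSML controls pronunciation
            speech: '<speak>Your order number is <say-as interpret-as="characters">1234</say-as>.</speak>'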

I'm getting an error message I don't understand

Check the deployment logs to verify the deployment went as expected:

  1. Go to the bot's Actions on Google integration page.
  2. Click the See deployment logs button.
  3. Expand the install log section of the log event.

Ensure that no errors appear in the deployment log.


What’s Next

Have you done the Actions on Google tutorial yet? Check it out below.