Voice AI adoption in bilingual (English–Spanish) markets is currently limited by Spanish voice quality.
Even when a Spanish-language agent is selected, the current TTS output sounds choppy and unnatural, which reduces trust with real callers. This is not a prompt or configuration issue; it is a voice prosody limitation.
To unlock wider adoption of Voice AI (auto dealerships, home services, healthcare, etc.), we need at least one of the following:
Option A:
• Add true Spanish-native neural voices (male & female), especially Latin American Spanish, with natural prosody and pacing.
Option B:
• Allow optional integration with external TTS providers (e.g., Azure, Google) for Voice AI, while still keeping core logic and call handling inside GHL.
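To make Option B concrete, here is a minimal sketch of what pulling Spanish audio from an external provider could look like, using Google Cloud Text-to-Speech purely as an example (Azure and others expose equivalent calls). This is not GHL's API; the voice name, greeting text, and output format below are assumptions for illustration only.

```python
# Minimal sketch (not GHL's actual API): synthesize a Latin American Spanish
# greeting via Google Cloud Text-to-Speech to show what an optional
# external-provider hook could produce.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# Hypothetical caller-facing greeting in Spanish.
synthesis_input = texttospeech.SynthesisInput(
    text="Hola, gracias por llamar. ¿En qué puedo ayudarle hoy?"
)

# Request a Latin American Spanish (es-US) neural voice; the exact voice
# name is illustrative and should be checked against the provider's
# current catalog.
voice = texttospeech.VoiceSelectionParams(
    language_code="es-US",
    name="es-US-Neural2-A",
)

# MP3 keeps the example simple; a real telephony integration would more
# likely need 8 kHz mu-law or linear PCM audio.
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("spanish_greeting.mp3", "wb") as out:
    out.write(response.audio_content)
```

The point of the sketch is that major providers already ship Latin American Spanish neural voices with natural prosody, so an optional bring-your-own-TTS hook would let bilingual accounts use them now while GHL keeps call routing and core logic in-house.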
English voice quality is solid and only needs a bit more human-like realism; Spanish is the current blocker.
Improving this would significantly increase usage in bilingual regions of the United States and expand Voice AI revenue opportunities.
Happy to help test or provide examples if needed.