Tags: ZVÂC · VERTA · Voice AI · ElevenLabs · Product

Voice AI Is Coming to ZVÂC and VERTA: We Got an ElevenLabs Grant

Waiheke AI · April 18, 2026 · 3 min read

We've been selected to receive an ElevenLabs Grant.

That's the short version. The longer version is about what we're actually going to build with it.

What ElevenLabs does

ElevenLabs builds voice AI infrastructure — text-to-speech, speech synthesis, and conversational voice models that are genuinely good. Not robotically good. Actually good. The kind of voice that doesn't pull you out of a conversation.

They've been selective about which startups get into their grants program. We applied because we think voice is the missing layer in career coaching AI, and we're glad they agreed.

Why voice matters for what we're building

ZVÂC and VERTA are both built around conversation — asking questions, interpreting answers, giving feedback. That conversation currently happens through text. Text works. But voice changes what's possible.

There are a few specific things we want to unlock:

In ZVÂC, the AI Career Coach helps users explore career paths, identify skill gaps, and make decisions about what to do next. Those conversations are often nuanced. A student navigating a career change isn't filling out a form — they're working through uncertainty. A spoken conversation handles that better. Voice lets ZVÂC feel less like a tool and more like a session.

In VERTA, the use case is even more direct. VERTA assesses communication competencies — how people express ideas, structure arguments, handle ambiguity. Right now, that assessment is text-based. With voice, VERTA can actually hear how someone communicates, not just read what they wrote. That's a different level of signal.

Beyond that, voice support opens up access for users who don't type fluently in English or who think better in spoken form — which includes a significant portion of our user base in emerging markets.

What we're building first

We're starting with voice output — giving the coaching AI a consistent, clear voice that users can listen to instead of read. This matters for engagement in longer sessions.

After that, we move to voice input — letting users speak to ZVÂC and VERTA directly. This is where the real personalization gain comes from. Tone, pace, confidence — these carry information that text doesn't. The AI can use those signals to calibrate its responses.

The sequence is deliberate. We want the output experience to be right before we ask users to speak into it.

What stays the same

The coaching logic doesn't change. The career frameworks in ZVÂC, the competency mapping in VERTA, the institutional integrations — all of that continues as is. Voice is an interface layer, not a replacement for the substance underneath.
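The "interface layer" idea can be sketched in code. This is a hypothetical illustration, not the actual ZVÂC/VERTA codebase: all class and method names here are invented, and the transcription and synthesis steps are stand-ins for real speech services. The point is only that text and voice are adapters over the same unchanged coaching engine.

```python
from abc import ABC, abstractmethod


class CoachingEngine:
    """Stand-in for the coaching logic, which stays the same regardless of channel."""

    def respond(self, message: str) -> str:
        return f"Coach: let's unpack '{message}'"


class Channel(ABC):
    """An interface layer: each channel adapts its input for the same engine."""

    def __init__(self, engine: CoachingEngine):
        self.engine = engine

    @abstractmethod
    def handle(self, raw_input) -> str: ...


class TextChannel(Channel):
    def handle(self, raw_input: str) -> str:
        return self.engine.respond(raw_input)


class VoiceChannel(Channel):
    def handle(self, raw_input: bytes) -> str:
        transcript = self.transcribe(raw_input)
        # The reply would then be synthesized back to audio (e.g. via a TTS
        # service such as ElevenLabs); returned as text here for simplicity.
        return self.engine.respond(transcript)

    def transcribe(self, audio: bytes) -> str:
        # Stand-in for a real speech-to-text call.
        return audio.decode("utf-8")


engine = CoachingEngine()
text_reply = TextChannel(engine).handle("career change")
voice_reply = VoiceChannel(engine).handle(b"career change")
```

Both channels produce the same coaching response for the same underlying message, which is the property the post describes: adding voice changes the interface, not the substance underneath.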

We're not adding voice because it's a feature to announce. We're adding it because the conversations our products are built around work better when they can actually happen out loud.


Waiheke AI builds AI-powered career development platforms for universities, organizations, and individuals. ZVÂC handles career discovery and guidance. VERTA handles communication and competency assessment. More at waiheke.ai. Learn about ElevenLabs at elevenlabs.io.