$2,000 free credits for voice AI startups
Build voice agents with TypeScript
Layercode makes it easy to build low-latency, production-ready voice AI agents
npx @layercode/cli init

Don't get stuck in "cool demo" hell.
You built a cool demo. You even won a hackathon. But in front of customers, it mispronounces their brand, lags, makes random noises and stops replying.
Full control
No black boxes. You control how your agent thinks, speaks, and responds, so you can actually fix it when it breaks.
Natural conversations
Sub-second response times and human-like turn-taking. No awkward pauses. No talking over users.
Observability
Inspect calls, latency and failures in production in real-time. Understand what went wrong without guessing.
Built for TypeScript
Fits directly into modern TypeScript and Next.js stacks. You receive text, you send text back. That's it.
Hot-swap leading voice model providers
Avoid single-vendor lock-in. We make it easy to test and evaluate leading voice and transcription model providers.
More coming soon…

Quickly add reliable, low-latency voice to your AI agent
Layercode’s platform gives you real-time audio infrastructure so you can turn any LLM-powered agent into a reliable, conversational voice AI agent that’s built to scale.






Plug in your own AI agent backend.
Retain complete flexibility.
Unlike low/no-code voice agent platforms, we give you complete control over your agent’s backend — without having to manage the complexity of building and maintaining every component of your voice infrastructure.

Add voice to your Next.js app.
Get started fast with our CLI and use our Node.js SDK to integrate with your own LLMs via a single webhook.
Use our SDKs to let users speak to your agent via web, mobile or phone.
Frontend
// Layercode Voice Agent Frontend React Example
import { useLayercodePipeline } from "@layercode/react-sdk";
import { AudioVisualization } from "./AudioVisualization";
import { MicrophoneButton } from "./MicrophoneButton";

export function VoiceAgent({ pipelineId }) {
  const { status, agentAudioAmplitude } = useLayercodePipeline({ pipelineId });
  return (
    <div className="flex flex-col">
      <h1>Voice Agent</h1>
      <AudioVisualization amplitude={agentAudioAmplitude} />
      <MicrophoneButton />
    </div>
  );
}
Backend
// Layercode Voice Agent Backend Next.js Example
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { streamText } from "ai";
import { streamResponse } from "@layercode/node-server-sdk";

export const POST = async (request) => {
  const google = createGoogleGenerativeAI();
  const requestBody = await request.json();
  const text = requestBody.text;
  return streamResponse(requestBody, async ({ stream }) => {
    if (requestBody.type === "message") {
      const { textStream } = streamText({
        model: google("gemini-2.0-flash-001"),
        system: "You are a helpful voice assistant.",
        messages: [{ role: "user", content: text }],
        onFinish: () => {
          stream.end();
        },
      });
      await stream.ttsTextStream(textStream);
    }
  });
};
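From the backend example above, the webhook body appears to carry at least a `type` and a `text` field. A minimal sketch of guarding on that shape before calling the LLM — the named type and helper here are our own illustration, not part of the Layercode SDK:

```typescript
// Webhook body shape implied by the backend example above.
// Only `type` and `text` appear there; this named type is a sketch.
type WebhookBody = {
  type: string; // e.g. "message" when the user finishes a turn
  text: string; // the user's transcribed speech
};

// Hypothetical helper: only respond to non-empty user messages.
export function shouldRespond(body: WebhookBody): boolean {
  return body.type === "message" && body.text.trim().length > 0;
}
```

This corresponds to the `if (requestBody.type === "message")` branch in the handler above, with an extra guard against empty transcripts.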
Use LLM and Agent libraries you know and love
The first global edge voice infrastructure with 330+ locations
Our platform runs on a global edge network, helping ensure rock-solid performance and reliability for voice agents speaking to users anywhere on Earth.
Low latency conversations
Your users connect to the nearest edge location, where we process voice data within 50ms, helping ensure conversations flow smoothly.
Instant scalability
Zero cold starts mean conversations start instantly and automatically scale with demand in real-time.
True per-call isolation
Every Layercode voice session runs in complete isolation: no shared infrastructure, no "noisy neighbors," and no performance degradation when platform traffic spikes.
Distributed by design
330+ independent locations mean no single point of failure.

Full control over your agent’s audio pipeline
Build reliable, production-ready real-time voice agents. Total transparency at every step, plus the tools you need to monitor and debug your agent's conversations.
Agent Dashboard
Analytics
Observability


Full control
Configure your agent for the browser, the phone, or both, and choose your STT and TTS models.
Easily switch models
Switch between leading providers and voices with just a few clicks.
Build with templates
Start with our quickstart templates for common voice agent use cases.
Observability & recording
Replay any session, review its logs, or download the recording.
Latency analytics
Track latency to spot issues before they impact users.
Simplified billing
Multiple providers are consolidated into a single bill.

Frequently asked questions
What is Layercode?
Who is Layercode for?
How do Layercode voice agents work?
How is Layercode different from other voice AI platforms?
Is Layercode secure?
Get started with $100 free credits
Build your first production-ready voice agent in minutes.

