Chats: text conversations with the same agent
Run text-based conversations with the same agent that answers your phone — embedded widget, REST API, or your own UI. Same prompt, same knowledge base, different channel.
Updated May 6, 2026
A chat is a text conversation with one of your voice agents. The agent definition is shared — same prompt, same voice (irrelevant for chat), same knowledge base, same functions. The channel layer changes; the brain doesn't.
Two ways to use chat
| Path | When |
|---|---|
| Embedded widget | Drop a `<script>` tag on your site, get a floating chat button |
| REST API | Build your own UI, mobile app, or backend integration |
The widget is the fast path. The API is the right path when you have a custom client.
REST API
Start a chat
```bash
curl -X POST https://api.call2me.app/v1/chats \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "agent_abc123",
    "metadata": { "user_id": "u_4521", "source": "support-page" }
  }'
```
Returns a `chat_id`; keep it for subsequent messages.
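The call above can be wrapped in a few lines of client code. Here's a minimal Python sketch using only the stdlib, assuming the endpoint and request shape shown in the docs and a response body carrying `chat_id`; the `transport` parameter is just an injection seam so the function can be exercised without the network:

```python
import json
import urllib.request

API_BASE = "https://api.call2me.app/v1"

def start_chat(agent_id, api_key, metadata=None, transport=None):
    """POST /v1/chats; returns the parsed JSON body (assumed to carry chat_id)."""
    req = urllib.request.Request(
        f"{API_BASE}/chats",
        data=json.dumps({"agent_id": agent_id,
                         "metadata": metadata or {}}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # transport is overridable for tests; the default hits the real API
    send = transport or (lambda r: json.loads(urllib.request.urlopen(r).read()))
    return send(req)
```

Usage: `start_chat("agent_abc123", "YOUR_API_KEY", {"user_id": "u_4521"})["chat_id"]`.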
Send a message
```bash
curl -X POST https://api.call2me.app/v1/chats/chat_xyz789/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "role": "user",
    "content": "Where is my order 4521?"
  }'
```
Returns the agent's reply:
```json
{
  "message_id": "msg_def456",
  "role": "assistant",
  "content": "Your order shipped yesterday and arrives Thursday.",
  "function_calls": []
}
```
If the agent invoked a function (see Functions), `function_calls` is populated with what was called and what came back.
Streaming responses
For typing-indicator UX, use the streaming variant:
```bash
curl -N -X POST https://api.call2me.app/v1/chats/chat_xyz789/messages/stream \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "role": "user", "content": "..." }'
```
Returns Server-Sent Events:
```text
event: delta
data: {"text": "Your order"}

event: delta
data: {"text": " shipped yesterday"}

event: done
data: {"message_id": "msg_def456"}
```
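On the client side, the `delta` events need to be folded back into a full reply. A minimal sketch of that accumulation over raw SSE lines (a real client would read these lines off the HTTP response as they arrive):

```python
import json

def accumulate_sse(lines):
    """Fold `delta` events into reply text; return (text, message_id)."""
    text_parts, message_id, event = [], None, None
    for line in lines:
        line = line.strip()
        if line.startswith("event:"):
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            data = json.loads(line.split(":", 1)[1].strip())
            if event == "delta":
                text_parts.append(data["text"])   # append partial text
            elif event == "done":
                message_id = data["message_id"]   # final message id
    return "".join(text_parts), message_id
```

Feeding it the example stream above yields `"Your order shipped yesterday"` and `"msg_def456"`.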
Fetching history
```bash
curl https://api.call2me.app/v1/chats/chat_xyz789/messages \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Pricing
Chat is billed per message — roughly $0.01 per message at the base rate. Both user-sent and agent-sent messages count. Function calls don't add a separate charge; they're part of the message that triggered them.
A typical 10-message support exchange costs about $0.10. The same conversation as a voice call would cost roughly $0.50 (5 minutes × $0.10/min voice base, plus telephony if applicable).
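The arithmetic above is easy to sanity-check in code. A tiny sketch using the base rates quoted in this section ($0.01/message for chat, $0.10/min for voice; telephony, if applicable, would be extra):

```python
def chat_cost(messages, per_message=0.01):
    """Chat billing: every message counts, user-sent and agent-sent alike."""
    return messages * per_message

def voice_cost(minutes, per_minute=0.10):
    """Voice billing: per-minute base rate (telephony not included)."""
    return minutes * per_minute

# The examples from this section: a 10-message chat vs. a 5-minute call.
print(round(chat_cost(10), 2))   # about $0.10
print(round(voice_cost(5), 2))   # about $0.50
```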
Knowledge base & functions
Both work the same way as voice:
- Knowledge base: agent retrieves relevant chunks before responding
- Functions: LLM decides to invoke; platform calls your endpoint; result lands in the next message
The agent doesn't know whether the user is on a phone or in a browser — the channel is invisible to the prompt.
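When the platform calls your endpoint for a function, your side just maps a function name and arguments to a result. The payload shape below (`{"name": ..., "arguments": {...}}`) and the `order_status` function are assumptions for illustration; check the Functions docs for the real webhook schema:

```python
def handle_function_call(payload):
    """Dispatch a platform function call to local logic.

    The payload shape is hypothetical, not taken from the docs above.
    """
    name = payload.get("name")
    args = payload.get("arguments", {})
    if name == "order_status":  # hypothetical function registered on the agent
        # A real endpoint would look this up in your order system.
        return {"result": {"order_id": args.get("order_id"),
                           "status": "shipped"}}
    return {"error": f"unknown function: {name}"}
```

Whatever this returns lands in `function_calls` on the agent's next message, and the agent folds it into its reply, on chat and voice alike.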
Widget vs. API: when to pick which
Pick the widget when:
- You want chat on your marketing site, fast
- You're fine with the platform's UI (customizable but bounded)
- The user is anonymous (or you'll capture identity inside the chat)
Pick the API when:
- You're building a logged-in product UI with your own design system
- You need to inject context (`metadata.user_id`) per session
- Mobile app, desktop app, or backend-to-backend chat
Frequently asked
Q. What's the difference between a chat and a call?
A call is voice (audio in/out, telephony, STT/TTS). A chat is text (messages in/out). Both run through the same agent — same prompt, same knowledge base, same functions. Pick the channel based on where your user is.
Q. Can the same agent handle both?
Yes — that's the whole point. One agent definition, two channels. The agent doesn't know which channel it's on; it just generates a response. The channel layer handles audio for calls, text for chats.
Q. Is chat cheaper than voice?
Yes — much. Chat is billed per message (around $0.01 each); voice is billed per minute. A typical 5-minute support call costs $0.50 in voice base; the same conversation as 10 chat messages costs $0.10.
Q. How do I keep chat history across page loads?
Pass the same `chat_id` when continuing a conversation. The platform stores history per `chat_id`; the agent gets the full context every turn.