JavaScript
The BRTC JavaScript SDK enables you to build WebRTC-powered real-time voice communication applications in the browser or Node.js. This SDK manages WebRTC endpoints, handles audio streaming, and provides a simple API for connecting users through high-quality voice calls.
Installation
| SDK | Description |
|---|---|
| JavaScript SDK | A JavaScript SDK to manage WebRTC endpoints and connect them to other endpoints in your application. |
| Webpack Releases | Webpack bundles of the SDK, released on GitHub for browser applications that don't want to install dependencies via npm. |
| NPM Packages | The SDK is also published to npm for use in Node.js applications. |
| Sample Application | A sample application that demonstrates how to use the BRTC JavaScript SDK to create a browser-based dialer. |
Install via NPM
npm install bandwidth-rtc
Using Webpack Bundle
Download the latest bundle from the releases page and include it in your HTML:
<script src="bandwidth-rtc.min.js"></script>
Getting Started
1. Initialize the Client
First, import and create a new instance of the BandwidthRtc client. Optionally, you can set the log level for debugging:
import BandwidthRtc from "bandwidth-rtc";
// Create client with optional log level
// Available levels: "debug", "info", "warn", "error"
const bandwidthRtc = new BandwidthRtc("debug");
2. Connect to Bandwidth RTC Platform
Before you can publish or subscribe to media, you need to connect to the Bandwidth RTC platform using an endpoint token. This token should be obtained from your backend server after registering an endpoint with the Bandwidth API.
try {
await bandwidthRtc.connect({
endpointToken: "your-endpoint-token-here"
});
console.log("Connected to Bandwidth RTC!");
} catch (error) {
console.error("Failed to connect:", error);
}
Connection Options
You can pass additional options when connecting:
await bandwidthRtc.connect(
{
endpointToken: "your-endpoint-token-here"
},
{
// Optional: Override the default WebSocket URL
websocketUrl: "wss://your-custom-url.com",
// Optional: Provide custom ICE servers
iceServers: [
{
urls: "stun:stun.l.google.com:19302"
}
],
// Optional: Set ICE transport policy
iceTransportPolicy: "all" // or "relay"
}
);
3. Set Up Event Listeners
Register event handlers to respond to incoming streams and connection events:
// Called when a remote stream becomes available
bandwidthRtc.onStreamAvailable((rtcStream) => {
console.log("New stream available:", rtcStream.mediaStream.id);
console.log("Media types:", rtcStream.mediaTypes);
// Attach the stream to an HTML video/audio element
const videoElement = document.getElementById("remote-video");
videoElement.srcObject = rtcStream.mediaStream;
});
// Called when a remote stream is no longer available
bandwidthRtc.onStreamUnavailable((rtcStream) => {
console.log("Stream unavailable:", rtcStream.mediaStream.id);
// Remove the stream from your UI
const videoElement = document.getElementById("remote-video");
videoElement.srcObject = null;
});
// Called when the connection is ready
bandwidthRtc.onReady((metadata) => {
console.log("Connection ready!");
console.log("Endpoint ID:", metadata.endpointId);
console.log("Device ID:", metadata.deviceId);
console.log("Region:", metadata.region);
console.log("Territory:", metadata.territory);
});
Publishing Media
Publish Audio (Default)
Publish audio with default constraints:
try {
const rtcStream = await bandwidthRtc.publish({
audio: true,
video: false
});
console.log("Publishing audio stream:", rtcStream.mediaStream.id);
} catch (error) {
console.error("Failed to publish media:", error);
}
Publish with Custom Audio Constraints
Customize audio constraints for better quality control:
const mediaConstraints = {
audio: {
autoGainControl: true,
channelCount: 1,
deviceId: "default",
echoCancellation: true,
noiseSuppression: true,
sampleRate: 48000
},
video: false
};
const rtcStream = await bandwidthRtc.publish(mediaConstraints);
See MDN MediaStreamConstraints for all available audio options.
Publish an Existing MediaStream
You can publish an already-acquired MediaStream:
// Get audio from a specific device
const audioStream = await navigator.mediaDevices.getUserMedia({
audio: { deviceId: "specific-device-id" },
video: false
});
// Publish it
const rtcStream = await bandwidthRtc.publish(audioStream);
Publish with Audio Level Detection
Monitor audio levels while publishing:
import { AudioLevel } from "bandwidth-rtc";
const rtcStream = await bandwidthRtc.publish(
{ audio: true, video: false },
(audioLevel) => {
// audioLevel will be "silent", "low", or "high"
console.log("Audio level:", audioLevel);
if (audioLevel === AudioLevel.SILENT) {
console.log("User is muted or not speaking");
}
}
);
Publish with Stream Alias
Add an alias to your stream for easier identification in billing records and events:
const rtcStream = await bandwidthRtc.publish(
{ audio: true, video: true },
null, // audio level handler (optional)
"user-camera-stream" // alias
);
Controlling Published Media
Mute/Unmute Microphone
// Mute microphone
bandwidthRtc.setMicEnabled(false);
// Unmute microphone
bandwidthRtc.setMicEnabled(true);
// Toggle mic for a specific stream
bandwidthRtc.setMicEnabled(false, rtcStream);
// Or by stream ID
bandwidthRtc.setMicEnabled(false, "stream-id");
Unpublish Media
Stop publishing one or more streams:
// Unpublish a specific stream
await bandwidthRtc.unpublish(rtcStream);
// Unpublish by stream ID
await bandwidthRtc.unpublish("stream-id");
// Unpublish multiple streams
await bandwidthRtc.unpublish(stream1, stream2, stream3);
// Unpublish all streams
await bandwidthRtc.unpublish();
Device Management
List Available Devices
// Get all audio input devices (microphones)
const audioInputs = await bandwidthRtc.getAudioInputs();
audioInputs.forEach(device => {
console.log(device.label, device.deviceId);
});
// Get all video input devices (cameras)
const videoInputs = await bandwidthRtc.getVideoInputs();
// Get all audio output devices (speakers)
const audioOutputs = await bandwidthRtc.getAudioOutputs();
// Get all media devices
const allDevices = await bandwidthRtc.getMediaDevices();
Select Specific Devices
// Use a specific microphone
const audioInputs = await bandwidthRtc.getAudioInputs();
const selectedMic = audioInputs[0].deviceId;
const rtcStream = await bandwidthRtc.publish({
audio: { deviceId: selectedMic },
video: true
});
DTMF (Dual-Tone Multi-Frequency) Signaling
Send DTMF tones during an active call:
// Send a single digit
bandwidthRtc.sendDtmf("5");
// Send multiple digits
bandwidthRtc.sendDtmf("1234");
// Send with special characters
bandwidthRtc.sendDtmf("*123#");
// Send on a specific stream
bandwidthRtc.sendDtmf("9", "stream-id");
Valid DTMF characters: 0-9, *, #, and , (a comma typically inserts a short pause between tones).
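A small client-side check can reject invalid input before it reaches sendDtmf. This is a sketch; isValidDtmf is an illustrative helper, not part of the SDK:

```javascript
// Hypothetical helper -- not part of the SDK. Accepts only strings made
// entirely of valid DTMF characters (digits, *, #, and the pause comma).
function isValidDtmf(digits) {
  return /^[0-9*#,]+$/.test(digits);
}

// Usage sketch:
// if (isValidDtmf(input)) bandwidthRtc.sendDtmf(input);
```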
Making Outbound Connections
Connect to Another Endpoint
import { EndpointType } from "bandwidth-rtc";
// Connect to another endpoint
const result = await bandwidthRtc.requestOutboundConnection(
"endpoint-123",
EndpointType.ENDPOINT
);
if (result.accepted) {
console.log("Connection established!");
}
// Connect to a phone number
const callResult = await bandwidthRtc.requestOutboundConnection(
"+15551234567",
EndpointType.PHONE_NUMBER
);
// Connect to a call ID
const callIdResult = await bandwidthRtc.requestOutboundConnection(
"c-call-id-123",
EndpointType.CALL_ID
);
Hang Up Connection
// Hang up connection to an endpoint
const result = await bandwidthRtc.hangupConnection(
"endpoint-123",
EndpointType.ENDPOINT
);
// Hang up phone call
await bandwidthRtc.hangupConnection(
"+15551234567",
EndpointType.PHONE_NUMBER
);
Disconnecting
When you're done with the session, disconnect from the platform:
bandwidthRtc.disconnect();
This will:
- Close the WebSocket connection
- Stop all published media streams
- Clean up all peer connections
- Release all media devices
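disconnect() alone performs all of the cleanup above. If you need to act between stopping your streams and closing the connection (for example, to update your UI per unpublished stream), an explicit teardown order might look like this sketch:

```javascript
// Sketch: explicit teardown order. disconnect() already stops published
// streams and closes connections, so splitting the steps is only useful
// when you need to run your own logic between them.
async function shutdown(client) {
  await client.unpublish(); // stop all published streams first
  client.disconnect();      // then close the signaling connection
}
```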
Complete React Example
Here's a complete example of using the SDK in a React component for voice calls:
import React, { useEffect, useRef, useState } from 'react';
import BandwidthRtc, { AudioLevel } from 'bandwidth-rtc';
function VoiceCall() {
const [rtcClient, setRtcClient] = useState(null);
const [isConnected, setIsConnected] = useState(false);
const [isMuted, setIsMuted] = useState(false);
const [audioLevel, setAudioLevel] = useState(AudioLevel.SILENT);
const [endpointId, setEndpointId] = useState(null);
const remoteAudioRef = useRef(null);
useEffect(() => {
const client = new BandwidthRtc("debug");
// Set up event listeners
client.onStreamAvailable((rtcStream) => {
console.log("Remote stream available");
if (remoteAudioRef.current) {
remoteAudioRef.current.srcObject = rtcStream.mediaStream;
}
});
client.onStreamUnavailable((rtcStream) => {
console.log("Remote stream unavailable");
if (remoteAudioRef.current) {
remoteAudioRef.current.srcObject = null;
}
});
client.onReady((metadata) => {
console.log("Ready:", metadata);
setEndpointId(metadata.endpointId);
setIsConnected(true);
});
setRtcClient(client);
return () => {
client?.disconnect();
};
}, []);
const connect = async () => {
try {
// Get endpoint token from your backend
const response = await fetch('/api/get-endpoint-token');
const { endpointToken } = await response.json();
await rtcClient.connect({ endpointToken });
// Start publishing audio with level detection
const rtcStream = await rtcClient.publish(
{ audio: true, video: false },
(level) => setAudioLevel(level),
"user-microphone"
);
console.log("Publishing audio stream:", rtcStream.mediaStream.id);
} catch (error) {
console.error("Connection failed:", error);
}
};
const toggleMute = () => {
rtcClient.setMicEnabled(isMuted);
setIsMuted(!isMuted);
};
const sendDTMF = (digit) => {
rtcClient.sendDtmf(digit);
};
const disconnect = () => {
rtcClient.disconnect();
setIsConnected(false);
setEndpointId(null);
};
return (
<div className="voice-call">
<h1>Voice Call</h1>
{/* Hidden audio element for remote stream */}
<audio ref={remoteAudioRef} autoPlay />
<div className="status">
<p>Status: {isConnected ? 'Connected' : 'Disconnected'}</p>
{endpointId && <p>Endpoint ID: {endpointId}</p>}
<p>Audio Level: {audioLevel}</p>
</div>
<div className="controls">
{!isConnected ? (
<button onClick={connect}>Connect</button>
) : (
<>
<button
onClick={toggleMute}
className={isMuted ? 'active' : ''}
>
{isMuted ? '🔇 Unmute' : '🎤 Mute'}
</button>
<button onClick={disconnect}>
📞 Hang Up
</button>
</>
)}
</div>
{isConnected && (
<div className="dtmf-pad">
<h3>Dial Pad</h3>
<div className="keypad">
{['1', '2', '3', '4', '5', '6', '7', '8', '9', '*', '0', '#'].map(digit => (
<button
key={digit}
onClick={() => sendDTMF(digit)}
className="dtmf-button"
>
{digit}
</button>
))}
</div>
</div>
)}
</div>
);
}
export default VoiceCall;
TypeScript Support
The SDK is written in TypeScript and includes full type definitions. Import types as needed:
import BandwidthRtc, {
RtcStream,
RtcAuthParams,
RtcOptions,
EndpointType,
MediaType,
AudioLevel,
ReadyMetadata
} from "bandwidth-rtc";
const authParams: RtcAuthParams = {
endpointToken: "your-token"
};
const options: RtcOptions = {
iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
};
const client = new BandwidthRtc();
await client.connect(authParams, options);
Error Handling
Always wrap SDK calls in try-catch blocks:
try {
await bandwidthRtc.connect({ endpointToken });
const stream = await bandwidthRtc.publish();
} catch (error) {
if (error.name === 'BandwidthRtcError') {
console.error("BRTC SDK error:", error.message);
} else if (error.name === 'NotAllowedError') {
console.error("User denied media permissions");
} else if (error.name === 'NotFoundError') {
console.error("No media devices found");
} else {
console.error("Unknown error:", error);
}
}
Best Practices
- Always request permissions early: Request camera/microphone permissions as soon as possible to improve user experience
- Handle disconnections gracefully: Monitor connection state and implement reconnection logic
- Clean up resources: Always call disconnect() when done to free up media devices
- Use audio level detection: Implement visual feedback for speaker activity
- Test across browsers: WebRTC behavior can vary across browsers
- Secure your tokens: Never expose endpoint tokens in client-side code; always fetch them from your backend
- Monitor media quality: Use the diagnostics features to identify connection issues
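The "handle disconnections gracefully" advice above can be sketched as a reconnect loop with exponential backoff. Here, fetchToken is a hypothetical helper that retrieves a fresh endpoint token from your backend, and the delay values and retry cap are illustrative:

```javascript
// Delay grows as 1s, 2s, 4s, ... capped at maxMs. Values are illustrative.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Sketch of a reconnect loop. `client` is a BandwidthRtc instance and
// `fetchToken` is a hypothetical backend call returning a fresh token.
async function reconnect(client, fetchToken, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const endpointToken = await fetchToken();
      await client.connect({ endpointToken });
      return true; // reconnected
    } catch (err) {
      // Wait before the next attempt
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  return false; // gave up after maxAttempts
}
```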
Next Steps
- Review the Sample Application for a complete working example
- Check out the GitHub repository for the latest updates
- Read the Bandwidth BRTC API documentation for backend integration details