
Swift

The BRTC Swift SDK enables you to build real-time audio communication applications on iOS. This SDK manages WebRTC endpoints, handles audio streaming, and provides a simple API for connecting users through high-quality voice calls.

Installation

  • Swift SDK: An iOS SDK to manage WebRTC endpoints and connect them to other endpoints in your application.
  • XCFramework Releases: Pre-built signed XCFramework bundles are available on GitHub Releases.
  • Sample Application: A sample iOS application that demonstrates outbound PSTN dialing and live call quality monitoring using BRTC.

Requirements

  • iOS 17+
  • Swift 5.9+
  • Xcode 16+

Install via Swift Package Manager

Add the SDK as a dependency in Xcode:

  1. Open your project in Xcode
  2. Go to File > Add Package Dependencies
  3. Enter the repository URL: https://github.com/Bandwidth/swift-brtc-sdk
  4. Select a version rule (e.g., "Up to Next Major Version" from 1.0.0)

Or add it directly to your Package.swift:

dependencies: [
    .package(url: "https://github.com/Bandwidth/swift-brtc-sdk", from: "1.0.0"),
]

Then add BandwidthRTC to your target's dependencies:

.target(
    name: "YourApp",
    dependencies: [
        .product(name: "BandwidthRTC", package: "swift-brtc-sdk"),
    ]
)

Getting Started

1. Initialize the Client

Create a new instance of the BandwidthRTCClient. Most SDK operations are async and must be called from an async context.

import BandwidthRTC

// Create client with optional log level (default: .warn)
// Available levels: .off, .error, .warn, .info, .debug, .trace
let bandwidthRtc = BandwidthRTCClient(logLevel: .debug)

2. Connect to Bandwidth RTC Platform

Connect to the Bandwidth RTC platform using an endpoint token obtained from your backend server.

do {
    try await bandwidthRtc.connect(
        authParams: RtcAuthParams(endpointToken: "your-endpoint-token-here")
    )
    print("Connected to Bandwidth RTC!")
} catch {
    print("Failed to connect: \(error.localizedDescription)")
}
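Endpoint tokens should always come from your own backend rather than being embedded in the app. As a sketch of the token fetch (the /rtc-token route and the TokenResponse shape are assumptions about your server, not part of the SDK):

```swift
import Foundation

// Hypothetical response shape from your backend -- adjust to match your API.
struct TokenResponse: Decodable {
    let endpointToken: String
}

func fetchEndpointToken() async throws -> String {
    // Assumed route on your server; replace with your real token endpoint.
    let url = URL(string: "https://api.example.com/rtc-token")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(TokenResponse.self, from: data).endpointToken
}
```

Your backend would mint the endpoint token using its Bandwidth credentials and return it to the app, so credentials never ship in the binary.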

Connection Options

You can pass additional options when connecting:

import WebRTC

let options = RtcOptions(
    // Optional: Override the default WebSocket URL
    websocketUrl: "wss://your-custom-url.com",

    // Optional: Provide custom ICE servers
    iceServers: [
        RTCIceServer(urlStrings: ["stun:stun.l.google.com:19302"])
    ],

    // Optional: Set ICE transport policy
    iceTransportPolicy: .all,

    // Optional: Configure audio processing
    audioProcessing: AudioProcessingOptions(
        audioSessionMode: .voiceChat,
        audioSessionCategoryOptions: [.allowBluetoothHFP],
        inputSampleRate: 48000,
        outputSampleRate: 48000
    )
)

try await bandwidthRtc.connect(
    authParams: RtcAuthParams(endpointToken: "your-token"),
    options: options
)

3. Set Up Event Listeners

Register callbacks to respond to incoming streams and connection events:

// Called when a remote stream becomes available
// Called when a remote stream becomes available
bandwidthRtc.onStreamAvailable = { rtcStream in
    print("New stream available: \(rtcStream.streamId)")
    print("Media types: \(rtcStream.mediaTypes)")
    // Use rtcStream.mediaStream for audio playback
}

// Called when a remote stream is no longer available
bandwidthRtc.onStreamUnavailable = { streamId in
    print("Stream unavailable: \(streamId)")
}

// Called when the connection is ready
bandwidthRtc.onReady = { metadata in
    print("Connection ready!")
    print("Endpoint ID: \(metadata.endpointId ?? "unknown")")
    print("Device ID: \(metadata.deviceId ?? "unknown")")
    print("Region: \(metadata.region ?? "unknown")")
    print("Territory: \(metadata.territory ?? "unknown")")
}

// Called when the WebSocket disconnects unexpectedly
bandwidthRtc.onRemoteDisconnected = {
    print("Disconnected from BRTC")
}

Publishing Media

Publish Audio (Default)

Publish audio with default settings:

do {
    let rtcStream = try await bandwidthRtc.publish(audio: true)
    print("Publishing audio stream: \(rtcStream.streamId)")
} catch {
    print("Failed to publish media: \(error.localizedDescription)")
}
Note: The SDK will request microphone permission via the system prompt when publish() is called. Ensure your Info.plist contains the NSMicrophoneUsageDescription key.
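For reference, a minimal Info.plist entry (the description string shown here is just an example; write one that explains your app's actual use of the microphone):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for voice calls.</string>
```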

Publish with Stream Alias

Add an alias to your stream for easier identification in billing records and events:

let rtcStream = try await bandwidthRtc.publish(
    audio: true,
    alias: "user-microphone"
)

Controlling Published Media

Mute/Unmute Microphone

// Mute microphone
bandwidthRtc.setMicEnabled(false)

// Unmute microphone
bandwidthRtc.setMicEnabled(true)

Unpublish Media

Stop publishing a stream:

try await bandwidthRtc.unpublish(stream: rtcStream)

Audio Level Detection

Monitor local and remote audio levels:

// Local microphone audio level
// Local microphone audio level
bandwidthRtc.onLocalAudioLevel = { samples in
    // samples is [Float32] of PCM audio data
    let rms = sqrt(samples.map { $0 * $0 }.reduce(0, +) / Float32(samples.count))
    print("Local audio level: \(rms)")
}

// Remote audio level
bandwidthRtc.onRemoteAudioLevel = { samples in
    let rms = sqrt(samples.map { $0 * $0 }.reduce(0, +) / Float32(samples.count))
    print("Remote audio level: \(rms)")
}
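For a visual level meter, a common next step is converting the RMS value to decibels relative to full scale and normalizing it for a progress bar. A small helper, independent of the SDK (the -60 dB floor is an arbitrary choice):

```swift
import Foundation

// Convert an RMS value in [0, 1] to dBFS, clamped to a floor so that
// silence maps to a finite minimum instead of negative infinity.
func rmsToDecibels(_ rms: Float32, floor: Float32 = -60) -> Float32 {
    guard rms > 0 else { return floor }
    return max(20 * log10(rms), floor)
}

// Normalize to [0, 1] for driving a meter or progress view.
func meterLevel(_ rms: Float32, floor: Float32 = -60) -> Float32 {
    (rmsToDecibels(rms, floor: floor) - floor) / -floor
}
```

Feed the RMS from onLocalAudioLevel or onRemoteAudioLevel into meterLevel and bind the result to your UI.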

DTMF (Dual-Tone Multi-Frequency) Signaling

Send DTMF tones during an active call:

// Send a single digit
bandwidthRtc.sendDtmf("5")

// Send multiple digits
bandwidthRtc.sendDtmf("1234")

// Send with special characters
bandwidthRtc.sendDtmf("*123#")

Valid DTMF characters: the digits 0-9, *, #, and , (comma).
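A small pre-flight check against the character set above can catch bad input before sending. This helper is not part of the SDK, just an illustration:

```swift
// Hypothetical validation helper -- not part of the SDK.
func isValidDtmf(_ digits: String) -> Bool {
    let allowed = Set("0123456789*#,")
    return !digits.isEmpty && digits.allSatisfy { allowed.contains($0) }
}

// isValidDtmf("*123#") == true
// isValidDtmf("12A4")  == false
```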

Call Statistics

Get real-time call quality statistics:

bandwidthRtc.getCallStats(previousSnapshot: nil) { stats in
    print("Packets received: \(stats.packetsReceived)")
    print("Packets lost: \(stats.packetsLost)")
    print("Jitter: \(stats.jitter)s")
    print("Round trip time: \(stats.roundTripTime)s")
    print("Codec: \(stats.codec)")
    print("Inbound bitrate: \(stats.inboundBitrate) bps")
    print("Outbound bitrate: \(stats.outboundBitrate) bps")
}

Pass the previous snapshot to compute delta-based metrics like bitrate:

var lastSnapshot: CallStatsSnapshot?

func refreshStats() {
    bandwidthRtc.getCallStats(previousSnapshot: lastSnapshot) { stats in
        lastSnapshot = stats
        updateStatsUI(stats)
    }
}
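In practice you would call refreshStats on a schedule. A sketch using a repeating Timer (the one-second interval is an arbitrary choice, not an SDK requirement):

```swift
import Foundation

var statsTimer: Timer?

func startStatsPolling() {
    // Poll once per second; stop the timer when the call ends.
    statsTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        refreshStats()
    }
}

func stopStatsPolling() {
    statsTimer?.invalidate()
    statsTimer = nil
}
```

Call stopStatsPolling from your hang-up or disconnect path so the timer does not outlive the call.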

Making Outbound Connections

Connect to a Phone Number

let callResult = try await bandwidthRtc.requestOutboundConnection(
    id: "+15551234567",
    type: .phoneNumber
)
if callResult.accepted {
    print("Connection established!")
}

Hang Up Connection

try await bandwidthRtc.hangupConnection(endpoint: "+15551234567", type: .phoneNumber)

Disconnecting

When you're done with the session, disconnect from the platform:

await bandwidthRtc.disconnect()

This will:

  • Close the WebSocket connection
  • Stop all published media streams
  • Clean up all peer connections
  • Release audio session resources

Complete SwiftUI Example

Here's a complete example of using the SDK in a SwiftUI application for voice calls:

import SwiftUI
import BandwidthRTC

struct VoiceCallView: View {
    @StateObject private var viewModel = CallViewModel()

    var body: some View {
        VStack(spacing: 20) {
            Text("Voice Call")
                .font(.largeTitle)

            Text(viewModel.isConnected ? "Connected" : "Disconnected")
                .foregroundColor(viewModel.isConnected ? .green : .red)

            if let endpointId = viewModel.endpointId {
                Text("Endpoint: \(endpointId)")
                    .font(.caption)
            }

            if !viewModel.isConnected {
                Button("Connect") {
                    Task { await viewModel.connect() }
                }
                .buttonStyle(.borderedProminent)
            } else {
                HStack(spacing: 12) {
                    Button(viewModel.isMuted ? "Unmute" : "Mute") {
                        viewModel.toggleMute()
                    }
                    .buttonStyle(.bordered)

                    Button("Hang Up") {
                        Task { await viewModel.disconnect() }
                    }
                    .buttonStyle(.borderedProminent)
                    .tint(.red)
                }

                // DTMF Dial Pad
                Text("Dial Pad").font(.headline)
                LazyVGrid(columns: Array(repeating: GridItem(.flexible()), count: 3), spacing: 8) {
                    ForEach(["1", "2", "3", "4", "5", "6", "7", "8", "9", "*", "0", "#"], id: \.self) { digit in
                        Button(digit) {
                            viewModel.sendDTMF(digit)
                        }
                        .buttonStyle(.bordered)
                        .frame(minWidth: 60, minHeight: 44)
                    }
                }
            }
        }
        .padding()
    }
}

@MainActor
class CallViewModel: ObservableObject {
    @Published var isConnected = false
    @Published var isMuted = false
    @Published var endpointId: String?

    private let bandwidthRtc = BandwidthRTCClient(logLevel: .debug)

    init() {
        bandwidthRtc.onStreamAvailable = { rtcStream in
            print("Remote stream available: \(rtcStream.streamId)")
        }

        bandwidthRtc.onStreamUnavailable = { streamId in
            print("Stream unavailable: \(streamId)")
        }

        bandwidthRtc.onReady = { [weak self] metadata in
            Task { @MainActor in
                self?.endpointId = metadata.endpointId
                self?.isConnected = true
            }
        }

        bandwidthRtc.onRemoteDisconnected = { [weak self] in
            Task { @MainActor in
                self?.isConnected = false
                self?.endpointId = nil
            }
        }
    }

    func connect() async {
        do {
            // Get endpoint token from your backend
            let endpointToken = try await fetchEndpointToken()

            try await bandwidthRtc.connect(
                authParams: RtcAuthParams(endpointToken: endpointToken)
            )

            _ = try await bandwidthRtc.publish(audio: true, alias: "user-microphone")
        } catch {
            print("Connection failed: \(error.localizedDescription)")
        }
    }

    func disconnect() async {
        await bandwidthRtc.disconnect()
        isConnected = false
        endpointId = nil
    }

    func toggleMute() {
        isMuted.toggle()
        bandwidthRtc.setMicEnabled(!isMuted)
    }

    func sendDTMF(_ digit: String) {
        bandwidthRtc.sendDtmf(digit)
    }

    private func fetchEndpointToken() async throws -> String {
        // Implement your backend token fetch here
        fatalError("Fetch endpoint token from your backend server")
    }
}

Error Handling

The SDK defines its errors as an enum conforming to LocalizedError. Always wrap SDK calls in do-catch blocks:

do {
    try await bandwidthRtc.connect(authParams: RtcAuthParams(endpointToken: token))
    _ = try await bandwidthRtc.publish(audio: true)
} catch BandwidthRTCError.invalidToken {
    print("Token is invalid or expired")
} catch BandwidthRTCError.alreadyConnected {
    print("Already connected -- disconnect first")
} catch BandwidthRTCError.notConnected {
    print("Must connect before publishing")
} catch BandwidthRTCError.connectionFailed(let detail) {
    print("Connection failed: \(detail)")
} catch BandwidthRTCError.mediaAccessDenied {
    print("Microphone permission not granted")
} catch BandwidthRTCError.publishFailed(let detail) {
    print("Failed to publish: \(detail)")
} catch let error as BandwidthRTCError {
    print("BRTC error: \(error.localizedDescription)")
} catch {
    print("Unknown error: \(error)")
}

Best Practices

  1. Add NSMicrophoneUsageDescription to Info.plist: iOS requires a usage description string for microphone access. Without it, your app will crash when requesting permission.
  2. Handle disconnections gracefully: Monitor the onRemoteDisconnected callback and implement reconnection logic.
  3. Clean up resources: Always call disconnect() when done to free up the audio session and network connections.
  4. Use audio level detection: Implement visual feedback for speaker activity using onLocalAudioLevel and onRemoteAudioLevel.
  5. Secure your tokens: Never embed endpoint tokens in your app; always fetch them from your backend server.
  6. Use structured concurrency: All SDK methods are async. Use Swift's structured concurrency (Task, async/await) for clean control flow.
  7. Monitor call quality: Use getCallStats() to track jitter, packet loss, and round-trip time for diagnostics.
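As an example of the reconnection logic in practice 2, a simple retry loop with exponential backoff might look like this (the attempt count, backoff values, and fetchEndpointToken helper are illustrative choices, not SDK requirements):

```swift
import Foundation

// Illustrative reconnection sketch -- adapt the retry policy to your app.
func reconnect(maxAttempts: Int = 5) async {
    for attempt in 1...maxAttempts {
        do {
            // Assumed helper that fetches a fresh token from your backend.
            let token = try await fetchEndpointToken()
            try await bandwidthRtc.connect(
                authParams: RtcAuthParams(endpointToken: token)
            )
            return  // reconnected successfully
        } catch {
            // Exponential backoff: 1s, 2s, 4s, ...
            let delay = UInt64(pow(2.0, Double(attempt - 1)) * 1_000_000_000)
            try? await Task.sleep(nanoseconds: delay)
        }
    }
    print("Reconnection failed after \(maxAttempts) attempts")
}
```

Trigger it from the onRemoteDisconnected callback, and fetch a fresh token on each attempt in case the previous one has expired.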

Next Steps