Mastering WebRTC: From Camera Capture to Screen Sharing in the Browser
This article explains WebRTC fundamentals, covering camera and microphone basics, frame rate and resolution, tracks and streams, the getUserMedia API with constraints and error handling, device enumeration, MediaRecorder for client‑side recording, and screen sharing via getDisplayMedia, all illustrated with practical JavaScript code examples.
On January 26, 2021, the W3C and IETF announced WebRTC as an official standard, marking its maturity. The IMWeb team adopted WebRTC early in applications such as Tencent Classroom and Penguin Tutor.
1. Basic Concepts of Audio/Video Capture
Before using the JavaScript APIs for audio/video capture, it helps to understand a few basic concepts.
Camera
Used to capture images and video.
Frame Rate
Number of frames captured per second; higher frame rate yields smoother video but increases bandwidth.
Resolution
Measures data amount of a video image, expressed as 1080p, 720p, etc.; higher resolution improves clarity but consumes more bandwidth.
Microphone
Used to capture audio data.
Sample Rate
Number of audio samples per second; higher sample rate yields more realistic sound. 8000 Hz is sufficient for voice.
Track
In multimedia, a track is an independent stream of data (e.g., audio track, video track) that does not intersect with other tracks.
Stream
A container that holds one or more audio or video tracks; in the browser this is a MediaStream. (Arbitrary application data travels over a separate RTCDataChannel rather than as a track in the stream.)
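To make the bandwidth claims above concrete, here is a back-of-the-envelope sketch of raw (uncompressed) data rates. The helper names and byte sizes are illustrative assumptions; real WebRTC streams are compressed far below these figures.

```javascript
// Raw (uncompressed) video: width × height × bytes-per-pixel × fps
function rawVideoBytesPerSecond(width, height, fps, bytesPerPixel = 3) {
  return width * height * bytesPerPixel * fps;
}

// Raw PCM audio: sampleRate × bytes-per-sample × channels
function rawAudioBytesPerSecond(sampleRate, bytesPerSample = 2, channels = 1) {
  return sampleRate * bytesPerSample * channels;
}

console.log(rawVideoBytesPerSecond(1280, 720, 30)); // ~83 MB/s uncompressed 720p30
console.log(rawAudioBytesPerSecond(8000));          // 16 KB/s for 8 kHz voice
```

This is why codecs (VP8/H264 for video, Opus for audio) are essential: they shrink these raw rates by two to three orders of magnitude.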
2. Audio/Video Capture
getUserMedia
The navigator.mediaDevices.getUserMedia(constraints) method obtains a media stream from the user's devices. It returns a Promise that resolves with a MediaStream, which can be attached to a video or audio element via srcObject (or, in older browsers, URL.createObjectURL).
Possible errors caught by .catch include:
AbortError – hardware problem.
NotFoundError – no media type matches the constraints.
NotReadableError – OS, browser, or page prevents device access.
TypeError – constraints object is empty or all false.
OverconstrainedError – the device cannot satisfy the requested constraints.
SecurityError – media support is disabled for the document (e.g., by policy).
NotAllowedError – user denied permission.
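One way to surface these errors to users is to branch on error.name. The helper below is a sketch; the message wording is illustrative, not part of the API:

```javascript
// Map getUserMedia rejection names to user-facing messages (wording is a sketch).
function describeGetUserMediaError(error) {
  switch (error.name) {
    case 'NotAllowedError':
      return 'Permission to use the camera/microphone was denied.';
    case 'NotFoundError':
    case 'OverconstrainedError':
      return 'No device satisfies the requested constraints.';
    case 'NotReadableError':
    case 'AbortError':
      return 'The device is unavailable (hardware or OS problem).';
    case 'SecurityError':
      return 'Media capture is disabled for this page.';
    case 'TypeError':
      return 'The constraints object is empty or disables all tracks.';
    default:
      return `Unknown getUserMedia error: ${error.name}`;
  }
}

// Usage (in the browser):
// navigator.mediaDevices.getUserMedia(constraints)
//   .catch(err => alert(describeGetUserMediaError(err)));
```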
MediaStreamConstraints Parameters
Constraints specify which tracks to include and their limits. Example:
<code>const mediaStreamConstraints = {
video: true,
audio: true
};</code>
More detailed constraints:
<code>const mediaStreamConstraints = {
video: {
frameRate: { min: 20 },
width: { min: 640, ideal: 1280 },
height: { min: 360, ideal: 720 },
aspectRatio: 16/9
},
audio: {
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true
}
};</code>
Using Captured Media Streams
Example of playing a video stream locally:
<code>const constraints = { video: true };
const localVideo = document.querySelector('video');
function gotLocalMediaStream(stream) {
localVideo.srcObject = stream;
localVideo.play();
}
function handleLocalMediaStreamError(error) {
console.log('getUserMedia failed:', error);
}
navigator.mediaDevices.getUserMedia(constraints)
.then(gotLocalMediaStream)
.catch(handleLocalMediaStreamError);
</code>
3. Audio/Video Devices
The MediaDevices interface provides methods to enumerate connected media devices and to request access to them. Each MediaDeviceInfo entry contains:
deviceId – unique identifier.
label – human-readable device name (empty until the page, served over HTTPS, has been granted device permission).
kind – type of device: audioinput, audiooutput, or videoinput.
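Building on these fields, here is a hypothetical helper that turns an enumerateDevices() result into a getUserMedia constraints object targeting a specific camera by deviceId. The function name and the fallback behavior are assumptions for illustration:

```javascript
// Given an enumerateDevices() result, build constraints that select a camera
// whose label contains the given substring; fall back to any camera otherwise.
function constraintsForCamera(deviceList, labelSubstring) {
  const cam = deviceList.find(
    d => d.kind === 'videoinput' && d.label.includes(labelSubstring)
  );
  if (!cam) return { video: true }; // fallback: any available camera
  return { video: { deviceId: { exact: cam.deviceId } } };
}

// Usage (in the browser):
// navigator.mediaDevices.enumerateDevices()
//   .then(list => navigator.mediaDevices.getUserMedia(
//     constraintsForCamera(list, 'FaceTime')));
```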
Example to list devices:
<code>navigator.mediaDevices.enumerateDevices().then(deviceList => {
console.log(deviceList);
});
</code>
4. Audio/Video Recording and Screen Sharing
Audio/Video Recording
Recording can be performed on the server or client; client‑side recording is simpler but CPU‑intensive.
The browser provides MediaRecorder to record a MediaStream. Example:
<code>const options = { mimeType: 'video/webm;codecs=vp8' };
if (!MediaRecorder.isTypeSupported(options.mimeType)) {
console.error(`${options.mimeType} is not supported!`);
return;
}
const mediaRecorder = new MediaRecorder(stream, options);
let chunks = [];
mediaRecorder.ondataavailable = e => {
if (e.data && e.data.size > 0) chunks.push(e.data);
};
mediaRecorder.start(10); // collect data every 10 ms
</code>
After stopping, combine the chunks into a Blob and play it:
<code>const blob = new Blob(chunks, { type: 'video/webm' });
const video = document.getElementById('playback');
video.src = URL.createObjectURL(blob);
video.controls = true;
video.play();
</code>
Screen Sharing
Screen sharing is treated as a special media source. Use navigator.mediaDevices.getDisplayMedia({ video: true }) to capture the screen. Example:
<code>let localStream;

function getDeskStream(stream) {
  localStream = stream;
}

function handleError(error) {
  console.error('getDisplayMedia failed:', error);
}

function shareDesktop() {
  // utils.isPC is an app-specific helper: this app only shares on desktop
  if (utils.isPC) {
    navigator.mediaDevices.getDisplayMedia({ video: true })
      .then(getDeskStream)
      .catch(handleError);
    return true;
  }
  return false;
}
</code>
Note: capturing audio alongside the screen is limited; whether getDisplayMedia honors an audio constraint varies by browser.
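Putting getDisplayMedia and MediaRecorder together, here is a browser-only sketch that records the screen for a fixed duration; the 5-second default and the webm mime type are assumptions, not requirements of the API:

```javascript
// Record a screen capture for durationMs and resolve with the resulting Blob.
async function recordScreen(durationMs = 5000) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = e => { if (e.data.size > 0) chunks.push(e.data); };
  const stopped = new Promise(resolve => { recorder.onstop = resolve; });
  recorder.start();
  setTimeout(() => {
    recorder.stop();
    stream.getTracks().forEach(t => t.stop()); // release the capture indicator
  }, durationMs);
  await stopped;
  return new Blob(chunks, { type: 'video/webm' });
}
```

The returned Blob can be played back or uploaded exactly like the camera recording shown earlier.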
Screen sharing workflow: capture → encode (H264/VP8) → transmit → decode → render. Unlike RDP/VNC, WebRTC uses video codecs for compression.
Overall, the article covers the fundamentals of browser-based audio/video capture, device enumeration, recording with MediaRecorder, and screen sharing via getDisplayMedia, providing code snippets for each step.
Tencent IMWeb Frontend Team