Introduction
In your React JS video call application, implementing Active Speaker Indication can enhance the user experience by highlighting the participant currently speaking. This feature helps users easily identify who is talking, fostering smoother communication and engagement within the call. By dynamically updating the interface to visually indicate the active speaker, such as through border highlights or avatar animations, participants can focus their attention more effectively during group discussions.
Benefits of Implementing Active Speaker Indication in a React JS Video App:
- Enhanced Communication : Active Speaker Indication improves communication by helping participants easily identify who is speaking, reducing confusion and interruptions.
- Improved Engagement : Users stay more engaged in the conversation when they can easily follow who is speaking, leading to more active participation.
- Smooth User Experience : It provides a smoother user experience by making it effortless for participants to understand the flow of conversation.
Use Cases of Implementing Active Speaker Indication in a React JS Video App:
- Group Meetings : During group meetings, Active Speaker Indication helps participants follow the conversation flow and know who is speaking at any given moment.
- Remote Work : In remote work scenarios, where team members communicate through video calls, this feature ensures smoother collaboration and better engagement.
- Educational Webinars : In online educational webinars or virtual classrooms, it helps students identify the speaker and stay focused on the lesson.
In the guide below, you will learn how to highlight active speakers in a React JS video chat application.
Getting Started with VideoSDK
To take advantage of the active speaker functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.
Create a VideoSDK Account
Go to the VideoSDK dashboard and sign up if you don't already have an account. This account gives you access to the required VideoSDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.
Generate your Auth Token
Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.
For a more visual understanding of the account creation and token generation process, consider referring to the provided tutorial.
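If you'd rather generate the auth token from your own backend instead of copying it from the dashboard, the token is a standard JWT signed with your API key and secret. The snippet below is a minimal Node.js sketch of that approach; the jsonwebtoken package and the VIDEOSDK_API_KEY / VIDEOSDK_SECRET_KEY environment variables are assumptions for illustration.
// generateToken.js - server-side sketch for creating a VideoSDK auth token
const jwt = require("jsonwebtoken");
// Both values come from the API Keys section of the VideoSDK dashboard
const API_KEY = process.env.VIDEOSDK_API_KEY;
const SECRET_KEY = process.env.VIDEOSDK_SECRET_KEY;
const payload = {
  apikey: API_KEY,
  permissions: ["allow_join"], // "allow_mod" and "ask_join" are also supported
  version: 2,
};
// Sign the payload; the resulting string is what you use as authToken in the app
const token = jwt.sign(payload, SECRET_KEY, {
  algorithm: "HS256",
  expiresIn: "24h",
});
console.log(token);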
Prerequisites and Setup
Before proceeding, ensure that your development environment meets the following requirements:
- A VideoSDK developer account (don't have one? Sign up on the VideoSDK Dashboard).
- Basic understanding of React.
- React VideoSDK
- Make sure Node and NPM are installed on your device.
- Basic understanding of Hooks (useState, useRef, useEffect).
- React Context API (optional).
Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for Quickstart here.
Create a new React App using the below command.
$ npx create-react-app videosdk-rtc-react-app
Install VideoSDK
You need to set up VideoSDK within your project before getting into the details of integrating the Active Speaker Indication feature. Install VideoSDK using NPM or Yarn, depending on your project's package manager.
- For NPM
$ npm install "@videosdk.live/react-sdk"
//For the Participants Video
$ npm install "react-player"
- For Yarn
$ yarn add "@videosdk.live/react-sdk"
//For the Participants Video
$ yarn add "react-player"
You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.
App Architecture
The app will contain a MeetingView component, which includes a ParticipantView component that renders each participant's name, video, audio, etc. It will also have a Controls component that allows the user to perform operations like leaving the meeting and toggling media.
You will be working on the following files:
- API.js: Responsible for handling API calls, such as generating a unique meetingId using the auth token.
- App.js: Responsible for rendering MeetingView and joining the meeting.
Essential Steps to Implement Video Calling Functionality
To add video capability to your React application, you must complete the following sequence of steps.
Step 1: Get started with API.js
Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the videosdk-rtc-api-server-examples or directly from the VideoSDK Dashboard for developers.
//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "<Generated-from-dashboard>";
// API call to create a meeting
export const createMeeting = async ({ token }) => {
const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
method: "POST",
headers: {
authorization: `${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify({}),
});
//Destructuring the roomId from the response
const { roomId } = await res.json();
return roomId;
};
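For reference, here is how this helper is consumed on the client; the same call appears again in App.js in the next step.
import { authToken, createMeeting } from "./API";
// Inside any async function or event handler:
const meetingId = await createMeeting({ token: authToken });
console.log("Created meetingId:", meetingId);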
Step 2: Wireframe App.js with all the components
To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.
First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.
- MeetingProvider : This is the Context Provider. It accepts config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
- MeetingConsumer : This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider's value prop changes.
- useMeeting : This is the meeting hook API. It includes all the information related to meetings, such as join/leave, enable/disable the mic or webcam, etc.
- useParticipant : This is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.
The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.
Begin by making a few changes to the code in the App.js file.
App.js
import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
MeetingProvider,
MeetingConsumer,
useMeeting,
useParticipant,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";
function JoinScreen({ getMeetingAndToken }) {
return null;
}
function ParticipantView(props) {
return null;
}
function Controls(props) {
return null;
}
function MeetingView(props) {
return null;
}
function ChatView() {
return null;
}
function App() {
const [meetingId, setMeetingId] = useState(null);
//Getting the meeting id by calling the api we just wrote
const getMeetingAndToken = async (id) => {
const meetingId =
id == null ? await createMeeting({ token: authToken }) : id;
setMeetingId(meetingId);
};
//This will set Meeting Id to null when meeting is left or ended
const onMeetingLeave = () => {
setMeetingId(null);
};
return authToken && meetingId ? (
<MeetingProvider
config={{
meetingId,
micEnabled: true,
webcamEnabled: true,
name: "C.V. Raman",
}}
token={authToken}
>
<MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} />
</MeetingProvider>
) : (
<JoinScreen getMeetingAndToken={getMeetingAndToken} />
);
}
export default App;
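Since MeetingConsumer was introduced above but isn't used in this wireframe, here is a brief sketch of how the same MeetingView could be rendered through it; functionally it is equivalent to the direct child shown above.
return authToken && meetingId ? (
  <MeetingProvider
    config={{
      meetingId,
      micEnabled: true,
      webcamEnabled: true,
      name: "C.V. Raman",
    }}
    token={authToken}
  >
    {/* MeetingConsumer re-renders its children whenever the provider's value changes */}
    <MeetingConsumer>
      {() => (
        <MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} />
      )}
    </MeetingConsumer>
  </MeetingProvider>
) : (
  <JoinScreen getMeetingAndToken={getMeetingAndToken} />
);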
Step 3: Implement Join Screen
The join screen will serve as a medium to either create a new meeting or join an existing one.
function JoinScreen({ getMeetingAndToken }) {
const [meetingId, setMeetingId] = useState(null);
const onClick = async () => {
await getMeetingAndToken(meetingId);
};
return (
<div>
<input
type="text"
placeholder="Enter Meeting Id"
onChange={(e) => {
setMeetingId(e.target.value);
}}
/>
<button onClick={onClick}>Join</button>
{" or "}
<button onClick={() => getMeetingAndToken(null)}>Create Meeting</button>
</div>
);
}
Output of the JoinScreen component
Step 4: Implement MeetingView and Controls
The next step is to create the MeetingView and Controls components to manage features such as join, leave, mute, and unmute.
function MeetingView(props) {
const [joined, setJoined] = useState(null);
//Get the method which will be used to join the meeting.
//We will also get the participants list to display all participants
const { join, participants } = useMeeting({
//callback for when meeting is joined successfully
onMeetingJoined: () => {
setJoined("JOINED");
},
//callback for when meeting is left
onMeetingLeft: () => {
props.onMeetingLeave();
},
});
const joinMeeting = () => {
setJoined("JOINING");
join();
};
return (
<div className="container">
<h3>Meeting Id: {props.meetingId}</h3>
{joined && joined === "JOINED" ? (
<div>
<Controls />
{/* For rendering all the participants in the meeting */}
{[...participants.keys()].map((participantId) => (
<ParticipantView
participantId={participantId}
key={participantId}
/>
))}
<ChatView />
</div>
) : joined && joined === "JOINING" ? (
<p>Joining the meeting...</p>
) : (
<button onClick={joinMeeting}>Join</button>
)}
</div>
);
}
function Controls() {
const { leave, toggleMic, toggleWebcam } = useMeeting();
return (
<div>
<button onClick={() => leave()}>Leave</button>
<button onClick={() => toggleMic()}>toggleMic</button>
<button onClick={() => toggleWebcam()}>toggleWebcam</button>
</div>
);
}
Output of the Controls component
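If you want the control buttons to reflect the current device state, the useMeeting hook also exposes localMicOn and localWebcamOn flags. Below is a small, optional variant of the Controls component using them; the button labels are purely illustrative.
function Controls() {
  const { leave, toggleMic, toggleWebcam, localMicOn, localWebcamOn } =
    useMeeting();
  return (
    <div>
      <button onClick={() => leave()}>Leave</button>
      {/* Labels switch based on the current local mic/webcam state */}
      <button onClick={() => toggleMic()}>
        {localMicOn ? "Mute Mic" : "Unmute Mic"}
      </button>
      <button onClick={() => toggleWebcam()}>
        {localWebcamOn ? "Stop Webcam" : "Start Webcam"}
      </button>
    </div>
  );
}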
Step 5: Implement Participant View
Before implementing the participant view, you need to understand a couple of concepts.
5.1 Forwarding Ref for mic and camera
The useRef hook is responsible for referencing the audio and video components. It will be used to play and stop the participant's audio and video.
const webcamRef = useRef(null);
const micRef = useRef(null);
5.2 useParticipant Hook
The useParticipant hook is responsible for handling all the properties and events of one particular participant who joined the meeting. It takes participantId as an argument.
const { webcamStream, micStream, webcamOn, micOn } = useParticipant(
props.participantId
);
5.3 MediaStream API
The MediaStream API is beneficial for adding a MediaTrack to the audio/video tag, enabling the playback of audio or video.
const webcamRef = useRef(null);
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);
webcamRef.current.srcObject = mediaStream;
webcamRef.current
.play()
.catch((error) => console.error("webcamRef.current.play() failed", error));
5.4 Implement ParticipantView
Now you can use both of the hooks and the MediaStream API to create the ParticipantView component.
function ParticipantView(props) {
const micRef = useRef(null);
const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
useParticipant(props.participantId);
const videoStream = useMemo(() => {
if (webcamOn && webcamStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);
return mediaStream;
}
}, [webcamStream, webcamOn]);
useEffect(() => {
if (micRef.current) {
if (micOn && micStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(micStream.track);
micRef.current.srcObject = mediaStream;
micRef.current
.play()
.catch((error) =>
console.error("micRef.current.play() failed", error)
);
} else {
micRef.current.srcObject = null;
}
}
}, [micStream, micOn]);
return (
<div>
<p>
Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
{micOn ? "ON" : "OFF"}
</p>
<audio ref={micRef} autoPlay playsInline muted={isLocal} />
{webcamOn && (
<ReactPlayer
//
playsinline // extremely crucial prop
pip={false}
light={false}
controls={false}
muted={true}
playing={true}
//
url={videoStream}
//
height={"300px"}
width={"300px"}
onError={(err) => {
console.log(err, "participant video error");
}}
/>
)}
</div>
);
}
Integrate Active Speaker Feature
The Active Speaker Indication feature in VideoSDK allows you to identify the participant who is currently the active speaker in a meeting. This feature proves especially valuable in larger meetings or webinars, where numerous participants can make it challenging to identify the active speaker.
Whenever any participant speaks in a meeting, the onSpeakerChanged event will trigger, providing the participant ID of the active speaker.
For example, suppose a meeting is running with Alice and Bob. Whenever either of them speaks, the onSpeakerChanged event will trigger and return the speaker's participantId.
import { useMeeting } from "@videosdk.live/react-sdk";
const MeetingView = () => {
/** useMeeting hooks events */
const {
/** Methods */
} = useMeeting({
onSpeakerChanged: (activeSpeakerId) => {
console.log("Active Speaker participantId", activeSpeakerId);
},
});
};
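To actually highlight the active speaker in the UI, one straightforward approach is to keep the ID from onSpeakerChanged in component state and compare it against each participant while rendering. The sketch below builds on the MeetingView from Step 4; the activeSpeakerId state and the green border are illustrative choices, not part of the SDK.
function MeetingView(props) {
  const [joined, setJoined] = useState(null);
  // Track whoever is currently speaking
  const [activeSpeakerId, setActiveSpeakerId] = useState(null);
  const { join, participants } = useMeeting({
    onMeetingJoined: () => setJoined("JOINED"),
    onMeetingLeft: () => props.onMeetingLeave(),
    onSpeakerChanged: (speakerId) => setActiveSpeakerId(speakerId),
  });
  return (
    <div className="container">
      <h3>Meeting Id: {props.meetingId}</h3>
      {joined === "JOINED" ? (
        <div>
          <Controls />
          {[...participants.keys()].map((participantId) => (
            <div
              key={participantId}
              // Illustrative highlight: a visible border around the active speaker
              style={{
                border:
                  participantId === activeSpeakerId
                    ? "3px solid #22c55e"
                    : "3px solid transparent",
              }}
            >
              <ParticipantView participantId={participantId} />
            </div>
          ))}
        </div>
      ) : (
        <button onClick={() => join()}>Join</button>
      )}
    </div>
  );
}
The same activeSpeakerId could instead be passed down to ParticipantView as a prop if you prefer to apply the highlight inside that component.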
Conclusion
By implementing active speaker indication, you can significantly enhance the usability and clarity of your React JS video call application. Not only does it improve user experience, but it also fosters a more natural and focused communication flow. With VideoSDK, this feature is straightforward to integrate, allowing you to focus on building a captivating user experience.
Enhance your React video call app today with VideoSDK and offer an unparalleled video calling experience to your users. Remember, VideoSDK provides 10,000 free minutes, empowering your video call app with advanced features without any upfront investment. Sign up with VideoSDK today and take your video app to the next level.