Yet another thing I needed to figure out recently: hooking up my AssemblyAI transcription engine to a frontend running in a loud environment.
Here is what I tried:
- Request microphone access with echo cancellation.
- Set up an audio processing chain using the Web Audio API.
- Integrate this setup with speech recognition.
- Utilize the DynamicsCompressorNode for additional audio processing.
Step 1: Request Microphone Access with Echo Cancellation
The first step is to request access to the microphone with echo cancellation enabled. This feature is built into most modern browsers and helps reduce feedback from your speakers bleeding back into the microphone.
async function getMicrophoneStream() {
  const constraints = {
    audio: {
      echoCancellation: true,
      noiseSuppression: true,
      autoGainControl: true
    }
  };
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    return stream;
  } catch (err) {
    console.error('Error accessing the microphone', err);
    return null;
  }
}
Explanation
- Constraints: We specify audio constraints to enable echo cancellation, noise suppression, and auto-gain control (these are requests, not guarantees; see the check below).
- Error Handling: If the user denies access or if there is any other issue, we catch and log the error.
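Since browsers will silently skip any constraint they cannot honour, it is worth checking what was actually applied. A small sketch, reusing the getMicrophoneStream() function above:

async function logAppliedSettings() {
  const stream = await getMicrophoneStream();
  if (!stream) return;
  // getSettings() reports what the browser actually applied,
  // which can differ from the constraints we requested.
  const settings = stream.getAudioTracks()[0].getSettings();
  console.log('echoCancellation:', settings.echoCancellation);
  console.log('noiseSuppression:', settings.noiseSuppression);
  console.log('autoGainControl:', settings.autoGainControl);
}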
Step 2: Set Up Web Audio API Nodes
Next, we set up the Web Audio API to process the audio stream. This involves creating an AudioContext and connecting various nodes, including a DynamicsCompressorNode.
async function setupAudioProcessing(stream) {
  const audioContext = new AudioContext();
  const source = audioContext.createMediaStreamSource(stream);

  // Create a DynamicsCompressorNode for additional processing
  const compressor = audioContext.createDynamicsCompressor();
  compressor.threshold.setValueAtTime(-50, audioContext.currentTime); // Example settings
  compressor.knee.setValueAtTime(40, audioContext.currentTime);
  compressor.ratio.setValueAtTime(12, audioContext.currentTime);
  compressor.attack.setValueAtTime(0, audioContext.currentTime);
  compressor.release.setValueAtTime(0.25, audioContext.currentTime);

  // Connect nodes. Routing to audioContext.destination plays the
  // processed mic audio back through the speakers; drop that last
  // connection if you only want to process the signal, since live
  // monitoring can reintroduce the feedback we are trying to avoid.
  source.connect(compressor);
  compressor.connect(audioContext.destination);

  return { audioContext, source, compressor };
}
Explanation
- AudioContext: Represents the audio environment.
- MediaStreamSource: Connects the microphone stream to the audio context.
- DynamicsCompressorNode: Reduces the dynamic range of the audio signal, helping to manage background noise and feedback.
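If you want the processed signal as a stream of its own, for example to feed it to a transcription service like AssemblyAI rather than the raw microphone input, a MediaStreamAudioDestinationNode can capture the end of the chain. A minimal sketch, reusing the audioContext and compressor returned by setupAudioProcessing(); the actual upload mechanism (WebSocket, MediaRecorder, etc.) is left out:

function captureProcessedStream(audioContext, compressor) {
  // MediaStreamAudioDestinationNode exposes everything connected
  // to it as an ordinary MediaStream.
  const streamDestination = audioContext.createMediaStreamDestination();
  compressor.connect(streamDestination);
  // streamDestination.stream now carries the compressed audio and can be
  // recorded, sent over WebRTC, or chunked for a transcription service.
  return streamDestination.stream;
}

This keeps the compressor in the signal path without ever routing the microphone to the speakers.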
Step 3: Integrate with Speech Recognition
Finally, we integrate our audio processing setup with the Web Speech API to perform speech recognition.
async function startSpeechRecognition() {
  const stream = await getMicrophoneStream();
  if (!stream) return;

  const { audioContext, source, compressor } = await setupAudioProcessing(stream);

  // Note: SpeechRecognition listens to the default microphone input
  // directly; the Web Audio chain above runs alongside it rather than
  // feeding into it, so the echo-cancelled getUserMedia constraints do
  // the heavy lifting for recognition quality.
  const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
  recognition.continuous = true;
  recognition.interimResults = true;

  recognition.onresult = (event) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const transcript = event.results[i][0].transcript;
      console.log('Transcript:', transcript);
    }
  };

  recognition.onerror = (event) => {
    console.error('Speech recognition error', event.error);
  };

  recognition.start();

  // Browsers often start an AudioContext in the 'suspended' state until
  // a user gesture; resume it here if needed.
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }

  return recognition;
}

// Start the speech recognition process
startSpeechRecognition();
Explanation
- Speech Recognition Setup: We set up the Web Speech API for continuous and interim speech recognition.
- Event Handling: We handle the onresult and onerror events to process recognition results and errors.
- Start Recognition: We start the speech recognition process and ensure the audio context is not suspended.
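For completeness, here is one way to tear everything down when you are finished. This is a sketch with a hypothetical stopTranscription() helper that is handed the objects created in the earlier steps:

function stopTranscription(recognition, stream, audioContext) {
  // Stop the recognizer before releasing the mic so it does not fire
  // an error when the audio input disappears.
  recognition.stop();
  // Stop all tracks so the browser's recording indicator goes away.
  stream.getTracks().forEach((track) => track.stop());
  // Close the AudioContext to release its audio hardware resources.
  audioContext.close();
}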
Hopefully you found this useful.
Happy coding!
Tim.