
Cobra VAD — C Quick Start

Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64, arm64)
  • Raspberry Pi (Zero, 3, 4, 5)

Requirements

  • C99-compatible compiler
  • CMake (3.13+)
  • Windows only: MinGW is required to build the demo

Picovoice Account & AccessKey

Sign up for or log in to Picovoice Console to get your AccessKey. Keep your AccessKey secret.
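Hard-coding the AccessKey is convenient for a quick test but risks leaking it if the source is shared. One option is to read it from an environment variable at runtime; a minimal sketch follows. The variable name PICOVOICE_ACCESS_KEY is an arbitrary choice for this example, not something the SDK reads itself:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Fetch the AccessKey from the environment instead of embedding it in source.
// PICOVOICE_ACCESS_KEY is an arbitrary variable name chosen for this sketch.
const char *read_access_key(void) {
    const char *key = getenv("PICOVOICE_ACCESS_KEY");
    if (key == NULL) {
        fprintf(stderr, "PICOVOICE_ACCESS_KEY is not set\n");
    }
    return key;
}
```

The returned pointer can then be passed as the first argument to pv_cobra_init in place of a string literal.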

Quick Start

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/cobra.git

Usage

  1. Include the public header files (picovoice.h and pv_cobra.h).
  2. Link the project to an appropriate precompiled library for the target platform and load it.
  3. Construct the Cobra Voice Activity Detection object:
static const char *ACCESS_KEY = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console

pv_cobra_t *cobra = NULL;
const pv_status_t status = pv_cobra_init(ACCESS_KEY, &cobra);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  4. Pass frames of audio to the pv_cobra_process function. Each frame must contain pv_cobra_frame_length() samples of 16-bit, single-channel PCM sampled at pv_cobra_sample_rate():
extern const int16_t *get_next_audio_frame(void);

while (true) {
    const int16_t *pcm = get_next_audio_frame();
    float is_voiced = 0.f;
    const pv_status_t status = pv_cobra_process(cobra, pcm, &is_voiced);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
}
  5. Release resources explicitly when done with Cobra Voice Activity Detection:
pv_cobra_delete(cobra);
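pv_cobra_process writes a per-frame voice probability in [0, 1] into is_voiced; turning those probabilities into stable speech/non-speech decisions is left to the application. One common approach is hysteresis thresholding, sketched below. The function name and threshold values are illustrative and not part of the Cobra API:

```c
#include <stdbool.h>

// Hysteresis thresholding: enter the "voiced" state only above ON_THRESHOLD
// and leave it only below OFF_THRESHOLD, which suppresses rapid toggling when
// the probability hovers near a single cutoff. Values here are illustrative.
#define ON_THRESHOLD  (0.7f)
#define OFF_THRESHOLD (0.3f)

bool update_vad_state(bool was_voiced, float is_voiced) {
    if (was_voiced) {
        return is_voiced >= OFF_THRESHOLD;
    }
    return is_voiced >= ON_THRESHOLD;
}
```

Call this once per frame with the probability produced by pv_cobra_process, carrying the returned state into the next call.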

Demo

For the Cobra Voice Activity Detection SDK, we offer demo applications that show how to run the VAD engine on real-time audio streams (i.e., microphone input) and on audio files.
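The file-based demo reads 16-bit PCM from WAV files. As a rough illustration of that format (assuming the canonical 44-byte PCM WAV header, which not every WAV file uses), the sample rate is stored at byte offset 24 as a little-endian 32-bit integer:

```c
#include <stdint.h>

// Read the sample rate from a canonical 44-byte PCM WAV header.
// Assumes the simple header layout; real files may carry extra chunks,
// in which case a proper RIFF chunk walk is needed instead.
uint32_t wav_sample_rate(const uint8_t *header) {
    // Little-endian uint32 at byte offset 24.
    return (uint32_t) header[24]
         | ((uint32_t) header[25] << 8)
         | ((uint32_t) header[26] << 16)
         | ((uint32_t) header[27] << 24);
}
```

A file whose sample rate does not match pv_cobra_sample_rate() must be resampled before its frames are passed to pv_cobra_process.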

Setup

  1. Clone the Cobra Voice Activity Detection repository from GitHub using HTTPS:
git clone --recurse-submodules https://github.com/Picovoice/cobra.git
  2. Build the microphone demo:
cd cobra
cmake -S demo/c/. -B demo/c/build
cmake --build demo/c/build --target cobra_demo_mic

Usage

To see the usage options for the demo:

./demo/c/build/cobra_demo_mic

Ensure you have a working microphone connected to your system, then run the command below to detect voice activity, substituting the placeholders for your platform:

./demo/c/build/cobra_demo_mic \
-l lib/${PLATFORM}/${ARCH}/libpv_cobra.so \
-a ${ACCESS_KEY} \
-d ${AUDIO_DEVICE_INDEX}

For more information on our Cobra Voice Activity Detection demos for C, head over to our GitHub repository.

Resources

API

  • Cobra Voice Activity Detection C API Docs

GitHub

  • Cobra Voice Activity Detection C Demos on GitHub

Benchmark

  • Voice Activity Benchmark

Further Reading

  • Yet Another Voice Activity Detection Engine
