Eagle Speaker Recognition Engine
C Quick Start

Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64, arm64)
  • Raspberry Pi (3, 4, 5)

Requirements

  • C99-compatible compiler
  • CMake (3.13+)
  • Windows only: MinGW is required to build the demo

Picovoice Account & AccessKey

Sign up or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Overview

Eagle Speaker Recognition consists of two distinct steps: Enrollment and Recognition. In the Enrollment step, Eagle analyzes a series of utterances from a particular speaker to learn their unique voiceprint, producing a Profile object that can be stored and used later for inference. During the Recognition step, Eagle compares incoming frames of audio to the voiceprints of all enrolled speakers in real time to determine the similarity between them.

Quick Start

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/eagle.git

Usage

  1. Include the public header files (picovoice.h and pv_eagle.h).
  2. Link the project to an appropriate precompiled library for the target platform and load it.

Speaker Enrollment

  1. Construct an instance of the profiler:
const char *access_key = "${ACCESS_KEY}";
const char *model_path = "${MODEL_PATH}";
const char *device = "best";
const int32_t min_enrollment_chunks = 1;
const float voice_threshold = 0.3f;

pv_eagle_profiler_t *eagle_profiler = NULL;
pv_status_t status = pv_eagle_profiler_init(
        access_key,
        model_path,
        device,
        min_enrollment_chunks,
        voice_threshold,
        &eagle_profiler);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  2. Pass audio to the pv_eagle_profiler_enroll function to enroll a speaker:
extern const int16_t *get_next_enroll_audio_frame(int32_t frame_length);
extern bool has_next_enroll_audio_frame(int32_t frame_length);

const int32_t frame_length = pv_eagle_profiler_frame_length();

float enroll_percentage = 0.0f;
while (enroll_percentage < 100.0f && has_next_enroll_audio_frame(frame_length)) {
    status = pv_eagle_profiler_enroll(
            eagle_profiler,
            get_next_enroll_audio_frame(frame_length),
            &enroll_percentage);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
}

status = pv_eagle_profiler_flush(
        eagle_profiler,
        &enroll_percentage);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

int32_t profile_size_bytes = 0;
status = pv_eagle_profiler_export_size(eagle_profiler, &profile_size_bytes);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

void *speaker_profile = malloc(profile_size_bytes);
status = pv_eagle_profiler_export(
        eagle_profiler,
        speaker_profile);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  3. Release resources explicitly when done with the profiler:
pv_eagle_profiler_delete(eagle_profiler);
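
The exported profile is an opaque byte buffer, so storing it for later recognition sessions is plain binary I/O. A minimal sketch; the helpers and file path below are hypothetical and not part of the Eagle API:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Write an exported speaker profile to disk; returns 0 on success. */
static int save_profile(const char *path, const void *profile, int32_t size) {
    FILE *f = fopen(path, "wb");
    if (f == NULL) return -1;
    size_t written = fwrite(profile, 1, (size_t) size, f);
    fclose(f);
    return written == (size_t) size ? 0 : -1;
}

/* Read a profile back into a heap buffer; caller frees the result.
 * On success, *size receives the profile size in bytes. */
static void *load_profile(const char *path, int32_t *size) {
    FILE *f = fopen(path, "rb");
    if (f == NULL) return NULL;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    void *buf = malloc((size_t) n);
    if (buf != NULL && fread(buf, 1, (size_t) n, f) != (size_t) n) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    if (buf != NULL) *size = (int32_t) n;
    return buf;
}
```

A loaded buffer can then be passed to the recognition step in place of a freshly exported one.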

Speaker Recognition

  1. Construct an instance of the engine:
pv_eagle_t *eagle = NULL;
pv_status_t status = pv_eagle_init(
        access_key,
        model_path,
        device,
        voice_threshold,
        &eagle);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  2. Pass audio frames to the pv_eagle_process function to perform speaker recognition:
extern const int16_t *get_next_audio_frame(int32_t min_process_samples);

int32_t min_process_samples = 0;
status = pv_eagle_process_min_audio_length_samples(
        eagle,
        &min_process_samples);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

float *scores = NULL;
while (true) {
    status = pv_eagle_process(
            eagle,
            get_next_audio_frame(min_process_samples),
            min_process_samples,
            &speaker_profile,
            1,
            &scores);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
}
  3. Release resources explicitly when done with the engine:
pv_eagle_delete(eagle);
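
Each call to pv_eagle_process yields one score per enrolled profile, with higher scores meaning greater similarity to that speaker. A typical way to act on them is to pick the highest-scoring speaker, subject to a minimum threshold; the helper and threshold value below are illustrative examples, not part of the Eagle API, and the threshold should be tuned for your application:

```c
#include <stdint.h>

/* Returns the index of the enrolled speaker with the highest score, or -1
 * if no score clears the threshold (i.e. no confident match). */
static int32_t best_speaker(const float *scores, int32_t num_speakers, float threshold) {
    int32_t best = -1;
    float best_score = threshold;
    for (int32_t i = 0; i < num_speakers; i++) {
        if (scores[i] >= best_score) {
            best_score = scores[i];
            best = i;
        }
    }
    return best;
}
```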

Demo

For the Eagle Speaker Recognition SDK, we offer demo applications that demonstrate how to use the speaker recognition engine on real-time audio streams (i.e. microphone input) and audio files.

Setup

  1. Clone the Eagle Speaker Recognition repository from GitHub using HTTPS:
git clone --recurse-submodules https://github.com/Picovoice/eagle.git
  2. Build the microphone demo:
cd eagle
cmake -S demo/c/ -B demo/c/build
cmake --build demo/c/build --target eagle_demo_mic

Usage

To see the usage options for the demo:

./demo/c/build/eagle_demo_mic

Ensure you have a working microphone connected to your system, then run one of the following commands: the first enrolls a speaker and writes the resulting profile, and the second performs speaker recognition using an existing profile:

./demo/c/build/eagle_demo_mic \
-l lib/${PLATFORM}/${ARCH}/libpv_eagle.${LIB_EXTENSION} \
-m lib/common/eagle_params.pv \
-a ${ACCESS_KEY} \
-e ${OUTPUT_PROFILE_PATH}

or

./demo/c/build/eagle_demo_mic \
-l lib/${PLATFORM}/${ARCH}/libpv_eagle.${LIB_EXTENSION} \
-m lib/common/eagle_params.pv \
-a ${ACCESS_KEY} \
-i ${INPUT_PROFILE_PATH}

For more information on our Eagle Speaker Recognition demos for C, head over to our GitHub repository.

Resources

API

  • Eagle C API Docs

GitHub

  • Eagle C Demos on GitHub

Benchmark

  • Speaker Recognition Benchmark
