Rhino Speech-to-Intent
iOS Quick Start

Platforms

  • iOS (16.0+)

Requirements

  • Xcode
  • Swift Package Manager or CocoaPods

Picovoice Account & AccessKey

Sign up for or log in to Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Xcode.

  2. Import the Rhino-iOS package into your project.

To import the package using SPM, open your project's Package Dependencies in Xcode and add:

https://github.com/Picovoice/rhino.git

To import it into your iOS project using CocoaPods, add the following line to your Podfile:

pod 'Rhino-iOS'

Then, run the following from the project directory:

pod install

  3. Add the following to the app's Info.plist file to enable recording with an iOS device's microphone:
<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

Usage

Include the context file (either a pre-built context file (.rhn) from the Rhino Speech-to-Intent GitHub Repository or a custom context created with the Picovoice Console) in the app as a bundled resource (found under Build Phases > Copy Bundle Resources). Then, get its path from the app bundle:

let contextPath = Bundle.main.path(forResource: "${CONTEXT_FILE}", ofType: "rhn")

Create an instance of RhinoManager that infers custom commands:

import Rhino

do {
    let rhinoManager = try RhinoManager(
        accessKey: "${ACCESS_KEY}",
        contextPath: contextPath,
        onInferenceCallback: inferenceCallback)
} catch { }

The onInferenceCallback parameter is a function that will be invoked when Rhino Speech-to-Intent has returned an inference result:

let inferenceCallback: ((Inference) -> Void) = { inference in
    if inference.isUnderstood {
        let intent: String = inference.intent
        let slots: Dictionary<String, String> = inference.slots
        // take action based on inferred intent and slot values
    } else {
        // handle unsupported commands
    }
}

Start audio capture:

do {
    try rhinoManager.process()
} catch { }

Once an inference has been made, the inferenceCallback will be invoked and audio capture will stop automatically.
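Because capture stops after each inference, an app that should keep listening for follow-up commands can restart processing from inside the callback. A minimal sketch, assuming `rhinoManager` is retained as a property (rather than a local) so the closure can reference it:

```swift
// Sketch: keep listening after each inference.
// Assumes `rhinoManager` is stored as a property on the enclosing type.
let continuousCallback: ((Inference) -> Void) = { inference in
    if inference.isUnderstood {
        print("Intent: \(inference.intent), slots: \(inference.slots)")
    }
    // RhinoManager stops audio capture once an inference is returned;
    // call process() again to listen for the next command.
    do {
        try rhinoManager.process()
    } catch {
        print("Failed to restart audio capture: \(error)")
    }
}
```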

Release resources explicitly when done with Rhino Speech-to-Intent:

rhinoManager.delete()

To use your own audio processing pipeline, check out the Low-Level Rhino API.
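For reference, the low-level API follows the same pattern as other Picovoice SDKs: create a `Rhino` instance and feed it frames of 16-bit, 16 kHz, single-channel PCM until an inference is finalized. A rough sketch, where `nextAudioFrame()` is a hypothetical function standing in for your own audio pipeline, and the `Rhino` class API (`process(pcm:)`, `getInference()`, `delete()`) is as published in the Rhino GitHub repository:

```swift
import Rhino

// Sketch of the low-level API with a caller-supplied audio pipeline.
// nextAudioFrame() is hypothetical: it must return Rhino.frameLength
// samples of 16-bit, 16 kHz, single-channel PCM per call.
do {
    let rhino = try Rhino(
        accessKey: "${ACCESS_KEY}",
        contextPath: contextPath)

    while true {
        let frame: [Int16] = nextAudioFrame()
        // process() returns true once Rhino has finalized an inference
        let isFinalized = try rhino.process(pcm: frame)
        if isFinalized {
            let inference = try rhino.getInference()
            if inference.isUnderstood {
                // take action based on inference.intent and inference.slots
            }
            break
        }
    }

    rhino.delete()
} catch { }
```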

Custom Contexts

Create custom contexts with the Picovoice Console. Download the custom context file (.rhn) and include it in the app as a bundled resource (found under Build Phases > Copy Bundle Resources).

Alternatively, if the context file is deployed to the device with a different method, the absolute path to the file on device can be used.
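For example, if the context file is downloaded into the app's documents directory at runtime, its absolute path can be passed as `contextPath` directly. A short sketch using Foundation's `FileManager` (the file name is illustrative):

```swift
import Foundation

// Sketch: resolve the absolute path of a context file deployed
// outside the app bundle. "my_context.rhn" is an illustrative name.
let documentsUrl = FileManager.default.urls(
    for: .documentDirectory,
    in: .userDomainMask)[0]
let contextPath = documentsUrl
    .appendingPathComponent("my_context.rhn")
    .path
```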

Non-English Languages

Use the corresponding model file (.pv) to infer non-English commands. The model files for all supported languages are available on the Rhino Speech-to-Intent GitHub repository.

Pass in the model file using the modelPath input argument to change the inference language:

let modelPath = Bundle.main.path(forResource: "${MODEL_FILE}", ofType: "pv")

do {
    let rhinoManager = try RhinoManager(
        accessKey: "${ACCESS_KEY}",
        contextPath: contextPath,
        modelPath: modelPath,
        onInferenceCallback: inferenceCallback)
} catch { }

Alternatively, if the model file is deployed to the device with a different method, the absolute path to the file on device can be used.

Demo

For the Rhino Speech-to-Intent iOS SDK, we offer demo applications that demonstrate how to use the Speech-to-Intent engine on real-time audio streams (i.e. microphone input).

Setup

Clone the Repository:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Install dependencies:

cd rhino/demo/ios/RhinoDemo
pod install

  2. Open the RhinoDemo.xcworkspace.

  3. Replace "${YOUR_ACCESS_KEY_HERE}" in the file ContentView.swift with a valid AccessKey.

  4. Go to Product > Scheme and select the scheme for the language you would like to demo (e.g. esDemo -> Spanish Demo, deDemo -> German Demo).

  5. Run the demo on a simulator or a connected iOS device.

For more information on our Rhino Speech-to-Intent demos for iOS, head over to our GitHub repository.

Resources

Package

  • Rhino-iOS on Cocoapods

API

  • Rhino-iOS API Docs

GitHub

  • Rhino Speech-to-Intent iOS SDK on GitHub
  • Rhino Speech-to-Intent iOS Demos on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Siri Gets a Barista Job: Adding Offline Voice AI to a SwiftUI App
