Support
Need help? You're in the right place.
For questions, feedback, or anything else - send an email and you'll hear back shortly.
msafeworks@gmail.com →

Bug reports
Found something broken? Open an issue on GitHub with steps to reproduce and your macOS version.
Open an issue →

Frequently asked questions
Common questions and troubleshooting tips.
Why does macOS ask for Screen Recording permission?
Robota uses ScreenCaptureKit to capture system audio (other meeting participants). This requires the "Screen & System Audio Recording" permission on macOS. Robota only captures audio - it never records your screen or takes screenshots. On macOS Sequoia (15+), Apple may ask you to re-confirm this permission monthly.
I'm hearing echo in my transcription. How do I fix it?
Robota mixes both audio streams (mic + system) 50/50 before transcription, which eliminates most echo. If you still see duplicated lines, make sure you're using headphones - without them, your speakers bleed into your mic, and that feedback can produce duplicates even with the mixing approach.
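The 50/50 mix described above amounts to averaging the two streams sample by sample. A minimal sketch (Robota's real pipeline works on Core Audio buffers; the frame values and list representation here are purely illustrative):

```python
def mix_streams(mic, system):
    """Average mic and system frames sample-by-sample (a 50/50 mix)."""
    if len(mic) != len(system):
        raise ValueError("streams must be the same length")
    return [0.5 * m + 0.5 * s for m, s in zip(mic, system)]

mic_frames = [0.2, -0.4, 0.8]     # your voice
system_frames = [0.6, 0.0, -0.2]  # other participants
mixed = mix_streams(mic_frames, system_frames)
# mixed is approximately [0.4, -0.2, 0.3]
```

Because both sources are halved before summing, a voice that appears in both streams (your speakers bleeding into your mic) still ends up at roughly full level, which is why headphones matter.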
The speech model is downloading. How long does it take?
Apple's SpeechAnalyzer models are managed by the system and downloaded on demand. The first time you use a language, macOS needs to download the model (typically a few hundred MB). This happens in the background. If the download is slow, check your internet connection and try again - subsequent uses will be instant since the model is cached.
How do I set up Ollama for AI summaries and chat?
- Download and install Ollama
- Pull a model: run ollama pull llama3.2 in Terminal
- Make sure Ollama is running (it shows in your menu bar)
- In Robota settings, set Summarization Provider to "Ollama"
Apple Intelligence is used by default and requires zero setup. Ollama is optional for users who want larger context windows (16K vs 4K tokens) for longer meetings.
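Once Ollama is running, it serves a local HTTP API on port 11434. The sketch below builds a request body in the shape Ollama's chat endpoint expects; the model name, prompt text, and the idea that a transcript is sent this way are illustrative assumptions, not Robota's actual prompts:

```python
import json

def build_ollama_request(transcript, model="llama3.2"):
    """Build a request body for Ollama's chat endpoint (POST /api/chat)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Summarize this meeting transcript."},
            {"role": "user", "content": transcript},
        ],
        "stream": False,  # request one complete JSON response, not a stream
    }

payload = build_ollama_request("Alice: let's ship Friday. Bob: agreed.")
body = json.dumps(payload)  # POST this to http://localhost:11434/api/chat
```

The 16K-token context window mentioned above is a property of the model configuration Ollama runs, not of the request itself.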
How do I set up llama.cpp for AI summaries and chat?
- Build llama.cpp (or install it via brew install llama.cpp)
- Download a GGUF model from Hugging Face
- Start the server: llama-server -m <model.gguf> -c 16384 --port 8080
- In Robota settings, set Summarization Provider to "llama.cpp" and confirm the endpoint URL (default http://localhost:8080)
Robota talks to llama.cpp through its OpenAI-compatible /v1/chat/completions endpoint, so any GGUF model the server can run works. The context window is whatever you started the server with via -c.
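Because the endpoint follows the OpenAI chat-completions convention, any OpenAI-style client can talk to it. A minimal sketch that constructs such a request (the prompt and max_tokens value are illustrative, and the request is only built here, not sent):

```python
import json
import urllib.request

def chat_completion_request(endpoint, messages, max_tokens=512):
    """Build a POST to llama-server's OpenAI-compatible chat endpoint."""
    payload = {"messages": messages, "max_tokens": max_tokens}
    return urllib.request.Request(
        endpoint + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_completion_request(
    "http://localhost:8080",
    [{"role": "user", "content": "Summarize these meeting notes: ..."}],
)
# urllib.request.urlopen(req) would send it while llama-server is running
```

Note that the -c 16384 flag in the setup step is what actually sets the context window; the request above just has to fit inside it.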
Can other meeting participants see that I'm recording?
Robota captures audio passively at the macOS system mixer level via ScreenCaptureKit. It does not inject into or interact with the meeting app in any way. Other participants cannot detect that Robota is running. Please ensure you comply with local laws regarding recording consent.
How do I export notes to Obsidian?
In Robota settings, set your Obsidian vault path and subfolder (defaults to Meetings/Robota). After transcription, click the "Obsidian" button in the review toolbar to export. You can also enable auto-save to export automatically after every meeting. Notes include YAML frontmatter, formatted transcript, summary (if generated), and bookmarks.
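The exported note is ordinary Markdown with YAML frontmatter, so it slots into any Obsidian vault. A rough sketch of that shape - the exact frontmatter keys and section headings Robota writes are assumptions here, not its actual output format:

```python
from datetime import date

def build_note(title, transcript, summary=None, when=None):
    """Assemble a Markdown note: YAML frontmatter, then summary, then transcript."""
    when = when or date.today().isoformat()
    lines = ["---", f"title: {title}", f"date: {when}", "source: Robota", "---", ""]
    if summary:
        lines += ["## Summary", summary, ""]
    lines += ["## Transcript", transcript]
    return "\n".join(lines)

note = build_note("Weekly sync", "Alice: hello...", summary="Shipped v1.2.",
                  when="2025-01-15")
```

Frontmatter is what lets Obsidian index the note's title and date in its properties panel and in Dataview-style queries.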
What macOS version do I need?
Robota requires macOS 26 (Tahoe) or later. This is because it uses Apple's SpeechAnalyzer framework for on-device transcription, which is only available on macOS 26+. Both Apple Silicon and Intel Macs are supported.
Where is my data stored?
Audio files are written to a temporary directory and automatically deleted after transcription. Transcripts and summaries exist only in memory during the app session. Settings are stored at ~/Library/Application Support/com.worldtiki.robota/settings.json. If you export to Obsidian, markdown files are saved to your vault. No data is ever sent to any server. See the privacy policy for full details.
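Since settings live in a single JSON file at the path above, you can inspect or back them up directly. A small sketch (the key names inside the file are not documented here, so any keys you find are implementation details that may change):

```python
import json
from pathlib import Path

# Robota's settings location, per the support text above
SETTINGS_PATH = (Path.home() / "Library" / "Application Support"
                 / "com.worldtiki.robota" / "settings.json")

def load_settings(path=SETTINGS_PATH):
    """Return the settings dict, or {} if the file doesn't exist yet."""
    if not path.exists():
        return {}
    return json.loads(path.read_text())
```

Deleting this file resets Robota to its defaults; exported Obsidian notes are unaffected since they live in your vault.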