private and local LLM chat
the simplest way to use private LLMs: runs models on-device, optimized for Apple silicon, so your chats stay fully offline and private even without an internet connection.
works on every Apple platform: iOS, iPadOS, macOS, and visionOS.
personalize: adjust the theme, fonts, and system prompt in the app.
Shortcuts: chat with fullmoon, or get output from a local model to use with other actions.
free, open source, private.
Version 1.2.0 • Jan 28, 2025
added support for the reasoning model DeepSeek-R1-Distill-Qwen-1.5B in 4-bit and 8-bit quantizations