dipampaul17/KVSplit
Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit keys & 4-bit values, reducing memory by 59% with <1% quality loss. Includes benchmarking, visualization, and one-command setup. Optimized for M1/M2/M3 Macs with Metal support.
Language: Python
#apple_silicon #generative_ai #kv_cache #llama_cpp #llm #m1 #m2 #m3 #memory_optimization #metal #optimization #quantization
Stars: 222 Issues: 1 Forks: 5
https://github.com/dipampaul17/KVSplit
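The reported ~59% saving follows from the bit-width arithmetic alone. Below is a back-of-the-envelope sketch, assuming a hypothetical 7B-class model shape (32 layers, 32 KV heads, head dim 128) and a rough 7% overhead for the per-block scales that llama.cpp-style quant formats carry; none of these numbers come from the repo itself.

```python
# KV-cache sizing sketch: FP16 baseline vs. 8-bit keys + 4-bit values.
# Model dimensions below are assumed for illustration, not taken from KVSplit.

n_layers, n_kv_heads, head_dim = 32, 32, 128   # hypothetical 7B-class shape
ctx = 4096                                      # context length in tokens

def kv_bytes(bits_k, bits_v, overhead=1.0):
    # One key vector + one value vector per head, per layer, per token.
    per_token = n_layers * n_kv_heads * head_dim * (bits_k + bits_v) / 8
    return ctx * per_token * overhead

fp16 = kv_bytes(16, 16)
# ~7% overhead approximates block scale/zero-point metadata in quant formats
k8v4 = kv_bytes(8, 4, overhead=1.07)

saving = 1 - k8v4 / fp16
print(f"FP16 cache: {fp16 / 2**20:.0f} MiB")
print(f"K8V4 cache: {k8v4 / 2**20:.0f} MiB (saving ~{saving:.0%})")
```

The pure bit ratio (12 vs. 32 bits per element) gives 62.5%; format overhead pulls it down to roughly the 59% the project reports. For context, upstream llama.cpp exposes mixed cache precision via its `--cache-type-k` / `--cache-type-v` flags.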
taigrr/spank
Slap your MacBook, it yells back. Uses Apple Silicon accelerometer via IOKit HID.
Language: Go
#accelerometer #apple_silicon #fun #go #iokit #macos
Stars: 830 Issues: 1 Forks: 35
https://github.com/taigrr/spank
RunanywhereAI/RCLI
Talk to your Mac, query your docs, no cloud required. On-device voice AI + RAG
Language: C++
#ai_assistant #apple_silicon #kitten_tts #kokoro_tts #lfm2 #llama_cpp #llm #local_ai #metal #on_device_ai #parakeet #qwen3 #rag #speech_to_text #text_to_speech #tool_calling #voice_assistant
Stars: 627 Issues: 10 Forks: 19
https://github.com/RunanywhereAI/RCLI
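The retrieval half of a RAG pipeline like this boils down to embedding the query, ranking document chunks by similarity, and feeding the top hits to the LLM. The sketch below illustrates that step only, with a toy bag-of-words "embedding"; RCLI's actual embedding models, chunking, and prompt assembly are not shown here.

```python
# Generic sketch of the retrieval step in a local RAG pipeline.
# Real pipelines use neural embeddings; Counter-based bag-of-words
# stands in here purely for illustration.
from collections import Counter
import math

docs = [
    "KV cache quantization reduces memory on Apple Silicon",
    "Voice assistants transcribe speech and answer questions",
    "RAG retrieves relevant document chunks before generation",
]

def embed(text):
    # Toy embedding: token counts. A real system calls an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank all chunks by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("how does rag retrieve document chunks"))
```

The retrieved chunks would then be prepended to the user's question in the LLM prompt; the voice layer (speech-to-text in, text-to-speech out) wraps around that same loop.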