GitHub Trends
10.4K subscribers
15.5K links
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#python #apple_silicon #audio_processing #mlx #multimodal #speech_recognition #speech_synthesis #speech_to_text #text_to_speech #transformers

MLX-Audio is a text-to-speech (TTS) and speech-to-speech (STS) library built on Apple's MLX framework, optimized for Apple Silicon (M-series) chips, making it fast and efficient. You can choose from different languages and voices, and adjust the speaking speed. It also includes a web interface with a 3D audio visualization and support for playing your own files. This makes it useful for audiobooks, interactive media, and personal projects: it is easy to use and produces high-quality audio quickly.
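A minimal usage sketch based on the project's README; the module path (`mlx_audio.tts.generate`), the model name, and the flags shown are assumptions drawn from the repository's documentation and may change between releases:

```shell
# Install (hardware acceleration requires an Apple Silicon Mac)
pip install mlx-audio

# Generate speech from text; model name and flags are illustrative --
# check the repo README for the currently supported models and options.
python -m mlx_audio.tts.generate \
  --model prince-canuma/Kokoro-82M \
  --text "Hello from MLX-Audio" \
  --speed 1.2 \
  --file_prefix hello
```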

https://github.com/Blaizzy/mlx-audio
#cplusplus #apple_silicon #bsd #c_plus_plus #cmake #floss #game #gplv2 #json #linux #lua #macos_app #python #strategy #windows

Widelands is a free, open-source real-time strategy game inspired by Settlers II. You lead a small clan: build road networks, gather resources such as wood and gold, and play one of four unique tribes, trading or fighting your way through single-player campaigns and multiplayer matches. Prebuilt downloads are available for Windows, macOS, and Linux, or you can compile from source with CMake and the provided build scripts. The result is deep, replayable empire-building at no cost, with friends or against the AI.

https://github.com/widelands/widelands
#python #apple_silicon #florence2 #idefics #llava #llm #local_ai #mlx #molmo #paligemma #pixtral #vision_framework #vision_language_model #vision_transformer

MLX-VLM lets you run, chat with, and fine-tune Vision Language Models (VLMs), as well as audio and video models, on your Mac using MLX. Install with `pip install -U mlx-vlm`. Use the CLI for quick text/image/audio generation (e.g., `mlx_vlm.generate --model ... --image photo.jpg`), a Gradio UI for chat, Python scripts, or a FastAPI server with OpenAI-compatible endpoints supporting multiple images and videos. Features like TurboQuant cut KV cache memory by 76%, and LoRA/QLoRA fine-tuning works on consumer hardware. You get powerful multimodal AI locally: fast, memory-efficient, with no cloud costs, ideal for Mac users tweaking models affordably.
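Expanding the CLI call mentioned above into a full invocation; only `--model` and `--image` are confirmed by the summary, while the specific model name and the remaining flags are illustrative assumptions based on the repository's examples:

```shell
pip install -U mlx-vlm

# Describe an image with a quantized VLM; the model name and
# --max-tokens value are illustrative -- see the repo for the
# current list of supported models and CLI options.
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --prompt "Describe this image." \
  --image photo.jpg \
  --max-tokens 100
```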

https://github.com/Blaizzy/mlx-vlm