labmlai/annotated_deep_learning_paper_implementations
🧑‍🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), GANs (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Language: Python
Total stars: 58051
Stars trend:
19 Jan 2025
9pm ▏ +1
10pm █▏ +9
11pm ▌ +4
20 Jan 2025
12am ▍ +3
1am ▍ +3
2am ▋ +5
3am █▏ +9
4am █▎ +10
5am █▏ +9
6am █▏ +9
7am █ +8
8am ▋ +5
#python
#attention, #deeplearning, #deeplearningtutorial, #gan, #literateprogramming, #lora, #machinelearning, #neuralnetworks, #optimizers, #pytorch, #reinforcementlearning, #transformer, #transformers
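The eighth-block bars in these trend charts appear to encode 8 stars per full block "█", with the partial blocks ▏▎▍▌▋▊▉ covering remainders (so +9 renders as █▏, +4 as ▌). A minimal Python sketch of that inferred scale (the unit of 8 is reverse-engineered from the chart data, not a documented convention of the digest):

```python
# Render hourly star deltas as eighth-block bars, mimicking the trend
# charts above. The scale (8 stars per full block) is inferred from the
# chart, not a documented convention.
BLOCKS = " ▏▎▍▌▋▊▉█"  # index 0..8 = zero to eight eighths of a block


def bar(delta: int, unit: int = 8) -> str:
    """Return full blocks for each `unit` stars plus one partial block
    for the remainder."""
    full, rem = divmod(delta, unit)
    partial = BLOCKS[round(rem * 8 / unit)] if rem else ""
    return "█" * full + partial


for hour, delta in [("9pm", 1), ("10pm", 9), ("11pm", 4)]:
    print(f"{hour:>4} {bar(delta)} +{delta}")
```

With `unit=8` this reproduces every bar shown in both charts, e.g. the +12 entry at 4am comes out as █▌.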
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Language: Python
Total stars: 34142
Stars trend:
20 Jan 2025
8pm ▎ +2
9pm ▎ +2
10pm █▏ +9
11pm ▍ +3
21 Jan 2025
12am ▎ +2
1am █ +8
2am ▉ +7
3am ▉ +7
4am █▌ +12
5am ▊ +6
6am █▎ +10
7am █▍ +11
#python
#amd, #cuda, #gpt, #hpu, #inference, #inferentia, #llama, #llm, #llmserving, #llmops, #mlops, #modelserving, #pytorch, #rocm, #tpu, #trainium, #transformer, #xpu