will-thompson-k/tldr-transformers
The "tl;dr" on a few notable transformer papers (pre-2022).
#nlp #deep_learning #notes #transformers #attention #transfer_learning #language_models #language_model #bert #open_ai #huggingface #huggingface_transformer #gpt_3
Stars: 90 Issues: 2 Forks: 2
https://github.com/will-thompson-k/tldr-transformers
lucidrains/fast-transformer-pytorch
Implementation of Fast Transformer in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #deep_learning #transformers
Stars: 90 Issues: 0 Forks: 8
https://github.com/lucidrains/fast-transformer-pytorch
lucidrains/RETRO-pytorch
Implementation of RETRO, DeepMind's retrieval-based attention network, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #deep_learning #retrieval #transformers
Stars: 102 Issues: 0 Forks: 2
https://github.com/lucidrains/RETRO-pytorch
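RETRO augments a decoder with text chunks fetched from a large database by nearest-neighbour search, which the model then cross-attends to. A minimal sketch of just the retrieval step, with hypothetical names and brute-force cosine similarity (the actual repo uses frozen BERT embeddings and an approximate-NN index):

```python
import numpy as np

def nearest_chunks(query_emb, chunk_embs, k=2):
    """Return indices of the k database chunks most similar to the query.

    Cosine similarity over pre-computed embeddings; a real system would
    use an approximate nearest-neighbour index instead of brute force.
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:k]

# The retrieved chunks are then fed to cross-attention layers
# ("chunked cross-attention" in the paper) inside the decoder.
```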
lucidrains/PaLM-pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
Language: Python
#artificial_general_intelligence #attention_mechanism #deep_learning #transformers
Stars: 147 Issues: 0 Forks: 8
https://github.com/lucidrains/PaLM-pytorch
lucidrains/flamingo-pytorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #deep_learning #transformers #visual_question_answering
Stars: 131 Issues: 0 Forks: 3
https://github.com/lucidrains/flamingo-pytorch
lucidrains/CoCa-pytorch
Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #contrastive_learning #deep_learning #image_to_text #multimodal #transformers
Stars: 90 Issues: 0 Forks: 2
https://github.com/lucidrains/CoCa-pytorch
lucidrains/parti-pytorch
Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #deep_learning #text_to_image #transformers
Stars: 143 Issues: 0 Forks: 2
https://github.com/lucidrains/parti-pytorch
lucidrains/audiolm-pytorch
Implementation of AudioLM, a Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #audio_synthesis #deep_learning #transformers
Stars: 121 Issues: 1 Forks: 1
https://github.com/lucidrains/audiolm-pytorch
lucidrains/make-a-video-pytorch
Implementation of Make-A-Video, Meta AI's new SOTA text-to-video generator, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #axial_convolutions #deep_learning #text_to_video
Stars: 331 Issues: 1 Forks: 15
https://github.com/lucidrains/make-a-video-pytorch
lucidrains/robotic-transformer-pytorch
Implementation of RT1 (Robotic Transformer) in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #robotics #transformers
Stars: 128 Issues: 1 Forks: 3
https://github.com/lucidrains/robotic-transformer-pytorch
lucidrains/muse-maskgit-pytorch
Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #text_to_image #transformers
Stars: 119 Issues: 1 Forks: 6
https://github.com/lucidrains/muse-maskgit-pytorch
lucidrains/musiclm-pytorch
Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch
#artificial_intelligence #attention_mechanisms #deep_learning #music_synthesis #transformers
Stars: 277 Issues: 1 Forks: 8
https://github.com/lucidrains/musiclm-pytorch
lucidrains/toolformer-pytorch
Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI
Language: Python
#api_calling #artificial_intelligence #attention_mechanisms #deep_learning #transformers
Stars: 419 Issues: 2 Forks: 13
https://github.com/lucidrains/toolformer-pytorch
lucidrains/recurrent-memory-transformer-pytorch
Implementation of Recurrent Memory Transformer, a NeurIPS 2022 paper, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #long_context #memory #recurrence #transformers
Stars: 223 Issues: 0 Forks: 4
https://github.com/lucidrains/recurrent-memory-transformer-pytorch
lucidrains/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #learned_tokenization #long_context #transformers
Stars: 204 Issues: 0 Forks: 10
https://github.com/lucidrains/MEGABYTE-pytorch
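MEGABYTE handles million-byte sequences by grouping bytes into fixed-size patches: a large global transformer runs over patch embeddings while a small local model predicts bytes within each patch. The patching step itself is essentially a reshape; a toy sketch with hypothetical sizes:

```python
import numpy as np

def patchify(byte_embs, patch_size):
    """Group per-byte embeddings into patches for the global model.

    (seq_len, d) -> (seq_len // patch_size, patch_size * d): each patch's
    byte embeddings are concatenated into one global-model input vector.
    """
    n, d = byte_embs.shape
    assert n % patch_size == 0, "sequence length must be a patch multiple"
    return byte_embs.reshape(n // patch_size, patch_size * d)
```

The global model's cost then scales with the number of patches rather than the number of bytes, which is what makes byte-level modeling of very long sequences tractable.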
lucidrains/soundstorm-pytorch
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #audio_generation #deep_learning #non_autoregressive #transformers
Stars: 181 Issues: 0 Forks: 6
https://github.com/lucidrains/soundstorm-pytorch
kyegomez/LongNet
Implementation of the plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
Language: Python
#artificial_intelligence #attention #attention_is_all_you_need #attention_mechanisms #chatgpt #context_length #gpt3 #gpt4 #machine_learning #transformer
Stars: 381 Issues: 4 Forks: 55
https://github.com/kyegomez/LongNet
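LongNet's scaling trick is dilated attention: the sequence is split into segments and, within each segment, only every r-th position participates, shrinking each segment's score matrix by r². A single-head toy sketch for one (segment length, dilation) pair; the full model mixes several dilation rates so every position is covered, which this simplification skips:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dilated_attention(Q, K, V, segment_len=4, dilation=2):
    """Dilated attention for one (segment, dilation) configuration.

    Each segment keeps every `dilation`-th position, so the per-segment
    score matrix shrinks by dilation**2; positions not selected here are
    left zero and would be handled by other dilation rates in LongNet.
    """
    n, d = Q.shape
    out = np.zeros_like(Q)
    for start in range(0, n, segment_len):
        idx = np.arange(start, min(start + segment_len, n))[::dilation]
        scores = Q[idx] @ K[idx].T / np.sqrt(d)
        out[idx] = softmax(scores) @ V[idx]
    return out
```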
lucidrains/meshgpt-pytorch
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #mesh_generation #transformers
Stars: 195 Issues: 0 Forks: 7
https://github.com/lucidrains/meshgpt-pytorch
kyegomez/MultiModalMamba
A novel implementation fusing a ViT with Mamba into a fast, high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.
Language: Python
#ai #artificial_intelligence #attention_mechanism #machine_learning #mamba #ml #pytorch #ssm #torch #transformer_architecture #transformers #zeta
Stars: 264 Issues: 0 Forks: 9
https://github.com/kyegomez/MultiModalMamba
thu-ml/SageAttention
Quantized attention that achieves speedups of 2.1x and 2.7x over FlashAttention2 and xformers, respectively, without degrading end-to-end metrics across various models.
Language: Python
#attention #inference_acceleration #llm #quantization
Stars: 145 Issues: 6 Forks: 3
https://github.com/thu-ml/SageAttention
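The core idea in SageAttention is to quantize Q and K to INT8 for the score matmul, after "smoothing" K by subtracting its per-dimension mean (which leaves the softmax unchanged in exact arithmetic but shrinks K's value range). A rough numpy sketch with per-row symmetric quantization; illustrative only, since the real speedup comes from fused INT8 GPU kernels with per-block scales:

```python
import numpy as np

def int8_quantize(x):
    # Per-row symmetric quantization: the largest |value| per row maps to 127.
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)
    return np.round(x / scale).astype(np.int8), scale

def quantized_attention(Q, K, V):
    # Subtracting K's mean shifts every score in a row by the same constant,
    # so softmax is unchanged, but K quantizes with less error.
    K = K - K.mean(axis=0, keepdims=True)
    q8, qs = int8_quantize(Q)
    k8, ks = int8_quantize(K)
    # INT8 matmul accumulated in int32, then dequantized with the two scales.
    scores = (q8.astype(np.int32) @ k8.astype(np.int32).T) * (qs * ks.T)
    scores = scores / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V  # the softmax and P·V stay in floating point here
```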