mpaepper/llm_agents
Build agents which are controlled by LLMs
Language: Python
#deep_learning #langchain #llms #machine_learning
Stars: 303 Issues: 2 Forks: 10
https://github.com/mpaepper/llm_agents
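Agents of this kind typically run a ReAct-style loop: the LLM proposes a tool call, the tool executes, and its output is appended back into the prompt as an observation until the model emits a final answer. A minimal sketch of that loop under an assumed `Action: tool: input` output convention (`llm` and `tools` are hypothetical stand-ins, not this repo's API):

```python
import re

def run_agent(llm, tools, question, max_steps=5):
    """llm: callable(prompt) -> str; tools: dict mapping name -> callable(str) -> str."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(prompt)
        action = re.search(r"Action: (\w+): (.*)", reply)
        if action is None:  # no tool call, so treat the reply as the final answer
            return reply
        name, tool_input = action.groups()
        observation = tools[name](tool_input)  # run the chosen tool
        prompt += f"{reply}\nObservation: {observation}\n"  # feed the result back
    return "No final answer within max_steps."
```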
eugeneyan/open-llms
A list of open LLMs available for commercial use
#commercial #large_language_models #llm #llms
Stars: 286 Issues: 0 Forks: 18
https://github.com/eugeneyan/open-llms
PKU-Alignment/safe-rlhf
Safe-RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Language: Python
#ai_safety #alpaca #datasets #deepspeed #large_language_models #llama #llm #llms #reinforcement_learning #reinforcement_learning_from_human_feedback #rlhf #safe_reinforcement_learning #safe_reinforcement_learning_from_human_feedback #safe_rlhf #safety #transformers #vicuna
Stars: 279 Issues: 0 Forks: 14
https://github.com/PKU-Alignment/safe-rlhf
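The "constrained value alignment" in the title is a Lagrangian setup: a reward model scores helpfulness, a separate cost model scores harmfulness, and the policy maximizes expected reward subject to expected cost staying under a budget, with the multiplier learned during RL. A schematic sketch of those mechanics (illustrative names, not the repo's API):

```python
import torch

# Schematic PPO-Lagrangian mechanics for Safe RLHF (illustrative, not the repo's API):
# maximize E[reward] subject to E[cost] <= d, via L = reward - lambda * (cost - d).
log_lambda = torch.zeros(1, requires_grad=True)  # log-parameterized so lambda stays >= 0
lambda_opt = torch.optim.SGD([log_lambda], lr=1e-2)
cost_limit = 0.0  # d: allowed expected harmfulness score

def policy_loss(reward, cost):
    """Loss for the policy update: descend on -(reward - lambda * cost)."""
    lam = log_lambda.exp().detach()  # lambda is a constant during the policy step
    return -(reward - lam * cost).mean()

def lambda_step(mean_cost):
    """Dual update on a detached scalar: raise lambda when E[cost] > d."""
    lambda_opt.zero_grad()
    (-log_lambda.exp() * (mean_cost - cost_limit)).backward()
    lambda_opt.step()
```

Because lambda rises exactly when the measured cost exceeds the budget, harmful behavior is penalized more heavily precisely while the constraint is violated.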
MLGroupJLU/LLM-eval-survey
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
#benchmark #evaluation #large_language_models #llm #llms #model_assessment
Stars: 200 Issues: 1 Forks: 11
https://github.com/MLGroupJLU/LLM-eval-survey
hiyouga/FastEdit
🩹Editing large language models within 10 seconds⚡
Language: Python
#bloom #chatbots #chatgpt #falcon #gpt #large_language_models #llama #llms #pytorch #transformers
Stars: 295 Issues: 5 Forks: 28
https://github.com/hiyouga/FastEdit
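"Editing" here means knowledge editing in the ROME family: instead of fine-tuning, a single MLP weight matrix is patched with a rank-one update so that one key vector retrieves a new value while inputs orthogonal to the key are untouched. A conceptual sketch of such a patch in the identity-covariance special case (illustrative only, not FastEdit's API):

```python
import torch

def rank_one_edit(W, k, v_new):
    """Patch W (d_out x d_in) so the key vector k retrieves v_new.
    Identity-covariance special case of a ROME-style update; illustrative only."""
    v_old = W @ k  # value the key currently retrieves
    # W' = W + (v_new - v_old) k^T / (k . k): exact on k, zero on anything orthogonal to k
    return W + torch.outer(v_new - v_old, k) / (k @ k)

# Sanity check: the edited matrix maps k to v_new (up to float error).
W = torch.randn(4, 3)
k, v_new = torch.randn(3), torch.randn(4)
assert torch.allclose(rank_one_edit(W, k, v_new) @ k, v_new, atol=1e-4)
```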