Official release of InternLM2.5 base and chat models. 1M context support
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama2, Mistral, or Vicuna using Ollama.
[LLM] Train a small 26M-parameter GPT completely from scratch in 3 hours; inference and training run on a personal GPU!
Implementation for MatMul-free LM.
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
Awesome papers about unifying LLMs and KGs
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
Awesome things about LLM-powered agents. Papers / Repos / Blogs / ...
Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and FlanT5, with cutting-edge updates.
Building AI agents, atomically
This is a curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥
Overview of Japanese LLMs (日本語LLMまとめ)
AI-Driven Research Assistant: An advanced multi-agent system for automating complex research processes. Leveraging LangChain, OpenAI GPT, and LangGraph, this tool streamlines hypothesis generation, data analysis, visualization, and report writing. Perfect for researchers and data scientists seeking to enhance their workflow and productivity.
⚙️🦀 Build portable, modular & lightweight Fullstack Agents
KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.
A Home Assistant integration and model to control your smart home using a local LLM.
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.