
Why re-invent the wheel? #113

Open
TashaSkyUp opened this issue Nov 30, 2024 · 12 comments

Comments

@TashaSkyUp

Litellm and probably a few others have already done all of this; this energy and time would be better spent on litellm.

I mean, I guess there are a couple of possible reasons, but I don't see them mentioned anywhere.

@SermetPekin

You bring up a good point—tools like Litellm and others are already out there and do a great job. But honestly, history is full of success stories that started with someone "reinventing the wheel." Python, for example, came along when we already had C, Perl, and Java, yet it succeeded because it focused on simplicity and readability. It wasn’t just about what it did, but how it did it.

The thing is, progress doesn’t always come from doing something completely new. Sometimes it’s about taking an existing idea and iterating on it—making it a little better, a little different, or just more aligned with a specific need. And often, the act of building itself leads to a deeper understanding of the problem space and opens doors to innovation.

Sure, Litellm and others are great, but there’s always room for someone to come in with fresh eyes and create something unexpected. Supporting that kind of work isn’t just about the outcome—it’s about fostering curiosity, creativity, and growth. Even if the result isn’t a revolution, the process itself is invaluable. Who knows? Maybe this iteration will be the one that sparks something big.

@SermetPekin

Plus, these tools themselves are reinventing the wheel in a way. Litellm and the others often act as wrappers around powerful tools like OpenAI, Mistral, and others. But think about it: wasn't OpenAI also reinventing the wheel when it first made its mark? It wasn't the first AI company, and companies like Mistral built on those foundations, refining the approach, targeting specific needs, and pushing boundaries. Reinvention is just part of how progress works.

@andre0xFF

Looking into LiteLLM's source makes me reinvent the wheel

@gzquse

gzquse commented Nov 30, 2024

> Looking into LiteLLM's source makes me reinvent the wheel

Cool reply. Tools only become the tools because they fit a specific need.
Are there any highlights of this project with regard to that comparison?

@TashaSkyUp
Author

TashaSkyUp commented Nov 30, 2024 via email

@vemonet

vemonet commented Dec 2, 2024

Hi @TashaSkyUp, I am not a contributor here or to any of the existing libraries, but I am currently checking out the available solutions for easily querying different LLM providers.

I also like to look into other people's codebases and judge them, so I can give you some elements of a response from the point of view of a Python package developer.

First, my use case: all I want is a Python API for sending messages to LLM providers and getting a completion back.

Disclaimer: I have not (yet) used either of these two packages

aisuite code is simple and clean

The main point is that aisuite is light, simple, and focused on one task: providing a simple Python API for querying the completion APIs of LLM providers.

From the pyproject.toml it has zero mandatory dependencies; dependencies are optional and only needed when you want to query a specific LLM provider (e.g. mistralai when querying the Mistral provider), which is nice because it then provides a unified API over every provider's APIs and packages. Note they forgot to add the httpx dependency that is required for the fireworks provider

How to add a new provider is clear and simple

This looks like a decently built lib overall. It provides a few parent abstract classes that are used to define each provider. As simple as it should be, and it could easily be extended to add more functionality
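The pattern described here (a parent abstract class that every provider adapter implements, plus a factory to pick one) can be sketched roughly as follows. This is an illustrative sketch of the design, not aisuite's actual classes; all names (`Provider`, `EchoProvider`, `create_provider`) are hypothetical:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Parent class each LLM provider adapter implements."""

    @abstractmethod
    def chat_completions_create(self, model: str, messages: list[dict]) -> str:
        """Send messages to the provider and return the completion text."""


class EchoProvider(Provider):
    """Toy provider used here in place of a real SDK-backed adapter."""

    def chat_completions_create(self, model: str, messages: list[dict]) -> str:
        # A real adapter would call the provider's SDK and normalize the response.
        return f"[{model}] " + messages[-1]["content"]


def create_provider(name: str) -> Provider:
    # A real factory would dispatch on "openai", "mistral", etc.
    registry = {"echo": EchoProvider}
    return registry[name]()


provider = create_provider("echo")
print(provider.chat_completions_create("echo:demo", [{"role": "user", "content": "hi"}]))
# prints "[echo:demo] hi"
```

Adding a provider then amounts to subclassing the parent class and registering it, which is why extension stays simple.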

That said, it is currently missing some basic features, like streaming responses for all providers and function calling. There are also no async functions for completion (tbh most of those features would be easy to add in the current state of the codebase)

litellm is not lite at all

It does a few more things than just giving a unified API for completion over multiple LLM providers (not a lot though):

  • deployable proxy HTTP API (I don't need this)
  • a solution for caching (that could be interesting, but I don't need it; I'd rather do it myself if I need it)
  • support for function/tool calling when available
  • routing to the right LLM (I am not interested, and if I need it I will probably implement it myself with my own logic)

As someone who likes well-coded projects, I see a ton of red flags in their codebase 🚩

  1. Despite the many thousands of lines of code, there is no clear, coherent structure. LLM providers don't have a unified parent abstract class (this is really programming 101 tbh, not sure how they missed it), e.g. see the [fireworks AI implementation](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py) vs [groq](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/groq/chat/transformation.py) vs [`azure_ai.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/azure_ai/chat/transformation.py) vs [`AzureOpenAI.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/AzureOpenAI/chat/gpt_transformation.py) (note the complete mess even in the file nomenclature... that is full-on software gore). As a bonus, they redefine a [`LiteLLMBase` class](https://github.com/search?q=repo%3ABerriAI%2Flitellm%20LiteLLMBase&type=code) many times in different places, with many different use cases; it's really scary

  2. The list of mandatory dependencies for `litellm` is way too long and includes many dependencies that are not required when you just want a Python API to query an LLM provider for completion (which is the main feature advertised for their tool): https://github.com/BerriAI/litellm/blob/main/pyproject.toml. It seems like the maintainers don't really understand optional dependencies, and this scares me

    • `click` is not necessary unless you want to use litellm as a CLI; it should be optional
    • `jinja2` is not necessary (it's for templating strings). And if you search for why it is used, you get really confused: it's used in a 3000-line [`factory.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/prompt_templates/factory.py#L10) file in a `prompt_templates` folder that is itself inside a `llms` folder which otherwise mostly contains folders for LLM providers; apart from this one `prompt_templates` folder, not sure why it's there. And there is nothing about prompt templates on their docs website
    • They require two different dependencies for sending HTTP requests: `aiohttp` and `requests`. Just pick one and use it. Look, in the [same file](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/vertex_ai_and_google_ai_studio/vertex_ai_non_gemini.py#L11C8-L11C16) they import `requests` and `httpx` only to never use `requests` and use `httpx` exclusively... which is not even declared as a dependency of the package (I guess it's a dependency of one of the dependencies, but it should be explicitly added to the dependencies list if it is explicitly imported)
  3. Another red flag: there is a [symlink to `requirements.txt` in the main package folder](https://github.com/BerriAI/litellm/blob/main/litellm/requirements.txt). They probably have a reason, but it is most probably not a good one, and it should be dealt with differently than creating a symlink there

  4. Configuration for linting tools is a complete mess: they use `flake8` and `isort` in a project created in 2023... No Python dev does this anymore; `ruff` is the new standard. Funnily enough, there is also a `ruff.toml` file, but no trace of `ruff` actually being used. They should just use `ruff` for everything

  5. Linting is not even properly done, as you can see from the many unused imports all over the codebase (e.g. https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/types/llms/ollama.py#L4)

  6. `__init__.py` files are a mess: sometimes present, sometimes not
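The optional-dependency handling criticized in point 2 above (and praised in aisuite earlier) is usually implemented as a lazy import with an actionable error message, so heavy provider SDKs are only required when actually used. A generic sketch of that pattern, where the package name `mypkg` and the helper `load_provider_sdk` are hypothetical:

```python
import importlib


def load_provider_sdk(module_name: str, extra: str):
    """Import an optional provider SDK, failing with an actionable message.

    `module_name` is the pip-installed module; `extra` is the optional-dependency
    group a user would install (e.g. `pip install mypkg[mistral]`).
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"The '{extra}' provider requires the optional dependency "
            f"'{module_name}'. Install it with: pip install mypkg[{extra}]"
        ) from exc


# A stdlib module stands in for a provider SDK so the sketch runs anywhere.
json_sdk = load_provider_sdk("json", extra="json")
print(json_sdk.dumps({"ok": True}))  # prints {"ok": true}
```

With this approach, the core package declares no mandatory SDKs in pyproject.toml, and each provider's dependency lives in its own extras group.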

There is way too much poorly structured code for what is just a simple wrapper around one function (completion) implemented for various provider APIs

That's not how good, maintainable packages are built. Don't trust the number of stars during the AI trend; they don't always reflect code quality ("oh, the features they are proposing in their readme look cool, I'll star it"). A lot of people are trying to quickly capitalize on the trend without having the competence.

What are the other options?

Probably the llm lib from simonw is the closest lib doing a similar job while keeping it simple. It was built with a CLI in mind but also provides a Python API: https://llm.datasette.io/en/stable/python-api.html. It supports streaming and has a plugin system for adding new providers. I wish the click dependency were behind an optional CLI extra though

LlamaIndex also lets you do completions for various providers (and they provide a lot more for RAG). But it also comes with a ton of dependencies, and I'm not sure how good the codebase is.

LangChain seems to be falling out of fashion due to pushing their LCEL language, which is not Pythonic at all, confusing, and really limited in capability; now they are also pushing LangGraph, which is very confusing. Their Azure connector is not even up to date anymore and cannot be used for the latest Azure pay-as-you-go deployments.

imo Python devs just want to write Python code with functions, if conditions and loops. No need for a complete closed framework; we just need wrappers for LLM APIs and vectorstore providers, that's it. The rest we can do ourselves.

Also, all those companies (LlamaIndex, LangChain, litellm) are trying to profit from the AI trend by providing basic wrappers over simple APIs with some extra sugar in the AWS cloud. They don't provide much value, to be honest, and are prone to bringing in breaking changes and walled gardens if that's good for their business. So I would rather trust a community-led effort for such a simple wrapper system. You don't need a whole company to maintain this kind of tool; devs will send PRs by themselves. All you need is a good structure and a good test suite to quickly and safely integrate contributions.

Conclusion

Good developers value simplicity over thousands of lines of code and dependency hell.

After reviewing the code of litellm, I know for sure I would never touch this library with a ten-meter-long keyboard. It just screams "poorly coded package that will be a pain to maintain".

Imo the main question with aisuite is: will it be properly maintained over time? PRs seem to have been accumulating a bit since last week. But there are a lot of PRs that look interesting and already address some of the missing features

But even if it is not maintained, the API is so simple that you can easily fork it and maintain just the providers you care about, so you have a unified API across all your LLM projects that you control

@TashaSkyUp
Author

Thank you for the thoughtful analysis.

I agree litellm is no longer lite, but I imagine it was at some point. Also, I've already seen in this repository that there are plans to start expanding the codebase to cover more features. I imagine this is exactly how litellm started out.

"oh we will keep it lite, clean and simple!"

Just like this repo is now.

Your other comments around industry standards / what people should do are your opinion, or possibly rules you dictate to your subordinates.

This, my friend, is the wilds of GitHub: people of all nations, socio-economic statuses, education levels, (dis)abilities, and a thousand other qualifiers contribute here. If the repo owners want to be inclusive of the fact that not everyone is a CS major with 10 years in industry (lucky bastards), then great; if not, maybe they should host it on their own git.

@vilmar-hillow

> (quotes @vemonet's analysis above in full)

A lot of these litellm points ring true (especially since I had to contribute a small fix there and dive into their codebase). It feels like they are moving fast to add new features and forgoing code quality and maintainability.

But it would be good to hear from the authors what the intent and direction of this repo is, as it would give some insight into where to contribute.

@TashaSkyUp
Author

@andrewyng Loved your classes! And I would love to hear your thoughts, or your team's thoughts, on this thread! What do you think? How will aisuite differentiate itself from the others?

krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 9, 2024
ishaan-jaff added a commit to BerriAI/litellm that referenced this issue Dec 9, 2024
* feat(base_llm): initial commit for common base config class

Addresses code qa critique andrewyng/aisuite#113 (comment)

* feat(base_llm/): add transform request/response abstract methods to base config class

---------

Co-authored-by: Krrish Dholakia <[email protected]>
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 9, 2024
* feat(base_llm): initial commit for common base config class

Addresses code qa critique andrewyng/aisuite#113 (comment)

* feat(base_llm/): add transform request/response abstract methods to base config class

* feat(cohere-+-clarifai): refactor integrations to use common base config class

* fix: fix linting errors

* refactor(anthropic/): move anthropic + vertex anthropic to use base config

* test: fix xai test

* test: fix tests

* fix: fix linting errors

* test: comment out WIP test

* fix(transformation.py): fix is pdf used check

* fix: fix linting error
ishaan-jaff added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
…ere (#7117)

* fix use new format for Cohere config

* fix base llm http handler

* Litellm code qa common config (#7116)

* feat(base_llm): initial commit for common base config class

Addresses code qa critique andrewyng/aisuite#113 (comment)

* feat(base_llm/): add transform request/response abstract methods to base config class

---------

Co-authored-by: Krrish Dholakia <[email protected]>

* use base transform helpers

* use base_llm_http_handler for cohere

* working cohere using base llm handler

* add async cohere chat completion support on base handler

* fix completion code

* working sync cohere stream

* add async support cohere_chat

* fix types get_model_response_iterator

* async / sync tests cohere

* feat  cohere using base llm class

* fix linting errors

* fix _abc error

* add cohere params to transformation

* remove old cohere file

* fix type error

* fix merge conflicts

* fix cohere merge conflicts

* fix linting error

* fix litellm.llms.custom_httpx.http_handler.HTTPHandler.post

* fix passing cohere specific params

---------

Co-authored-by: Krrish Dholakia <[email protected]>
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
…y, mistral, codestral, nvidia nim, cerebras, volcengine, text completion codestral, sambanova, maritalk to base llm config

Addresses feedback from andrewyng/aisuite#113 (comment)
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 10, 2024
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 11, 2024
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 11, 2024
commit 7fb0ca9 (Krrish Dholakia): fix(hf/transformation.py): fix env var pickup
commit 72efdfb (Krrish Dholakia): test: mark flaky test
commit 8517bec (Krrish Dholakia): test: skip local test
commit 9105aa3 (Krrish Dholakia): test: cleanup test
commit 267e1d7 (Krrish Dholakia): ci: test faster ci/cd
commit 4f08775 (Krrish Dholakia): ci(config.yml): speed up ci/cd
commit c6a582a (Krrish Dholakia): test: fix palm tests
commit d9adb85 (Krrish Dholakia): test: fix hf test
commit 6489e14 (Krrish Dholakia): fix: test
commit 39c49b8 (Krrish Dholakia): fix(huggingface/handler.py): handle rate limit errors correctly
commit ebda638 (Krrish Dholakia): fix(openai.py): fix transform messages check
commit 34c6dd8 (Krrish Dholakia): test: fix test
commit 5099554 (Krrish Dholakia): fix(utils.py): add anthropic text mapping
commit c296071 (Krrish Dholakia): fix: map cloudflare config
commit 8f86154 (Krrish Dholakia): fix: fix linting errors
commit 7bca018 (Ishaan Jaff): (Refactor) Code Quality improvement - Use Common base handler for `anthropic_text/` (#7143)
commit 60978f3 (Ishaan Jaff): (Refactor) Code Quality improvement - Use Common base handler for Cohere /generate API (#7122)
commit 4432055 (Ishaan Jaff): (Refactor) Code Quality improvement - Use Common base handler for `cloudflare/` provider (#7127)
commit a2c129c (Ishaan Jaff): (Refactor) Code Quality improvement - Use Common base handler for `clarifai/` (#7125)
commit e4f83cd (Ishaan Jaff): (Refactor) Code Quality improvement - use Common base handler for Cohere (#7117)
commit 48b134a (Krish Dholakia): Litellm code qa common config (#7113)

    * refactor(anthropic/): move anthropic + vertex anthropic to use base config

    * test: fix xai test

    * test: fix tests

    * fix: fix linting errors

    * test: comment out WIP test

    * fix(transformation.py): fix is pdf used check

    * fix: fix linting error

commit 98902d6
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 13:11:47 2024 -0800

    test: cleanup tests

commit 681d69b
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 12:13:55 2024 -0800

    fix(ai21/transformation.py): add tool choice to ai21 optional params

commit 62f5597
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 12:02:38 2024 -0800

    fix: fix linting errors

commit 484cb2b
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:49:58 2024 -0800

    fix: fix linting errors

commit 49e0c0b
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:37:32 2024 -0800

    perf: huggingface async streaming perf improvement - reuse client

commit 1a2f516
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:10:21 2024 -0800

    fix: fix linting errors

commit a7bd5d6
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:09:26 2024 -0800

    fix: fix linting error

commit b09b85f
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:07:24 2024 -0800

    fix: fix linting errors

commit e4e1a35
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:04:02 2024 -0800

    fix: fix linting error

commit 46d109d
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 11:00:06 2024 -0800

    fix: fix linting errors

commit 9aa1f26
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 10:42:38 2024 -0800

    fix: fix linting error

commit 12fba38
Author: Krrish Dholakia <[email protected]>
Date:   Tue Dec 10 00:07:50 2024 -0800

    test: comment out WIP test

commit b37d2dd
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 22:20:43 2024 -0800

    refactor: move nlp cloud, oobabooga, ollamachat, deepinfra, perplexity, mistral, codestral, nvidia nim, cerebras, volcengine, text completion codestral, sambanova, maritalk to base llm config

    Addresses feedback from andrewyng/aisuite#113 (comment)

commit b621e64
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 20:54:21 2024 -0800

    refactor(azure/): refactor azure openai to use base llm config

commit bd07837
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 20:22:50 2024 -0800

    refactor(refactor/): gemini + vertex ai gemini

    refactor to support base llm config

commit 3ac1909
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 18:41:33 2024 -0800

    refactor(huggingface/): refactor huggingface to use base llm config class

commit 386d7a9
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 17:21:18 2024 -0800

    refactor(replicate/): refactor replicate to use base config

commit 097bc43
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 13:53:08 2024 -0800

    fix: fix linting errors

commit ccd2d8d
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 13:49:29 2024 -0800

    test: fix tests

commit b1d1b34
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 13:47:38 2024 -0800

    test: fix xai test

commit d1c5462
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 12:52:00 2024 -0800

    refactor(anthropic/): move anthropic + vertex anthropic to use base config

commit 0333c82
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 12:00:44 2024 -0800

    fix: fix linting errors

commit 2bbeea6
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 11:48:23 2024 -0800

    feat(cohere-+-clarifai): refactor integrations to use common base config class

commit de9fc38
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 10:11:44 2024 -0800

    feat(base_llm/): add transform request/response abstract methods to base config class

commit 284377e
Author: Krrish Dholakia <[email protected]>
Date:   Mon Dec 9 09:11:12 2024 -0800

    feat(base_llm): initial commit for common base config class

    Addresses code qa critique andrewyng/aisuite#113 (comment)
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 11, 2024
#7151)

* refactor(sagemaker/): separate chat + completion routes + make them both use base llm config

Addresses andrewyng/aisuite#113 (comment)

* fix(main.py): pass hf model name + custom prompt dict to litellm params
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 11, 2024
…#7157)

* refactor(ollama/): refactor ollama `/api/generate` to use base llm config

Addresses andrewyng/aisuite#113 (comment)

* test: skip unresponsive test

* test(test_secret_manager.py): mark flaky test

* test: fix google sm test
krrishdholakia added a commit to BerriAI/litellm that referenced this issue Dec 11, 2024
* refactor(ollama/): refactor ollama `/api/generate` to use base llm config

Addresses andrewyng/aisuite#113 (comment)

* test: skip unresponsive test

* test(test_secret_manager.py): mark flaky test

* test: fix google sm test

* fix: fix init.py
@krrishdholakia

Hi @vemonet , thank you for the feedback on litellm. Here’s what we’ve done / are doing about this. Is this what you wanted?

1. ‘LLM providers don't have a unified parent abstract class’

All chat providers (except Bedrock) now inherit from a parent abstract class.

For reference, the refactor commits are linked in the timeline above.

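For illustration, a common base config along these lines might look like the following minimal sketch (hypothetical class and method names, not litellm's actual API):

```python
from abc import ABC, abstractmethod
from typing import Optional


class BaseLLMConfig(ABC):
    """Hypothetical parent class that every chat provider config inherits from."""

    @abstractmethod
    def validate_environment(self, api_key: Optional[str]) -> dict:
        """Return auth headers, raising if required credentials are missing."""

    @abstractmethod
    def transform_request(self, model: str, messages: list) -> dict:
        """Translate OpenAI-style messages into the provider's request body."""

    @abstractmethod
    def transform_response(self, raw: dict) -> dict:
        """Translate the provider's raw response back into the unified format."""


class EchoConfig(BaseLLMConfig):
    """Toy provider used here only to show the pattern."""

    def validate_environment(self, api_key):
        return {"Authorization": f"Bearer {api_key or 'test'}"}

    def transform_request(self, model, messages):
        # Take the last user message as the prompt for this toy provider.
        return {"model": model, "prompt": messages[-1]["content"]}

    def transform_response(self, raw):
        return {"choices": [{"message": {"role": "assistant", "content": raw["text"]}}]}
```

With this shape, the generic HTTP handler only ever talks to the abstract interface, and each provider supplies its own translation logic.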
2. 'no clear coherent structure for LLM providers'

We refactored llms/ to make this simpler - https://github.com/BerriAI/litellm/tree/main/litellm/llms

Standard naming convention: All folders in llms/ are now named after their litellm provider name (this is enforced by a test).

Common enforced structure: https://github.com/BerriAI/litellm/tree/30e147a315d29ba3efe61a179e80409a77754a42/litellm/llms/watsonx

  • each mapped endpoint lives in a separate folder: chat/ for endpoints that accept chat-completion messages (e.g. watsonx's /text/chat), and completion/ for endpoints that require translation into a single prompt string (e.g. watsonx's /text/generation)

  • each endpoint folder has a separate handler.py (HTTP calling) and transformation.py (core LLM translation logic)
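The handler/transformation split described above can be sketched roughly as follows (hypothetical names, not litellm's actual code; the point is that translation logic stays free of I/O):

```python
class CohereLikeTransformation:
    """transformation.py: pure request/response translation, no I/O."""

    def transform_request(self, model, messages):
        return {"model": model, "message": messages[-1]["content"]}

    def transform_response(self, raw):
        return {"content": raw.get("text", "")}


def chat_completion(transformation, model, messages, post):
    """handler.py: HTTP calling only. `post` is an injected callable
    (e.g. an httpx client method), so the handler stays provider-agnostic
    and trivially testable with a fake."""
    request_body = transformation.transform_request(model, messages)
    raw = post(request_body)  # the only place a network call would happen
    return transformation.transform_response(raw)
```

Because the handler never inspects provider-specific fields, adding a provider means writing only a new transformation class.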

3. ‘As a bonus they are redefining a LiteLLMBase class many times at different place’

Removed the redefinition: it is now defined in just one place.

Clarity on usage: Renamed it to LiteLLMPydanticObjectBase to make it clear that this is the base pydantic object for the repo

4. ‘Another red flag: there is a symlinks to requirements.txt in the main package folder’

Removed the symlinks to requirements.txt

5. ‘Configuration for linting tools is a complete mess’

LiteLLM follows the Google Python Style Guide.

We run:

  1. Ruff for formatting and linting checks
  2. Mypy + Pyright for typing
  3. Black for formatting
  4. isort for import sorting

If there's any way we can improve further here, let me know.
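For reference, a representative pyproject.toml wiring these tools together might look like the sketch below (illustrative settings only; the repo's actual configuration may differ):

```toml
[tool.ruff]
line-length = 88

[tool.ruff.lint]
select = ["E", "F", "I"]   # pycodestyle, pyflakes, import-sorting rules

[tool.black]
line-length = 88

[tool.isort]
profile = "black"          # keeps isort's output compatible with black

[tool.mypy]
ignore_missing_imports = true
```

Keeping line-length and the isort profile aligned with black is what prevents the formatters from fighting each other in CI.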

[PLANNED FOR NEXT WEEK]

1. ‘The list of mandatory dependencies for litellm is way too long’

  1. Single HTTP Library: post-refactor, we can now remove 'aiohttp', 'requests' and just use httpx for our core calling logic (same as openai sdk). This should be done by next week.
  2. Removing proxy-deps: 'click' can also be moved into the separate litellm[proxy] set of dependencies
  3. Clarity on ‘jinja2’: This is required for prompt templating (as you pointed out). This is used for several llm providers (e.g. huggingface) which expose endpoints that only accept a prompt field. We don’t plan on removing this today, as it’s used in several places. Any suggestions for reducing our need for this / being able to remove the requirement are welcome.

2. Migrate Bedrock to Base Config

This would move all Chat LLM providers to the base config.
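On the jinja2 point above: the dependency exists because chat messages must be rendered down to a single prompt string for prompt-only endpoints. A minimal sketch of that rendering (illustrative template and function names, not litellm's actual chat templates):

```python
from jinja2 import Template

# Illustrative chat template, not any specific model's real format:
# renders OpenAI-style messages into the single prompt string that
# prompt-only endpoints expect.
CHAT_TEMPLATE = Template(
    "{% for m in messages %}<|{{ m.role }}|>{{ m.content }}\n{% endfor %}<|assistant|>"
)


def messages_to_prompt(messages):
    """Flatten a list of chat messages into one prompt string."""
    return CHAT_TEMPLATE.render(messages=messages)
```

Replacing jinja2 with plain string formatting is possible for simple templates like this one, but real model chat templates (e.g. those shipped by huggingface) are themselves written in jinja, which is why dropping the dependency is non-trivial.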

Contributions to improve LiteLLM’s code quality/linting/etc. are welcome!


-litellm maintainer

@rohitprasad15
Collaborator

rohitprasad15 commented Dec 14, 2024

@TashaSkyUp -
Hi, one of our aims in starting development of aisuite was to provide a simple way to use multiple providers.
There are a few planned features, and we are still deciding on a roadmap based on feedback and traction. If you have a feature request, please open an issue for it.

Please wait for the next set of features to be announced/added. The differentiators will become evident over the next few releases. Thanks for using aisuite and for the feedback. We are committed to maintaining and enhancing this library long term.

@adv-11

adv-11 commented Dec 14, 2024

Is it easy to migrate from LiteLLM to aisuite, or vice versa?
