Compresses the prompt and KV cache to speed up LLM inference and improve LLMs' perception of key information, achieving up to 20x compression with minimal performance loss.

| Package | Version | License | Index | Platforms | Repository |
| --- | --- | --- | --- | --- | --- |
| llmlingua | 0.2.1 | MIT | PyPI | amd64, x86 | https://github.com/microsoft/LLMLingua |
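A minimal usage sketch based on the project's documented quick-start for the `llmlingua` package. Note that the default `PromptCompressor` loads a small causal language model for perplexity-based token pruning (a sizable download on first use; a GPU helps but is not required), and the context, question, and `target_token` budget below are illustrative placeholders, not values from this listing.

```python
# pip install llmlingua
from llmlingua import PromptCompressor

# Instantiate with the default compression model; pass a model name
# (e.g. a smaller or quantized model) to reduce resource requirements.
compressor = PromptCompressor()

# Stand-in for a long retrieved document or chat history.
long_context = "LLMLingua compresses prompts by dropping low-information tokens. " * 200

result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question based on the context.",  # illustrative
    question="How does LLMLingua compress prompts?",          # illustrative
    target_token=200,  # illustrative token budget; tune per task
)

# Per the project README, the result is a dict including the compressed
# text and token counts before/after compression.
print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])
```

The compressed string can then be sent to any downstream LLM in place of the original prompt, trading a small amount of fidelity for a much smaller token count.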