XTuner is an efficient, flexible and full-featured toolkit for fine-tuning
large models (InternLM, Llama, Baichuan, Qwen, ChatGLM), released under the Apache 2.0 license. The advantage
of this framework is that it is not tied to a specific LLM architecture, but supports multiple ones out of the box.
With the just-released version 0.2.0
of our llm-dataset-converter Python library,
you can now read and write the XTuner JSON format (and apply the usual filtering, of course).
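For reference, XTuner's instruction-tuning data is plain JSON built around a "conversation" list. Here is a minimal sketch of writing a single-turn record using nothing but the Python standard library; the record contents are made up, and the field names follow XTuner's documented single-turn layout:

```python
import json

# A single-turn record in XTuner's conversation layout (illustrative content only;
# field names follow XTuner's documented format, the text itself is made up).
records = [
    {
        "conversation": [
            {
                "system": "You are a helpful assistant.",
                "input": "What license is XTuner released under?",
                "output": "XTuner is released under the Apache 2.0 license."
            }
        ]
    }
]

# Write the records as a JSON file that XTuner-style tooling can pick up.
with open("xtuner-data.json", "w") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```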
Here are the newly added image tags:

- In-house registry:
- Docker hub:
Of course, you can use these Docker images in conjunction with our gifr
Python library for gradio interfaces as well (gifr-textgen). We just released
version 0.0.4 of the library, which is more flexible with regard to text generation: it can now send and receive
the conversation history and also parse JSON responses.
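To illustrate the idea, parsing a JSON response that carries the running conversation history alongside the latest answer is straightforward; the payload and field names below are purely hypothetical and do not reflect gifr's actual wire format:

```python
import json

# Purely illustrative payload: the field names are made up and are not gifr's
# actual response format.
raw = '{"text": "Hi there!", "history": [["Hello", "Hi there!"]]}'

reply = json.loads(raw)
answer = reply["text"]      # the newly generated answer
history = reply["history"]  # list of [prompt, response] pairs exchanged so far

print(answer)
print(history)
```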