# Welcome to XTuner’s documentation!

All-in-One toolbox for LLM

- Get Started
  - Overview
    - What is XTuner
  - Installation
    - Installation Process
    - Best Practices
    - Verify the installation
  - Quickstart
    - Prepare the model weights
    - Prepare the fine-tuning dataset
    - Prepare the config
    - Modify the config
    - Start fine-tuning
    - Model Convert + LoRA Merge
      - Model Convert
      - LoRA Merge
    - Chat with the model
- Preparation
  - Pretrained Model
  - Prompt Template
- Training
  - Modify Settings
  - Custom SFT Dataset
  - Custom Pretrain Dataset
  - Custom Agent Dataset
  - Multi-modal Dataset
  - Open Source Datasets
  - Visualization
- Acceleration
  - DeepSpeed
  - Pack to Max Length
  - Flash Attention
  - Varlen Flash Attention
  - HyperParameters
  - Length Grouped Sampler
  - Train Large-scale Dataset
  - Train Extreme Long Sequence
  - Benchmark
- Chat
  - Chat with LLM
  - Chat with Agent
  - Chat with VLM
  - Accelerate chat by LMDeploy
- Evaluation
  - Evaluation during training
  - MMLU (LLM)
  - MMBench (VLM)
  - Evaluate with OpenCompass
- Models
  - Supported Models
- InternEvo Migration
  - ftdp
  - Case 1
  - Case 2
  - Case 3
  - Case 4