🎛️ Configuring a Training Job

This guide explains every field on the Train Model panel to help you understand and configure your model training setup.

1. Details

| Field | Purpose | Example |
|---|---|---|
| Name | Identifies the run in history | Hand_Detector_v1_10Ep |
| Description | (optional) Notes for future you | Testing larger image size |
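A descriptive name makes runs easy to find later. As an illustration only (this helper is not part of the product), a run name in the style of the example above could be assembled like this:

```python
# Hypothetical helper: builds a run name in the "<Project>_v<N>_<Epochs>Ep"
# style shown in the example column above.
def run_name(project: str, version: int, epochs: int) -> str:
    return f"{project}_v{version}_{epochs}Ep"

print(run_name("Hand_Detector", 1, 10))  # Hand_Detector_v1_10Ep
```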

2. Base Model

You can currently choose between two base models depending on the project type:

  1. YOLOv11 Object Detection – used with bounding box annotations
  2. YOLOv11 Segmentation – used with polygon and SAM-2-generated annotations

Based on the current project’s type (object detection or segmentation), only the matching base model is shown.

Both models are from the latest Ultralytics generation, fusing an upgraded backbone and neck for higher accuracy with fewer parameters [Ultralytics Docs].
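For context, the project-type-to-model mapping could be sketched as below. The function and dictionary names are hypothetical; the weight-file names follow Ultralytics' published YOLO11 convention (`yolo11<variant>.pt` for detection, `yolo11<variant>-seg.pt` for segmentation):

```python
# Sketch (assumed names): map a project type to an Ultralytics YOLO11
# base-model weight file for a given variant letter.
BASE_MODELS = {
    "object_detection": "yolo11{v}.pt",
    "segmentation": "yolo11{v}-seg.pt",
}

def base_model(project_type: str, variant: str = "m") -> str:
    return BASE_MODELS[project_type].format(v=variant)

print(base_model("segmentation"))         # yolo11m-seg.pt
print(base_model("object_detection", "n"))  # yolo11n.pt
```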

3. Model Variant

Choosing a variant is a trade-off between speed and accuracy.

| Variant | Params (M) | COCO mAP50-95² | Notes |
|---|---|---|---|
| n | 2.6 | 39.5 | Ultra-light, mobile & IoT |
| s | 9.4 | 47.0 | Good for edge GPUs |
| m | 20.1 | 51.5 | Balanced; default |
| l | 26.2 | 54.0 | Higher accuracy, more VRAM |
| x | 58.8 | 56.8 | Maximum accuracy, slowest |
m is an excellent starting point for most users.

4. Customization

| Option | When to use |
|---|---|
| From Scratch | Fresh dataset or new architecture. |
| Upload Weights (coming soon) | Resume / fine-tune a previous model. |

5. Hyper-parameters

| Parameter | UI Control | Range | What it does |
|---|---|---|---|
| Epochs | Slider | 1-100 | Number of full passes through the training set. |
| Image Size | Radio buttons (320 / 640) | 320 or 640 px | Resolution the model is trained at. Higher helps small objects but needs more VRAM. |

Other YOLO hyper-parameters (batch size, learning rate, momentum, etc.) are managed automatically by the trainer and use Ultralytics’ recommended defaults.
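The panel's constraints can be expressed as a small validated config. This is a sketch with assumed names, though `epochs` and `imgsz` are the actual Ultralytics training-argument names:

```python
# Sketch: validate the two user-facing hyper-parameters the way the UI does,
# leaving everything else to the trainer's defaults.
def training_config(epochs: int, image_size: int) -> dict:
    if not 1 <= epochs <= 100:
        raise ValueError("epochs must be between 1 and 100")
    if image_size not in (320, 640):
        raise ValueError("image size must be 320 or 640")
    # batch size, learning rate, momentum, etc. are intentionally omitted:
    # the trainer applies Ultralytics' recommended defaults for them.
    return {"epochs": epochs, "imgsz": image_size}

print(training_config(50, 640))  # {'epochs': 50, 'imgsz': 640}
```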

6. Launch the Job

Hit Create Model. Your run is queued and starts on our GPU fleet. You will be redirected to the Models tab, where performance metrics and graphs become available once the training run completes.


Prefer code? Head to Notebooks for SDK-based examples.