Introduction
As mentioned in the previous blog post, YOLOv8 performs exceptionally well in Document Layout Analysis. I trained all models from the YOLOv8 series on the DocLayNet dataset and found that even the smallest model achieves an overall mAP50-95 of 71.8, while the largest reaches an impressive 78.7.
Recently, Ultralytics released YOLOv11, the latest iteration in their YOLO series of real-time object detectors. This new version brings significant improvements to both architecture and training methods.
🚀 The reported results look promising! So I decided to repeat the experiment: train all YOLOv11 models on the DocLayNet dataset and compare them with the previous YOLOv8 series.
Training Method
For this experiment, I continued to use my repository https://github.com/ppaanngggg/yolo-doclaynet to prepare the data and train the models with my custom scripts. Keeping the data preparation and training pipeline identical allows a fair comparison between the YOLOv8 and YOLOv11 models.
The training and evaluation process for YOLOv11 models is straightforward and can be executed with simple command-line instructions:
```bash
# To train the model
python train.py {base-model}

# To evaluate the model
python eval.py {path-to-your-trained-model}
```
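If you prefer to call the Ultralytics API directly rather than my wrapper scripts, the same train/eval loop looks roughly like the sketch below. This is a minimal illustration, not my exact script: `doclaynet.yaml` is a placeholder for a data config pointing at the prepared DocLayNet images and labels, and the hyperparameters are illustrative.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv11 checkpoint, e.g. the nano variant.
model = YOLO("yolo11n.pt")

# Fine-tune on DocLayNet. "doclaynet.yaml" is a hypothetical data config;
# epochs and image size are illustrative, not my exact settings.
model.train(data="doclaynet.yaml", epochs=100, imgsz=1024)

# Evaluate the trained weights on the validation split.
metrics = model.val()
print(metrics.box.map)  # overall mAP50-95
```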
Comparing the Results
Here is the detailed evaluation table comparing the YOLOv8 and YOLOv11 models. Each score is the per-class mAP50-95, and the Boxes column gives the number of ground-truth instances in the evaluation set:
Label | Boxes | yolov8n | yolov11n | yolov8s | yolov11s | yolov8m | yolov11m | yolov8l | yolov11l | yolov8x | yolov11x |
---|---|---|---|---|---|---|---|---|---|---|---|
Params (M) | | 3.2 | 2.6 | 11.2 | 9.4 | 25.9 | 20.1 | 43.7 | 25.3 | 68.2 | 56.9 |
Caption | 1542 | 0.682 | 0.717 | 0.721 | 0.744 | 0.746 | 0.746 | 0.75 | 0.772 | 0.753 | 0.765 |
Footnote | 387 | 0.614 | 0.634 | 0.669 | 0.683 | 0.696 | 0.701 | 0.702 | 0.715 | 0.717 | 0.71 |
Formula | 1966 | 0.655 | 0.673 | 0.695 | 0.705 | 0.723 | 0.729 | 0.75 | 0.75 | 0.747 | 0.765 |
List-item | 10521 | 0.789 | 0.81 | 0.818 | 0.836 | 0.836 | 0.843 | 0.841 | 0.847 | 0.841 | 0.845 |
Page-footer | 3987 | 0.588 | 0.591 | 0.61 | 0.621 | 0.64 | 0.653 | 0.641 | 0.678 | 0.655 | 0.684 |
Page-header | 3365 | 0.707 | 0.704 | 0.754 | 0.76 | 0.769 | 0.778 | 0.776 | 0.788 | 0.784 | 0.795 |
Picture | 3497 | 0.723 | 0.758 | 0.762 | 0.783 | 0.789 | 0.8 | 0.796 | 0.805 | 0.805 | 0.802 |
Section-header | 8544 | 0.709 | 0.713 | 0.727 | 0.745 | 0.742 | 0.753 | 0.75 | 0.75 | 0.748 | 0.751 |
Table | 2394 | 0.82 | 0.846 | 0.854 | 0.874 | 0.88 | 0.88 | 0.885 | 0.891 | 0.886 | 0.89 |
Text | 29917 | 0.845 | 0.851 | 0.86 | 0.869 | 0.876 | 0.878 | 0.878 | 0.88 | 0.877 | 0.883 |
Title | 334 | 0.762 | 0.793 | 0.806 | 0.817 | 0.83 | 0.832 | 0.846 | 0.844 | 0.84 | 0.848 |
All | 66454 | 0.718 | 0.735 | 0.752 | 0.767 | 0.775 | 0.781 | 0.783 | 0.793 | 0.787 | 0.794 |
I've also created a plot to illustrate the relationship between model size and overall mAP50-95 for the two series:
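If you want to reproduce the plot yourself, here is a short matplotlib sketch using the parameter counts and overall scores from the table above:

```python
import matplotlib.pyplot as plt

# Parameter counts (M) and overall mAP50-95, taken from the table above.
params_v8  = [3.2, 11.2, 25.9, 43.7, 68.2]
map_v8     = [0.718, 0.752, 0.775, 0.783, 0.787]
params_v11 = [2.6, 9.4, 20.1, 25.3, 56.9]
map_v11    = [0.735, 0.767, 0.781, 0.793, 0.794]

plt.plot(params_v8, map_v8, "o-", label="YOLOv8")
plt.plot(params_v11, map_v11, "s-", label="YOLOv11")
plt.xlabel("Parameters (M)")
plt.ylabel("Overall mAP50-95")
plt.legend()
plt.show()
```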
Conclusion
Based on the table and plot above, we can conclude that YOLOv11 models consistently outperform their YOLOv8 counterparts across all sizes. The improvements are particularly noticeable in the smaller models, with YOLOv11n achieving a 1.7-point increase in mAP50-95 over YOLOv8n (73.5 vs. 71.8). Furthermore, YOLOv11 models generally have fewer parameters than their YOLOv8 equivalents, indicating improved efficiency in addition to better performance.
My favorite model is YOLOv11l. At 25.3M parameters it's about the same size as YOLOv8m (25.9M), yet it outperforms even YOLOv8x!
However, YOLOv11x shows only a slight improvement over YOLOv11l (79.4 vs. 79.3) despite having more than twice the parameters.
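If you want to put the winning model to work, inference follows the standard Ultralytics API. A minimal sketch below, where the weights path and image filename are placeholders for your own training output and document page:

```python
from ultralytics import YOLO

# Load trained DocLayNet weights; this path is a placeholder for wherever
# your training run saved its best checkpoint.
model = YOLO("runs/detect/train/weights/best.pt")

# Run layout detection on a page image (placeholder filename).
results = model("page.png")

# Print each detected layout region with its class, confidence, and box.
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{label}: conf={float(box.conf):.2f}, "
          f"bbox=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```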
More
What are your thoughts on the YOLOv11 results? Have you had experience using YOLO models for document layout analysis? I'd love to hear your insights and experiences in the comments below!
References
- YOLOv11 documentation: https://docs.ultralytics.com/models/yolo11/
- DocLayNet GitHub repository: https://github.com/DS4SD/DocLayNet
- My YOLO-DocLayNet GitHub project: https://github.com/ppaanngggg/yolo-doclaynet