
StarSample V2.0
This is a model for restoring My Little Pony: Friendship is Magic, though it also works reasonably well on similar art.
V2.0 greatly improves upon V1.0's dataset in every way, taking the models from realistically being viable only at 1x to being far more competent at 2x, especially for the models in this release trained on heavier architectures.
The improvements come from a significantly better understanding of compression and from better handling of details and overall content (partly architectural, partly from the dataset), leading to less artifacting and "AI smudging". The dataset draws from a larger variety of sources, yet is smaller than V1.0's (which would be 71,876 pairs when tiled), because it is filtered by IQA score and detail density. It also contains many thousands of image pairs created manually to cover areas where there wasn't sufficient data.
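For illustration, a filtering pass like the one described above might look something like the sketch below. `iqa_score` and `detail_density` are hypothetical placeholders; the actual metrics and thresholds used for V2.0 are not specified here.

```python
# Hypothetical sketch of pair tiling + quality filtering, not the actual V2.0 pipeline.
import numpy as np
from PIL import Image

TILE_HR = 192          # matches the training HR size listed below
SCALE = 2              # 2x model: LR tiles are half the HR tile size

def iqa_score(tile: np.ndarray) -> float:
    """Placeholder for a no-reference IQA metric (assumption)."""
    return float(tile.std())  # stand-in: contrast as a crude proxy

def detail_density(tile: np.ndarray) -> float:
    """Placeholder for a detail/edge-density measure (assumption)."""
    gy, gx = np.gradient(tile.mean(axis=2))
    return float(np.hypot(gx, gy).mean())

def filtered_tiles(hr_path: str, lr_path: str, iqa_min=10.0, detail_min=1.0):
    hr = np.asarray(Image.open(hr_path).convert("RGB"), dtype=np.float32)
    lr = np.asarray(Image.open(lr_path).convert("RGB"), dtype=np.float32)
    for y in range(0, hr.shape[0] - TILE_HR + 1, TILE_HR):
        for x in range(0, hr.shape[1] - TILE_HR + 1, TILE_HR):
            hr_tile = hr[y:y + TILE_HR, x:x + TILE_HR]
            lr_tile = lr[y // SCALE:(y + TILE_HR) // SCALE,
                         x // SCALE:(x + TILE_HR) // SCALE]
            # Discard flat or low-quality tiles so they never enter the dataset.
            if iqa_score(hr_tile) >= iqa_min and detail_density(hr_tile) >= detail_min:
                yield hr_tile, lr_tile
```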
This release also includes "NS", or "No Scale", models, which better represent my initial goal with StarSample: StarSample V2.0 NS should provide strong 1x restoration results with little apparent artifacting, even where the heavier 2x models can fail because they have to increase resolution.
- 2x StarSample V2.0 HQ — (HAT-L)
- 2x StarSample V2.0 — (ESRGAN) — THIS MODEL
- 2x StarSample V2.0 Lite — (SPAN-S)
- 1x StarSample V2.0 NS — (ESRGAN)
- 1x StarSample V2.0 Lite NS — (SPAN-S)
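The 2x ESRGAN release (this model) runs in chaiNNer or any ESRGAN-compatible runner; a minimal Python sketch using spandrel is shown below. The file names, device handling, and pre/post-processing are assumptions for illustration, not part of the release.

```python
# Minimal inference sketch using spandrel; file names and device handling are assumptions.
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

model = ModelLoader().load_from_file("2x_StarSample_V2.0.pth")  # hypothetical filename
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
if device == "cuda":
    model.cuda()

img = Image.open("input.png").convert("RGB")
x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)  # HWC in [0, 1]
x = x.permute(2, 0, 1).unsqueeze(0).to(device)                   # BCHW

with torch.no_grad():
    y = model(x)                                                 # 2x upscaled BCHW output

out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("output.png")
```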
| Architecture | ESRGAN |
|---|---|
| Scale | 2x |
| Size | 64nf23nb |
| Color Mode | |
| License | CC-BY-NC-SA-4.0 (private use, distribution, and modification permitted; credit required; same license; state changes; no liability or warranty) |
| Date | 2026-02-12 |
| Dataset | HR: uncompressed 4K GT MLP: FiM episode frames + uncompressed HR counterparts for the LR datasets /// LR: 1080p MLP: FiM episode frames sourced from YouTube at 3 different bitrates + custom MLP: FiM focal blur dataset + custom MLP: FiM GIF compression dataset at 3 compression levels + custom MLP: FiM difficult details and other edge cases dataset + custom artificially-degraded MLP: FiM background dataset (see the sketch after this table) |
| Dataset size | 53,560 |
| Training iterations | 500,000 |
| Training epochs | 91 |
| Training batch size | 10 |
| Training HR size | 192 |
| Training OTF | No |
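The GIF compression portion of the LR dataset can be approximated in a few lines with Pillow; the sketch below is a hypothetical reconstruction (palette quantization plus a 2x downscale), not the exact degradation settings used for V2.0.

```python
# Hypothetical sketch of one LR degradation: GIF-style palette quantization + 2x downscale.
from PIL import Image

def gif_degraded_lr(hr_path: str, lr_path: str, colors: int = 64, scale: int = 2) -> None:
    hr = Image.open(hr_path).convert("RGB")
    # Halve the resolution so the pair matches a 2x model (4K HR -> ~1080p LR in spirit).
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.Resampling.BICUBIC)
    # GIF-like degradation: reduce to a limited palette, then convert back to RGB.
    lr = lr.quantize(colors=colors, dither=Image.Dither.FLOYDSTEINBERG).convert("RGB")
    lr.save(lr_path)

# Example: three "compression levels" by varying the palette size,
# loosely mirroring the three levels mentioned in the dataset description.
for level, colors in enumerate((256, 64, 16), start=1):
    gif_degraded_lr("frame_hr.png", f"frame_lr_gif_l{level}.png", colors=colors)
```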
