
Notice

OpenModelDB is still in alpha and actively being worked on. Please feel free to share your feedback and report any bugs you find.

The best place to find AI Upscaling models

OpenModelDB is a community-driven database of AI Upscaling models. We aim to provide a better way to find and compare models than existing sources.

The database currently lists 647 models.
SPAN
2x
2xLiveActionV1_SPAN
SPAN model for live action film and digital video. The main goal is to fix or reduce common video quality problems while maintaining fidelity. I tried the existing video-focused models and they all denoise or cause colour shifts, so I decided to train my own. The model is trained with compression (JPEG, MPEG-4 ASP, H264, VP9, H265), chroma subsampling, blurriness from multiple scaling passes, uneven horizontal and vertical resolution, oversharpening halos, bad deinterlacing jaggies, and onscreen text. It is not trained to remove noise at all, so it preserves details in the source well. To prevent colour/brightness shifts, I used consistency loss in neosr; I had to modify it to use a stronger blur so it doesn't interfere with the halo removal.

Limitations:
1. The model has limited ability to see details through heavy grain, but light to moderate grain is fine.
2. It still does not handle bad deinterlacing perfectly, especially if the source is vertically resized. Fixing bad deinterlacing is not the main goal, so it is what it is. Sources that are line-doubled throughout should be descaled back to half height first for best results.
3. It sometimes oversharpens a little, probably because the training data contains some oversharpened images.
4. It generally cannot handle VHS degradation.

More comparisons: https://slow.pics/c/DtDN7gaq

The training config and the image degradation scripts used to create the training data can be found at https://github.com/jcj83429/upscaling/tree/9332e7d5b07747ff347e5abdc43f8144364de9f7/2xLiveActionV1_SPAN
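The degradation list above maps naturally onto a scripted LR generator. The sketch below is a heavily simplified illustration (assuming OpenCV), not the author's pipeline; the real scripts live in the linked repository and cover far more degradations.

```python
import cv2
import numpy as np

def make_lr(hr_bgr: np.ndarray) -> np.ndarray:
    """Toy version of the degradation list above: uneven rescaling,
    then a JPEG round-trip (which typically also applies 4:2:0 chroma
    subsampling). The real pipeline adds more codecs, halos, jaggies,
    and onscreen text; see the linked scripts."""
    h, w = hr_bgr.shape[:2]
    # Uneven horizontal/vertical resolution: scale each axis independently.
    fx = float(np.random.uniform(0.6, 1.0))
    fy = float(np.random.uniform(0.6, 1.0))
    lr = cv2.resize(hr_bgr, (int(w * fx), int(h * fy)), interpolation=cv2.INTER_AREA)
    # 2x model: the LR target is half the HR size.
    lr = cv2.resize(lr, (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)
    # Compression round-trip at a random quality.
    q = int(np.random.uniform(40, 90))
    _, buf = cv2.imencode(".jpg", lr, [cv2.IMWRITE_JPEG_QUALITY, q])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```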
DAT
4x
PBRify_UpscalerV4
A 4x model for Compression Removal, General Upscaler, Restoration. A new version of the main PBRify upscaling model. The PBRify Upscaler series of models is meant to take existing game textures from older 2000s-era games and upscale them to usable quality. V4 significantly improves detail over the previous V3 model. It is slower, as it's based on the DAT2 architecture, but the results are very much worthwhile imo. **Showcase:** https://slow.pics/c/vMGFFfFh
SPAN
1x
SuperScale
A 1x model for Anti-aliasing, Restoration. I was bored, so I did this. This model uses DPID as the scaling algorithm for the HRs. The original images were 8k or 12k. DPID is significantly sharper than Box/Area scaling, yet does a great job with aliasing. This allows for a very sharp model with minimal artifacts, even on the SPAN version.

The main model is trained on 12k images captured with Nvidia Ansel. It took about 2 days capturing manual 4k and 12k pairs for this model. The 4k captures were used as the LRs; the 12k captures were resized to 4k with DPID using randomized lambda values, then used as the HRs. The Alt model is trained exclusively on 8k images from my 8k dataset, resized to 4k with DPID. This provides a clearer result with less noise, but it doesn't handle long edges well at all.

Thanks to CF2lter for advice on preparing the dataset, and umzi2 for creating the Rust version of DPID.

**Showcase:** https://slow.pics/c/TCyqje9K
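The HR-side preparation described above is easy to mock up. In the sketch below, the paths and the lambda range are made up for illustration, and the Lanczos call is only a placeholder: the author used DPID (e.g. umzi2's Rust port), which Pillow does not provide.

```python
import random
from pathlib import Path
from PIL import Image

# Pairing scheme described above: native 4k captures are the LRs, and
# 12k captures downscaled to 4k become the HRs. DPID with a randomized
# lambda per image is what the author used; Lanczos here is a stand-in.
for shot in sorted(Path("captures_12k").glob("*.png")):
    lam = random.uniform(0.0, 1.0)  # would be passed to DPID; unused by the stand-in
    hr = Image.open(shot).resize((3840, 2160), Image.LANCZOS)
    hr.save(Path("dataset/hr") / shot.name)
    # The matching native 4k capture (same filename) goes to dataset/lr
    # untouched, giving a 1x restoration/anti-aliasing pair.
```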
RealPLKSR
1x
SuperScale_Alt_RPLKSR_S
A 1x model for Anti-aliasing, Restoration. Same description as SuperScale above; this is the Alt (8k-dataset) variant on the RealPLKSR-S architecture.
RealPLKSR
1x
SuperScale_RPLKSR_S
A 1x model for Anti-aliasing, Restoration. Same description as SuperScale above; this is the RealPLKSR-S variant.
TSCUNet
2x
GameUpV2-TSCUNet
A 2x model for Compression Removal, General Upscaler, Restoration. This is my first video model! It's aimed at restoring compressed video game footage, like what you'd get from Twitch or YouTube; see the showcase link below for an example. It's trained on TSCUNet using lossless game recordings degraded with my video destroyer. The degradations include resizing and H264, H265, and AV1 compression.

__IMPORTANT:__ You cannot use this model with chaiNNer or any other tool. You need to use **this**. Just run `test_vsr.py` after installing the requirements, using the example command from the readme. You can also use the ONNX version of the model with `test_onnx.py`. If you want to train a TSCUNet model yourself, use traiNNer-redux. I've included scripts in the SCUNet repository to convert your own models to ONNX if desired.

**Showcase:** Watch in a Chrome-based browser: https://video.yellowmouse.workers.dev/?key=Fvxw482Nsv8=
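Since the ONNX export is the easiest route into a custom pipeline, a small onnxruntime snippet can confirm what the model expects before you script anything. The filename below is a placeholder; the supported invocation is the repo's `test_onnx.py`.

```python
import onnxruntime as ort

# Load the exported model ("gameupv2.onnx" is a placeholder filename)
# and print its input/output signatures. TSCUNet is a video model, so
# expect a clip dimension (a stack of consecutive frames) rather than a
# single image; checking here beats guessing.
sess = ort.InferenceSession(
    "gameupv2.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
for t in sess.get_inputs():
    print("input:", t.name, t.shape, t.type)
for t in sess.get_outputs():
    print("output:", t.name, t.shape, t.type)
```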
TSCUNet
2x
GameUpV2-TSCUNet-Small
A 2x model for Compression Removal, General Upscaler, Restoration. Same description as GameUpV2-TSCUNet above; this is the smaller, faster variant.
TSCUNet
2x
GameUp-TSCUNet
A 2x model for Compression Removal, General Upscaler, Restoration. Same description as GameUpV2-TSCUNet above; this is the earlier (V1) release.
RCAN
2x
AnimeSharpV4
A 2x model for Anime. This is a successor to AnimeSharpV3 based on RCAN instead of ESRGAN. It outperforms both versions of AnimeSharpV3 in every capacity: it's sharper, retains *even more* detail, and has very few artifacts. It is __extremely faithful__ to the input image, even with heavily compressed inputs. To use this model, you must update to the **latest chaiNNer nightly build**.

The `2x-AnimeSharpV4_Fast_RCAN_PU` model is trained on RCAN with PixelUnshuffle. This is much faster, but comes at the cost of quality: I believe it reaches ~95% of the quality of the full V4 RCAN model while being ~6x faster in PyTorch and ~4x faster in TensorRT. The Fast model is ideal for video processing, and as such was trained to handle MPEG2 & H264 compression.

__Comparisons:__ https://slow.pics/c/63Qu8HTN https://slow.pics/c/DBJPDJM9
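For readers unfamiliar with the PixelUnshuffle trick behind the Fast variant, a few lines of PyTorch show why it speeds things up: pixels are folded into channels so the heavy backbone runs at reduced spatial resolution. The shapes below are illustrative, not AnimeSharpV4's actual configuration.

```python
import torch
import torch.nn as nn

# PixelUnshuffle folds each 2x2 block of pixels into the channel axis,
# so the backbone (RCAN here) processes 1/4 as many spatial positions.
unshuffle = nn.PixelUnshuffle(downscale_factor=2)

x = torch.randn(1, 3, 64, 64)   # toy RGB input
y = unshuffle(x)
print(y.shape)                   # torch.Size([1, 12, 32, 32])
# After the backbone, a 4x PixelShuffle tail would map 32x32 features
# back to 128x128, i.e. a net 2x upscale of the 64x64 input.
```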
RCAN
2x
AnimeSharpV4_Fast_RCAN_PU
A 2x model for Anime. Same description as AnimeSharpV4 above; this is the faster PixelUnshuffle variant described there.
RCAN
1x
UnResizeOnly_RCAN
A 1x model for Artifact Removal. A version of UnResize trained on RCAN, which is faster and provides better quality than ESRGAN. This model does **not remove compression or perform deblurring**, unlike the original UnResize models. __It **only** removes scaling artifacts.__ I've attached the script I used to create the dataset (it utilizes ImageMagick) and the config for traiNNer-redux.
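The attached script isn't reproduced here, but the general recipe for a "scaling artifacts only" dataset is a resize round-trip that ends at the original resolution, with no compression or blur added. A rough Pillow sketch, with made-up paths and factors (the author's actual script uses ImageMagick):

```python
import random
from pathlib import Path
from PIL import Image

# Round-trip resize: shrink by a random factor, then re-enlarge back to
# the original size with a different filter. The result carries only
# scaling artifacts (softness, ringing, aliasing), matching the model's
# stated scope: no compression, no deblurring.
for src in sorted(Path("hr").glob("*.png")):
    hr = Image.open(src)
    w, h = hr.size
    f = random.uniform(0.5, 0.9)
    lr = hr.resize((int(w * f), int(h * f)), Image.BILINEAR)
    lr = lr.resize((w, h), Image.BICUBIC)   # back to HR size: a 1x pair
    lr.save(Path("lr") / src.name)
```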
OmniSR
2x
Digital Pokémon-Large
This model is designed to upscale the standard-definition digital era of the Pokémon anime, which runs from late season 5 (Master Quest) to early season 12 (Galactic Battles). During this time, the show was animated digitally in a 4:3 ratio. This process was also used for Mewtwo Returns, most of Pokémon Chronicles, and the Mystery Dungeon specials.

Advice/Known Limitations:
* This OmniSR model can occasionally produce black frames when run in fp16 mode. This seems to be more common in the TPCi era (seasons 9 and later). The issue is sporadic enough that it probably makes sense to do a first pass in fp16, then re-upscale any affected shots in fp32 (a simple detection sketch follows below).
* I recommend using QTGMC on a preset of "Slow" or slower for deinterlacing. While the show is primarily animated at 12/24 fps, some elements like backgrounds are animated at a full 60i.
* The model is not great at handling fonts, particularly the italicized text in the episode credits. This is despite font images being included in the training data.
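As a companion to the fp16 advice in the first bullet, here is a hypothetical sketch of how affected shots could be found automatically. It assumes OpenCV and a simple mean-luminance threshold; this is an illustration, not the author's workflow.

```python
import cv2

def find_black_frames(path: str, thresh: float = 2.0) -> list[int]:
    """Scan an fp16-upscaled encode and report frames that are (near)
    black, so those shots can be re-run in fp32. The threshold is a
    guess; tune it against a few known-bad frames."""
    cap = cv2.VideoCapture(path)
    bad, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame.mean() < thresh:   # mean over all pixels and channels
            bad.append(idx)
        idx += 1
    cap.release()
    return bad

print(find_black_frames("upscaled_fp16.mkv"))  # placeholder filename
```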