
**Notice:** OpenModelDB is still in alpha and actively being worked on. Please feel free to share your feedback and report any bugs you find.

The best place to find AI Upscaling models

OpenModelDB is a community-driven database of AI Upscaling models. We aim to provide a better way to find and compare models than existing sources.

Found 649 models
OmniSR
1x
NES Composite to RGB
Takes composite/RF/VHS NES footage and attempts to restore it to RGB quality. Assumes the footage has been properly deinterlaced via field duplication from 240p to 480p/720p/etc. (a rough sketch of this preprocessing follows below). Note that:

* All footage was captured in 240p/480p/720p NTSC.
* RGB footage was captured via an AV Famicom with the RGB Blaster via the Retrotink 2x or GBS Control.
* The model was trained exclusively on individual frames, so it can't fix things like dropouts.
* The even and odd fields of NES composite tend to be a bit... different from each other, so there will be some jitter at 60fps.
* I don't have access to an NES Toploader, so I wouldn't expect it to fix the jailbars very well.

Revision History:

* 1.5.0 (06/29/2025): Applied additional augmentations to reduce overfitting. Also added a small amount of 720p training data.
* 1.0.0 (11/03/2024): Initial release.
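For reference, here is a minimal sketch of the kind of field-duplication deinterlace this model expects as preprocessing, done by calling ffmpeg from Python. This is one reasonable reading of "deinterlaced via field duplication", not the author's actual capture pipeline, and the filenames are placeholders:

```python
# Hypothetical field-duplication "bob" deinterlace via ffmpeg,
# approximating the 240p -> 480p preprocessing this model assumes.
import subprocess

def bob_deinterlace(src: str, dst: str) -> None:
    # separatefields: split each interlaced frame into its two fields
    # (halving height, doubling frame rate), then line-double each field
    # with nearest-neighbour scaling so no new detail is interpolated.
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-vf", "separatefields,scale=iw:ih*2:flags=neighbor",
            "-c:a", "copy",
            dst,
        ],
        check=True,
    )

bob_deinterlace("nes_capture_480i.mkv", "nes_capture_480p60.mkv")  # placeholder names
```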
OmniSR
1x
Where On Earth-Deinterlace Fix-Large
The "Complete Series" release of "Where on Earth is Carmen Sandiego?" by Mill Creek has a number of episodes that suffer from some rather harsh deinterlacing. This model attempts to restore the footage to something closer to a more advanced deinterlacing job.
Compact
1x
Where On Earth-Deinterlace Fix-Small
The "Complete Series" release of "Where on Earth is Carmen Sandiego?" by Mill Creek has a number of episodes that suffer from some rather harsh deinterlacing. This model attempts to restore the footage to something closer to a more advanced deinterlacing job.
SPAN
2x
2xLiveActionV1_SPAN
SPAN model for live action film and digital video. The main goal is to fix/reduce common video quality problems while maintaining fidelity. I tried the existing video-focused models and they all denoise or cause colour shifts, so I decided to train my own.

The model is trained with compression (JPEG, MPEG-4 ASP, H264, VP9, H265), chroma subsampling, blurriness from multiple scaling, uneven horizontal and vertical resolution, oversharpening halos, bad deinterlacing jaggies, and onscreen text. It is not trained to remove noise at all, so it preserves details in the source well. To prevent colour/brightness shifts, I used consistency loss in neosr. I had to modify the consistency loss to use a stronger blur so it doesn't interfere with the halo removal (a rough sketch of the idea follows below).

Limitations:

1. The model has limited ability to see details through heavy grain, but light to moderate grain is fine.
2. The model still does not handle bad deinterlacing perfectly, especially if the source is vertically resized. Fixing bad deinterlacing is not the main goal, so it is what it is. Sources that are line-doubled throughout should be descaled back to half height first for best results.
3. The model sometimes oversharpens a little. This is probably because the training data contains some oversharpened images.
4. This model generally cannot handle VHS degradation.

More comparisons: https://slow.pics/c/DtDN7gaq

The training config and image degradation scripts used to create the training data can be found at https://github.com/jcj83429/upscaling/tree/9332e7d5b07747ff347e5abdc43f8144364de9f7/2xLiveActionV1_SPAN
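For illustration, here is a hedged PyTorch sketch of a blur-based consistency loss in the spirit described above (not the author's actual neosr modification): both prediction and target are blurred before the L1 comparison, so low-frequency colour/brightness shifts are penalized while high-frequency edits like halo removal pass through. The kernel size and sigma are assumptions; "stronger blur" simply means a larger Gaussian.

```python
# Sketch of a blur-based consistency loss (assumed form, not the
# author's exact code): compare only low-frequency content so global
# colour/brightness must match while fine detail is left free.
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 21, sigma: float = 5.0) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)  # separable 2D Gaussian

def blur_consistency_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred, target: (N, C, H, W) in [0, 1]
    k = gaussian_kernel().to(pred.device, pred.dtype)
    c = pred.shape[1]
    weight = k.view(1, 1, *k.shape).repeat(c, 1, 1, 1)  # one kernel per channel
    pad = k.shape[-1] // 2

    def blur(x: torch.Tensor) -> torch.Tensor:
        return F.conv2d(F.pad(x, [pad] * 4, mode="reflect"), weight, groups=c)

    return F.l1_loss(blur(pred), blur(target))
```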
DAT
4x
PBRify_UpscalerV4
A 4x model for Compression Removal, General Upscaler, Restoration. A new version of the main PBRify upscaling model. The PBRify Upscaler series of models is meant to take existing game textures from older 2000s-era games and upscale them to usable quality. V4 significantly improves detail over the previous V3 model. It is slower, as it's based on the DAT2 architecture, but in my opinion the results are very much worthwhile. **Showcase:** https://slow.pics/c/vMGFFfFh
SPAN
1x
SuperScale
A 1x model for Anti-aliasing, Restoration. I was bored, so I did this. This model uses DPID as the scaling algorithm for the HRs; the original images were 8k or 12k. DPID is significantly sharper than Box/Area scaling, yet does a great job with aliasing. This allows for a very sharp model with minimal artifacts, even on the SPAN version. (A rough sketch of DPID follows below.)

The main model is trained on 12k images captured with Nvidia Ansel. It took about 2 days to manually capture 4k and 12k pairs for this model. The 4k captures were used as the LRs, and the 12k captures were resized to 4k with DPID using randomized lambda values, then used as the HRs. The Alt model is trained exclusively on 8k images from my 8k dataset, resized to 4k with DPID. This gives a clearer result with less noise, but it doesn't handle long edges well at all.

Thanks to CF2lter for advice on preparing the dataset, and umzi2 for creating the rust version of DPID. **Showcase:** https://slow.pics/c/TCyqje9K
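For those curious, here is a rough NumPy sketch of the DPID idea (after the original detail-preserving image downscaling paper), not the rust implementation credited above. For an integer factor, each output pixel is a weighted mean of its input block, with weights that grow with a pixel's distance from the block average raised to the power lambda; using the plain block average as the guide image, and the epsilon term, are simplifications of mine.

```python
# Illustrative DPID-style downscaling sketch: lambda = 0 reduces to
# box filtering; larger lambda emphasizes outlier pixels, i.e. detail.
import numpy as np

def dpid_downscale(img: np.ndarray, s: int, lam: float = 1.0) -> np.ndarray:
    # img: (H, W, C) float in [0, 1]; H and W assumed divisible by s.
    h, w, c = img.shape
    blocks = img.reshape(h // s, s, w // s, s, c).transpose(0, 2, 1, 3, 4)
    guide = blocks.mean(axis=(2, 3), keepdims=True)       # box-filtered guide
    dist = np.linalg.norm(blocks - guide, axis=-1, keepdims=True)
    weights = (dist / np.sqrt(c)) ** lam + 1e-8           # avoid all-zero weights
    return (blocks * weights).sum(axis=(2, 3)) / weights.sum(axis=(2, 3))

# Randomizing lam per image, as the description mentions, varies how
# aggressively detail is emphasized in the resulting HR targets.
```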
RealPLKSR
1x
SuperScale_Alt_RPLKSR_S
A 1x model for Anti-aliasing, Restoration. I was bored, so I did this. This model uses DPID as the scaling algorithm for the HRs; the original images were 8k or 12k. DPID is significantly sharper than Box/Area scaling, yet does a great job with aliasing. This allows for a very sharp model with minimal artifacts, even on the SPAN version.

The main model is trained on 12k images captured with Nvidia Ansel. It took about 2 days to manually capture 4k and 12k pairs for this model. The 4k captures were used as the LRs, and the 12k captures were resized to 4k with DPID using randomized lambda values, then used as the HRs. The Alt model is trained exclusively on 8k images from my 8k dataset, resized to 4k with DPID. This gives a clearer result with less noise, but it doesn't handle long edges well at all.

Thanks to CF2lter for advice on preparing the dataset, and umzi2 for creating the rust version of DPID. **Showcase:** https://slow.pics/c/TCyqje9K
RealPLKSR
1x
SuperScale_RPLKSR_S
A 1x model for Anti-aliasing, Restoration. I was bored, so I did this. This model uses DPID as the scaling algorithm for the HRs; the original images were 8k or 12k. DPID is significantly sharper than Box/Area scaling, yet does a great job with aliasing. This allows for a very sharp model with minimal artifacts, even on the SPAN version.

The main model is trained on 12k images captured with Nvidia Ansel. It took about 2 days to manually capture 4k and 12k pairs for this model. The 4k captures were used as the LRs, and the 12k captures were resized to 4k with DPID using randomized lambda values, then used as the HRs. The Alt model is trained exclusively on 8k images from my 8k dataset, resized to 4k with DPID. This gives a clearer result with less noise, but it doesn't handle long edges well at all.

Thanks to CF2lter for advice on preparing the dataset, and umzi2 for creating the rust version of DPID. **Showcase:** https://slow.pics/c/TCyqje9K
TSCUNet
2x
GameUpV2-TSCUNet
A 2x model for Compression Removal, General Upscaler, Restoration. This is my first video model! It's aimed at restoring compressed video game footage, like what you'd get from Twitch or YouTube. I've attached an example below. It's trained on TSCUNet using lossless game recordings, degraded with my video destroyer. The degradations include resizing and H264, H265, and AV1 compression.

__IMPORTANT:__ You cannot use this model with chaiNNer or any other tool. You need to use **this**. Just run `test_vsr.py` after installing the requirements, using the example command from the readme. You can also use the ONNX version of the model with `test_onnx.py` (a rough onnxruntime sketch follows below). If you want to train a TSCUNet model yourself, use traiNNer-redux. I've included scripts in the SCUNet repository to convert your own models to ONNX if desired.

**Showcase:** Watch in a Chrome-based browser: https://video.yellowmouse.workers.dev/?key=Fvxw482Nsv8=
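As a rough illustration of the ONNX route, here is a hedged onnxruntime sketch; the bundled `test_onnx.py` is the supported way to run the model, and the file name, input layout, clip length, and single-output assumption below are guesses about the export, not documented facts.

```python
# Hedged sketch of running the ONNX export with onnxruntime.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "gameup_v2_tscunet.onnx",              # hypothetical filename
    providers=["CPUExecutionProvider"],
)
inp = session.get_inputs()[0]

# A dummy 5-frame clip in an assumed (N, T, C, H, W) layout; a real
# pipeline would slide a window of consecutive decoded frames through.
clip = np.random.rand(1, 5, 3, 256, 256).astype(np.float32)
(out,) = session.run(None, {inp.name: clip})
print(out.shape)  # expected roughly (1, 3, 512, 512) for a 2x model
```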
TSCUNet
2x
GameUpV2-TSCUNet-Small
A 2x model for Compression Removal, General Upscaler, Restoration. This is my first video model! It's aimed at restoring compressed video game footage, like what you'd get from Twitch or YouTube. I've attached an example below. It's trained on TSCUNet using lossless game recordings, degraded with my video destroyer. The degradations include resizing and H264, H265, and AV1 compression.

__IMPORTANT:__ You cannot use this model with chaiNNer or any other tool. You need to use **this**. Just run `test_vsr.py` after installing the requirements, using the example command from the readme. You can also use the ONNX version of the model with `test_onnx.py`. If you want to train a TSCUNet model yourself, use traiNNer-redux. I've included scripts in the SCUNet repository to convert your own models to ONNX if desired.

**Showcase:** Watch in a Chrome-based browser: https://video.yellowmouse.workers.dev/?key=Fvxw482Nsv8=
TSCUNet
2x
GameUp-TSCUNet
A 2x model for Compression Removal, General Upscaler, Restoration. This is my first video model! It's aimed at restoring compressed video game footage, like what you'd get from Twitch or YouTube. I've attached an example below. It's trained on TSCUNet using lossless game recordings, degraded with my video destroyer. The degradations include resizing and H264, H265, and AV1 compression.

__IMPORTANT:__ You cannot use this model with chaiNNer or any other tool. You need to use **this**. Just run `test_vsr.py` after installing the requirements, using the example command from the readme. You can also use the ONNX version of the model with `test_onnx.py`. If you want to train a TSCUNet model yourself, use traiNNer-redux. I've included scripts in the SCUNet repository to convert your own models to ONNX if desired.

**Showcase:** Watch in a Chrome-based browser: https://video.yellowmouse.workers.dev/?key=Fvxw482Nsv8=
RCAN
2x
AnimeSharpV4
A 2x model for Anime. This is a successor to AnimeSharpV3, based on RCAN instead of ESRGAN. It outperforms both versions of AnimeSharpV3 in every capacity: it's sharper, retains *even more* detail, and has very few artifacts. It is __extremely faithful__ to the input image, even with heavily compressed inputs. To use this model, you must update to the **latest chaiNNer nightly build**.

The `2x-AnimeSharpV4_Fast_RCAN_PU` model is trained on RCAN with PixelUnshuffle. This is much faster, but comes at the cost of quality: I believe the model is ~95% the quality of the full V4 RCAN model, but ~6x faster in PyTorch and ~4x faster in TensorRT (a sketch of why PixelUnshuffle speeds things up follows below). This model is ideal for video processing, and as such was trained to handle MPEG2 & H264 compression.

__Comparisons:__ https://slow.pics/c/63Qu8HTN https://slow.pics/c/DBJPDJM9
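A short PyTorch sketch of why a PixelUnshuffle front-end is faster, assuming the usual arrangement (the actual RCAN_PU layout isn't documented here): pixel-unshuffle folds each 2x2 block of pixels into channels, so the network body processes a quarter of the spatial area.

```python
# PixelUnshuffle trades spatial resolution for channels, so the heavy
# backbone runs on a quarter of the pixels. Sketch only; the real
# RCAN_PU architecture is an assumption.
import torch
import torch.nn as nn

x = torch.rand(1, 3, 480, 640)   # input frame
y = nn.PixelUnshuffle(2)(x)
print(y.shape)                   # torch.Size([1, 12, 240, 320])
# The body now sees 240x320 instead of 480x640; a PixelShuffle at the
# tail restores (and, for a 2x model, further upscales) the resolution.
```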