GitHub Release Link

Name: Ludvae200
License: CC BY 4.0
Author: Philip Hofmann
Network: LUD-VAE
Scale: 1
Release Date: 25.03.2024
Purpose: 1x realistic noise degradation model
Iterations: 190,000
H_size: 64
n_channels: 3
dataloader_batch_size: 16
H_noise_level: 8
L_noise_level: 3
Dataset: RealLR200
Number of train images: 200
OTF Training: No
Pretrained_Model_G: None

Description: 1x realistic noise degradation model, trained on the RealLR200 dataset as released on the SeeSR GitHub repo. Alongside the ludvae200.pth model file, I provide a file that contains not only the code but also an inference script to run this model on a dataset of your choice. Adapt the script by adjusting the file paths in its beginning section: your input folder, your output folder, the folder holding the ludvae200.pth model, and the folder where the log text file should be generated. The text file generation works the same way as in Kim's Dataset Destroyer: each image file is logged together with the values used to degrade that specific image, and the log is append-only, never overwriting previous entries.
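As a rough sketch of that setup, the path section and the append-only logging could look like the following. The variable and function names here are hypothetical illustrations, not the actual identifiers from the released script:

```python
import os

# Hypothetical path section, mirroring what you adjust at the top of the
# released inference script; replace with your own folders.
input_folder = "path/to/input_images"
output_folder = "path/to/degraded_output"
model_path = "path/to/ludvae200.pth"
log_folder = "path/to/logs"

def log_degradation(filename, values, log_dir):
    """Append one log line per image with the degradation values used.

    Opening the file in "a" (append) mode means entries accumulate across
    runs and existing lines are never overwritten.
    """
    os.makedirs(log_dir, exist_ok=True)
    log_path = os.path.join(log_dir, "degradations.txt")
    with open(log_path, "a") as f:
        f.write(f"{filename}: {values}\n")
```

The append mode is the key detail: rerunning the script on another batch extends the same text file instead of replacing it.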

You can also adjust the strength settings inside the inference script to fit your needs. If you generally want weaker noise, for example, lower the temperature upper limit from 0.4 to 0.2, or even further: in line 96, change "temperature_strength = uniform(0.1,0.4)" to "temperature_strength = uniform(0.1,0.2)".

These values default to what I needed for my last dataset degradation workflow, but feel free to adjust them. You can also do what I did: temporarily use deterministic values across multiple runs to determine the minimum and maximum noise strengths you deem suitable for your dataset.
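One way to run that calibration is to replace the random draw with a small deterministic grid and do one preview run per value. This helper is an assumption-laden sketch of that idea, not part of the released script:

```python
def calibration_grid(lo, hi, steps):
    """Evenly spaced deterministic strength values between lo and hi.

    Use one value per preview run (e.g. pin temperature_strength to it
    instead of drawing from uniform(lo, hi)), inspect the outputs, and
    keep the lowest/highest values that still look acceptable.
    """
    step = (hi - lo) / (steps - 1)
    return [round(lo + i * step, 3) for i in range(steps)]

# e.g. previewing the default 0.1..0.4 temperature range in four runs:
calibration_grid(0.1, 0.4, 4)  # → [0.1, 0.2, 0.3, 0.4]
```

Once you have found your bounds this way, feed them back into the uniform(...) call for randomized production runs.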

See the examples for what this looked like in the last dataset workflow I used my model in.
