# Introduction

[[Projects/Video/Upscaling/Dragon Ball|Dragon Ball]], in general, is plagued with bad releases, particularly [[Dragon Ball Z]] (the only decent Blu-ray release seems to be the Level Sets, and the Dragon Boxes are being sold at ridiculous prices). I came into possession of the Blue Box OG Dragon Ball DVD sets. After some searching, I found that there isn't a FUNimation/Crunchyroll Blu-ray release, unlike [[Dragon Ball Z]]. While I won't be able to achieve true Blu-ray quality by doing this, it should be much better than the DVDs.

After ripping all the episodes into individual MKV files using `MakeMKV`, I decided to try my hand at AI upscaling them. The first step was de-interlacing them, which `Hybrid` does beautifully, thanks to its `QTGMC` de-interlacing within `VapourSynth`. After `Hybrid` generated the file, I grabbed the script and edited it to better suit my needs, dropping the GUI altogether. Because Dragon Ball has 153 episodes, I wrote a script to automate all the steps, including the upscale. I will be using [[PowerShell]] for this.

# Comparisons

The earlier episodes are in the worst condition, so I'll show the very first episode.
> [!note]- Original 1
> ![[Assets/Attachments/Upscaling/DB01_01SD.png]]
> [[Assets/Attachments/Upscaling/DB01_01SD.png|View File]]

> [!note]- Upscale 1
> ![[Assets/Attachments/Upscaling/DB01_01HD.png]]
> [[Assets/Attachments/Upscaling/DB01_01HD.png|View File]]

> [!note]- Original 2
> ![[Assets/Attachments/Upscaling/DB01_02SD.png]]
> [[Assets/Attachments/Upscaling/DB01_02SD.png|View File]]

> [!note]- Upscale 2
> ![[Assets/Attachments/Upscaling/DB01_02HD.png]]
> [[Assets/Attachments/Upscaling/DB01_02HD.png|View File]]

> [!note]- Original 3
> ![[Assets/Attachments/Upscaling/DB01_03SD.png]]
> [[Assets/Attachments/Upscaling/DB01_03SD.png|View File]]

> [!note]- Upscale 3
> ![[Assets/Attachments/Upscaling/DB01_03HD.png]]
> [[Assets/Attachments/Upscaling/DB01_03HD.png|View File]]

# Deinterlacing

For de-interlacing, I will use `ffmpeg` with `vspipe` for `VapourSynth`. To support resuming after each step, I will run each step by itself instead of piping them all into one another.

> [!example]
> ```powershell
> vspipe.exe --arg PAR=1.3333 --container y4m "SynthSkript.vpy" | ffmpeg.exe -y -hide_banner -loglevel error -stats -noautorotate -nostdin -threads 8 -f yuv4mpegpipe -i - -an -sn -vf "zscale=rangein=tv:range=tv" -strict 1 -fps_mode passthrough -vcodec prores_ks -profile:v 3 -vtag apch -aspect 1.3333 -f mov "out_deinterlaced.mov"
> ```

`vspipe` accepts arguments, so we can pass things like `width`, `height`, etc. to it with `--arg`. The `SynthSkript.vpy` script is generated by `Hybrid`, but I've edited it to accept arguments.
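Since each episode needs its own set of `--arg` values, it helps to build the `vspipe` argument list programmatically. Below is a minimal sketch of how that might look in PowerShell; the function name and parameters are illustrative, not the actual script.

```powershell
# Build the vspipe argument list for one episode.
# Hypothetical helper - names and defaults are my own, not from the real script.
function New-VspipeArgList {
    param($InputFile, $Par, $Width, $Height, $ScriptFile = 'SynthSkript.vpy')
    @(
        '--arg', "input_file=$InputFile"
        '--arg', "PAR=$Par"
        '--arg', "width=$Width"
        '--arg', "height=$Height"
        '--container', 'y4m'
        $ScriptFile
    )
}

# Splatting the array makes each element its own command-line argument:
# & vspipe.exe @(New-VspipeArgList -InputFile 'ep01.mkv' -Par 1.3333 -Width 720 -Height 540) | ffmpeg.exe ...
```

Keeping the arguments in an array also makes them easy to log when a step fails mid-batch.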
> [!example]- SynthSkript.vpy
> ```python
> # Imports
> import vapoursynth as vs
> import os
> import ctypes
> import sys
>
> # Note: hybrid_path, input_file, width, height, etc. are injected by vspipe --arg
>
> # Scripts folder
> scriptPath = ""
>
> if os.name == 'nt':
>     # Loading Support Files
>     Dllref = ctypes.windll.LoadLibrary(hybrid_path + "/64bit/vsfilters/Support/libfftw3f-3.dll")
>     scriptPath = hybrid_path + '/64bit/vsscripts'
> elif os.name == 'posix':
>     uname = os.uname()
>     if uname.sysname == 'Darwin':
>         scriptPath = hybrid_path + '/Contents/MacOS/vsscripts'
>     else:
>         scriptPath = '/vsscripts'  # Needs to be found
>
> # Getting the VapourSynth core
> core = vs.core
> sys.path.insert(0, os.path.abspath(scriptPath))
>
> # Loading Plugins
> if os.name == 'nt':
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/DenoiseFilter/FFT3DFilter/fft3dfilter.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/Support/EEDI3m.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/Support/libmvtools.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/Support/scenechange.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/Support/fmtconv.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/DeinterlaceFilter/Bwdif/Bwdif.dll")
>     core.std.LoadPlugin(path=hybrid_path + "/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
>
> # Import scripts
> import havsfunc
>
> # Convert the vspipe --arg values (which arrive as strings) to proper types
> AspectRatio = float(AspectRatio)
> width = int(width)
> height = int(height)
> FrameRate = float(FrameRate)
> FrameRate_Num = int(FrameRate_Num)
> FrameRate_Den = int(FrameRate_Den)
>
> clip = core.lsmas.LWLibavSource(source=input_file, format="YUV420P8", stream_index=0, cache=0, fpsnum=FrameRate_Num, fpsden=FrameRate_Den, prefer_hw=0)
>
> # Setting detected color matrix (470bg)
> clip = core.std.SetFrameProps(clip, _Matrix=5)
> # Setting color transfer info (470bg), when it is not set
> clip = clip if not core.text.FrameProps(clip, '_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
> # Setting color primaries info (BT.601 NTSC), when it is not set
> clip = clip if not core.text.FrameProps(clip, '_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
> # Setting color range to TV (limited) range
> clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
>
> # Making sure the frame rate is set to 29.97
> clip = core.std.AssumeFPS(clip=clip, fpsnum=FrameRate_Num, fpsden=FrameRate_Den)
> clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2)  # tff
>
> # Deinterlacing using QTGMC
> clip = havsfunc.QTGMC(Input=clip, Preset="Placebo", TFF=True)  # new fps 2x
> # Making sure content is perceived as frame based
> clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)  # progressive
> # MacOS: clip = core.std.SetFieldBased(clip, 0)
>
> # ColorMatrix: adjusting color matrix from 470bg to 709
> # adjusting luma range to 'limited' due to post clipping
> clip = core.resize.Bicubic(clip=clip, matrix_in_s="470bg", matrix_s="709", range_in=0, range=0)
> # Cropping the video to 720x478
> if cropLeft or cropBottom or cropRight or cropTop:
>     clip = core.std.CropRel(clip=clip, left=cropLeft, right=cropRight, top=cropTop, bottom=cropBottom)
>     # Resizing using 10 - bicubic spline
>     if maintainPAR:
>         clip = core.fmtc.resample(clip=clip, kernel="spline16", w=width, h=int(width / AspectRatio), interlaced=False, interlacedd=False)  # resolution 720x540; before YUV420P8, after YUV420P16
>     else:
>         clip = core.fmtc.resample(clip=clip, kernel="spline16", w=width, h=height, interlaced=False, interlacedd=False)  # resolution 720x540; before YUV420P8, after YUV420P16
> else:
>     clip = core.fmtc.resample(clip=clip, kernel="spline16", w=width, h=height, interlaced=False, interlacedd=False)  # resolution 720x540; before YUV420P8, after YUV420P16
>
> # Adjusting output color from YUV420P16 to YUV422P10 for ProRes
> clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P10, range_s="limited", dither_type="error_diffusion")
>
> # Set output frame rate to 59.94fps (progressive, doubled by QTGMC)
> clip = core.std.AssumeFPS(clip=clip, fpsnum=FrameRate_Num * 2, fpsden=FrameRate_Den)
>
> # Output
> clip.set_output()
> ```

We must gather some information about our video file to pass to `vspipe`. For this, I will primarily be using `MediaInfo`.

> [!example] Basic Usage
> ```powershell
> MediaInfo.exe --Output="Video;%AspectRatio%" input.mov
> MediaInfo.exe --Output=JSON input.mkv # For everything
> ```

Unfortunately, my videos use a Variable Frame Rate (VFR) instead of a Constant Frame Rate (CFR). To obtain an accurate frame count, I will be using `ffprobe`. This may take a little while to execute.

> [!example] ffprobe
> ```powershell
> ffprobe -v 0 -of csv=p=0 -select_streams v:0 -count_frames -show_entries stream=r_frame_rate,nb_read_frames input.mkv
> ```

After that, we need to see if the video needs any cropping (black borders). This will only evaluate 10 frames at the five-minute mark.

> [!example] Auto-Crop Detection
> ```powershell
> ffmpeg -hide_banner -ss 00:05:00 -i input.mkv -an -vframes 10 -vf cropdetect=24:16:0 -f null - 2>&1
> ```

All of this will be passed to the de-interlacing script. For me, the de-interlaced file is around 20GB.

# AI Upscaling

I use [[Topaz Video AI]] for upscaling. But because I wanted this all automated, I decided to work out what each argument means from the "Export Command" within [[Topaz Video AI]]. The ones we really care about are in `filter_complex`.
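The cropdetect output has to be parsed before it can be fed to the VapourSynth script. Here's a minimal sketch of that in PowerShell; the `crop=w:h:x:y` layout is what ffmpeg's `cropdetect` filter prints, but the function name is my own.

```powershell
# Pull the last crop=W:H:X:Y value out of ffmpeg's cropdetect log output.
# Hypothetical helper - not the actual script from the repo.
function Get-CropValue {
    param([string[]]$LogLines)
    $match = $LogLines | Select-String -Pattern 'crop=(\d+:\d+:\d+:\d+)' | Select-Object -Last 1
    if ($match) { $match.Matches[0].Groups[1].Value }
}

# Usage (2>&1 merges ffmpeg's stderr, where cropdetect logs, into the pipeline):
# $log = ffmpeg -hide_banner -ss 00:05:00 -i input.mkv -an -vframes 10 -vf cropdetect=24:16:0 -f null - 2>&1
# $w, $h, $x, $y = (Get-CropValue $log) -split ':'
```

Note that `cropdetect` reports `crop=w:h:x:y`, while `CropRel` in the VapourSynth script wants per-edge amounts: `left = x`, `top = y`, `right = sourceWidth - w - x`, `bottom = sourceHeight - h - y`.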
An "Export Command" should look something like this:

> [!example]- Export Command
> ```powershell
> ffmpeg "-hide_banner" "-i" "input_deinterlaced.mov" "-sws_flags" "spline+accurate_rnd+full_chroma_int" "-color_trc" "2" "-colorspace" "2" "-color_primaries" "2" "-filter_complex" "tvai_up=model=prob-3:scale=0:w=1440:h=1080:preblur=0:noise=0.25:details=0:halo=0.25:blur=0.25:compression=0.6:blend=0.2:device=-2:vram=1:instances=1,scale=w=1440:h=1080:flags=lanczos:threads=0" "-c:v" "prores_ks" "-profile:v" "3" "-vendor" "apl0" "-quant_mat" "hq" "-bits_per_mb" "1350" "-pix_fmt" "yuv422p10le" "-an" "-map_metadata" "0" "-map_metadata:s:v" "0:s:v" "-movflags" "use_metadata_tags+write_colr" "-metadata" "videoai=Enhanced using prob-3 with recover details at 0; dehalo at 25; reduce noise at 25; sharpen at 25; revert compression at 60; anti-alias/deblur at 0. and recover original detail at 20. Changed resolution to 1440x1080" "output_upscaled.mov"
> ```

As you can see, everything is in quotes for some reason.

> [!example]- Export Command (Cleaned)
> ```powershell
> ffmpeg -hide_banner -i "input_deinterlaced.mov" -sws_flags spline+accurate_rnd+full_chroma_int -color_trc 2 -colorspace 2 -color_primaries 2 -filter_complex "tvai_up=model=prob-3:scale=0:w=1440:h=1080:preblur=0:noise=0.25:details=0:halo=0.25:blur=0.25:compression=0.6:blend=0.2:device=-2:vram=1:instances=1,scale=w=1440:h=1080:flags=lanczos:threads=0" -c:v prores_ks -profile:v 3 -vendor apl0 -quant_mat hq -bits_per_mb 1350 -pix_fmt yuv422p10le -an -map_metadata 0 -map_metadata:s:v 0:s:v -movflags use_metadata_tags+write_colr -metadata "videoai=Enhanced using prob-3 with recover details at 0; dehalo at 25; reduce noise at 25; sharpen at 25; revert compression at 60; anti-alias/deblur at 0. and recover original detail at 20. Changed resolution to 1440x1080" "output_upscaled.mov"
> ```

This command will run the upscale if run from within [[Topaz Video AI]]'s Command Prompt (or if you set the Environment Variables yourself).
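For an automated script, setting those environment variables up front avoids needing Topaz's own Command Prompt. A sketch of what that might look like, assuming the `TVAI_MODEL_DIR`/`TVAI_MODEL_DATA_DIR` variable names from Topaz's CLI documentation; the paths below are the default install locations on my machine, so adjust them to yours:

```powershell
# Point the tvai filters at the model files (paths are install-dependent examples)
$env:TVAI_MODEL_DIR = 'C:\ProgramData\Topaz Labs LLC\Topaz Video AI\models'
$env:TVAI_MODEL_DATA_DIR = 'C:\ProgramData\Topaz Labs LLC\Topaz Video AI\models'

# Topaz ships its own ffmpeg build with the tvai_* filters compiled in;
# use that binary, not a stock ffmpeg
$tvaiFFmpeg = 'C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe'
& $tvaiFFmpeg -hide_banner -version
```

With that in place, the cleaned Export Command above can run from any shell.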
It's worth looking at the (extremely limited) [documentation](https://docs.topazlabs.com/video-ai/advanced-functions-in-topaz-video-ai/command-line-interface).

This produces a file that is roughly 80GB.

# x265 Encoding

I'm using `x265` to encode the upscale back down to a reasonable size. Because the video we're working on is an animation, we can use `--tune animation`. Our final size is only `1080p`, so we should try to avoid `6.x` for the level (that's UHD+, I believe). I'm using `5.2`, though you'll see `5.1` a lot more often.

You'll see I'm using `--preset placebo` - I do not care how long it takes; every optimization helps. Feel free to use a different preset. I found that a CRF (Constant Rate Factor) of 20 produces a pretty good result.

Note that `x265` cannot read `ProRes` files, so we'll use `ffmpeg` to pipe the video in. Remember that QTGMC doubled the frame rate, so both the fps (29.97 × 2 = 59.94) and the frame count are twice what `ffprobe` reported for the interlaced source.

> [!example] x265
> ```powershell
> ffmpeg.exe -y -hide_banner -loglevel error -f mov -i "input_upscaled.mov" -strict -1 -f yuv4mpegpipe - | x265.exe --log-level none --y4m --input - --input-res 1440x1080 --fps 59.94 --frames 87058 --input-depth 10 --profile main422-10 --level-idc 5.2 --preset placebo --tune animation --crf 20 --rd 4 --psy-rd 0.75 --psy-rdoq 4.0 --rdoq-level 1 --no-strong-intra-smoothing --aq-mode 1 --rskip 2 --no-rect --output "output_encode.h265"
> ```

That will give us a file that's just over 1GB or so.

# Merging the tracks back in

Now, you may have noticed that none of the commands so far copied or did anything else with any track except video. That's because I want the original, untouched audio streams. I like using `mkv`, so I'll be using `mkvmerge` to produce our final file.

First, we need to get the language of our original video track. We can use `MediaInfo` once again for this.

> [!example] MediaInfo
> ```powershell
> MediaInfo.exe --Output="Video;%Language%" input.mkv
> ```

Now, we can run our `mkvmerge`.
> [!example] mkvmerge
> ```powershell
> mkvmerge -o "out.mkv" --quiet --no-audio --no-subtitles --no-buttons --language 0:ja "input_encode.h265" --no-video "input.mkv"
> ```

# The Script

Fair warning that this was not meant to be shared. I've uploaded my script and its related files to my [GitHub](https://github.com/NightQuest/DVDUpscaling).
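For a rough idea of how the resume-after-each-step design mentioned earlier fits around these commands, here's an illustrative skeleton (not the actual script from the repo): each step is skipped when its output file already exists, so a crashed batch can pick up where it left off.

```powershell
# Illustrative per-episode loop with resume checks; the real work happens
# where the placeholder comments are (vspipe/ffmpeg, Topaz, x265, mkvmerge).
Get-ChildItem -Path '.\rips' -Filter '*.mkv' | ForEach-Object {
    $base  = $_.BaseName
    $deint = ".\work\$base.deint.mov"
    $up    = ".\work\$base.up.mov"
    $enc   = ".\work\$base.h265"
    $final = ".\done\$base.mkv"

    if (-not (Test-Path $deint)) { <# vspipe | ffmpeg de-interlace step #> }
    if (-not (Test-Path $up))    { <# Topaz ffmpeg upscale step #> }
    if (-not (Test-Path $enc))   { <# ffmpeg | x265 encode step #> }
    if (-not (Test-Path $final)) { <# mkvmerge remux step #> }
}
```

One caveat with this approach: a step that was killed mid-write leaves a partial output file behind, so a robust version would write to a temporary name and rename on success.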