

I have a question that will take a bit to explain. I archive old animation as a hobby, and in my attempts to scale and make it a little clearer, I remembered a couple of ways to enhance images.

The first is a pretty widely available method used mainly by astronomers, called "stacking". It takes frames from a static video or burst shots and, using the slight differences between frames, reconstructs a cleaner, more accurate image. Here are some examples of software that do this:

I also remembered an old scientific paper I read a decade ago. At the time, it was extremely new tech, but I wondered if anything like it is available now. In the paper, they used stacking like astronomers do, but with a more sophisticated algorithm along the way. As you can see, it is used to make a previously non-legible license plate completely legible. There are even more advanced approaches now, but they seem maybe too new to have made it into affordable software:

While this is used on multiple frames to produce a single image, it made me think of how many videos there are out there of things like cartoon intros, from many sources. While there may not be any two identical frames in any one video of high-action animation, we do have that frame duplicated across many videos from many different VHS tape recordings. So the algorithm would have to stack these videos on each other, aligning the frames, and dig down through the different clips. What's especially interesting to me, and this may not be possible, is to scale these video clips, run this process, and maybe get something resembling an HD source. And if you take something like the intro to a show released on DVD, you have many, many very high quality copies that were captured using a standard method/scale. Of course it would have problems, maybe introduce artifacts, and wouldn't compare to the real deal source material, but it seems it would possibly be better than what we have now. The thing is, we have all the tools to do that last thing, but it'd be a highly manual process.

Has that 10-year-old advanced process made it into any software available now? Have either this, or the astronomy-style stacking, made it into any software that stacks videos and outputs a composite video, instead of using a video to output a single image?

I did an upscale test by stacking the only way I know how (which some have called "incorrect"), following this tutorial:

Here are the results of stacking the same frame across 4 different copies of the intro found on the Thundercats DVD, one of which was a good deal worse than the others:

I don't know what more advanced algorithms would do, or require. The minimum number Deep Sky Stacker talks about is ten.
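For anyone who wants to reproduce a test like this outside Deep Sky Stacker, the naive version of that stack is only a few lines of Python with OpenCV and NumPy. This is just a sketch of the basic idea, not what the tutorial or Deep Sky Stacker actually does internally; it assumes the same frame has already been exported from each copy at the same resolution and roughly aligned, and the file names are placeholders:

```python
# Naive per-pixel stack of the same frame exported from several copies.
# Assumes the exports are already the same size and roughly aligned.
import cv2
import numpy as np

paths = ["copy1.png", "copy2.png", "copy3.png", "copy4.png"]  # placeholder file names
frames = np.stack([cv2.imread(p).astype(np.float32) for p in paths])

mean_stack = frames.mean(axis=0)          # averages random tape noise down
median_stack = np.median(frames, axis=0)  # rejects dropouts that appear in only one copy

cv2.imwrite("stack_mean.png", mean_stack.astype(np.uint8))
cv2.imwrite("stack_median.png", median_stack.astype(np.uint8))
```

With only four copies the median is crude; more copies give the outlier rejection more to work with.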
If this doesn't exist, I'm not sure how to automate this process, as it requires alignment of the frames, and with different sources that might include resizing to correct for slightly different aspect ratios. It seems that this is more complicated than a simple AviSynth script, but maybe some software exists out there that I don't know about.
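On the automation question, here is a rough sketch of the align-then-stack step; it is an illustration, not a finished tool. The cubic resize to a common frame size stands in for the aspect-ratio correction, ECC estimates a small residual affine warp on top of that, and the file names, motion model, and iteration settings are all assumptions that would need tuning (and error handling) for real tape sources:

```python
# Align one exported frame from several different sources to a reference, then stack.
# Resizing to the reference dimensions stands in for aspect-ratio correction;
# ECC then estimates a small residual affine warp (shift/scale/rotation).
import cv2
import numpy as np

def load_resized(path, size):
    # size is (width, height) of the reference frame
    return cv2.resize(cv2.imread(path), size, interpolation=cv2.INTER_CUBIC)

def align_to_reference(ref_gray, img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # findTransformECC refines `warp` so that `gray` lines up with `ref_gray`;
    # it can fail to converge on very noisy frames, so real code needs a try/except.
    _, warp = cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_AFFINE, criteria)
    h, w = ref_gray.shape
    return cv2.warpAffine(img, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

paths = ["source_a.png", "source_b.png", "source_c.png", "source_d.png"]  # placeholders
reference = cv2.imread(paths[0])
ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
size = (reference.shape[1], reference.shape[0])

aligned = [reference.astype(np.float32)]
for p in paths[1:]:
    aligned.append(align_to_reference(ref_gray, load_resized(p, size)).astype(np.float32))

# Median keeps the detail most copies agree on and drops per-copy damage.
stacked = np.median(np.stack(aligned), axis=0).astype(np.uint8)
cv2.imwrite("stacked_cross_source.png", stacked)
```

Turning that into a composite video would mean repeating it per frame over copies trimmed to the same start point; the scripting is simple, but making the alignment robust across sources is likely the hard part.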
Thanks for any help anyone could give me.


As I recently mentioned in another thread, "super-resolution" can work, but there is always a compromise. In the case of SR, it trades an increase in resolution for a decrease in motion. In the specific case of film-based (24fps) cel animation with reduced motion (e.g. 12fps) recorded onto an NTSC medium, there is not the in-between motion (necessary to be removed, but not necessary for the intended motion impression).
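To make the reduced-motion point concrete, here is a quick sketch (my own illustration, not something from the thread above) that walks a captured clip and groups consecutive frames whose mean absolute difference stays under a threshold, so each unique drawing could be stacked from its repeats without touching the intended motion. The file name and threshold are placeholders, and a real NTSC capture would normally need deinterlacing or inverse telecine first:

```python
# Group consecutive near-duplicate frames of a reduced-motion clip so each unique
# drawing can be stacked from its repeats without smearing the intended motion.
import cv2
import numpy as np

cap = cv2.VideoCapture("intro_capture.avi")  # placeholder file name
groups = []                                  # each entry: frames showing one drawing
prev_gray = None
THRESHOLD = 4.0                              # mean absolute difference (0-255), needs tuning

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev_gray is None or float(np.mean(np.abs(gray - prev_gray))) > THRESHOLD:
        groups.append([])                    # enough change: a new drawing starts here
    groups[-1].append(frame.astype(np.float32))
    prev_gray = gray
cap.release()

# Median-stack the repeats of each drawing; animation shot "on twos" in a ~30fps
# capture should give at least two samples per drawing.
stacked = [np.median(np.stack(g), axis=0).astype(np.uint8) for g in groups]
print(f"{len(stacked)} unique drawings from {sum(len(g) for g in groups)} captured frames")
```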
