27th September 2007, 22:09   #61
tritical
The -2 was indeed a typo; it should have been -1.

Quote:
Originally Posted by IanB
So if I guess correctly, you classify and examine a stack of images to build a table of how "the world" works. To do the interpolation you assume the target image is a member of "the world" with every 2nd line missing. You then search your table for "a good/best match" and insert the missing lines based on that instance of experience. This should be extensible to general N-times upsizing by assuming N-1 of N lines are missing and need insertion.
Here is basically how it works. In the first stage a small ANN takes in some surrounding pixels and predicts whether or not cubic interpolation will be close to the true pixel value (basically a two-class classifier). If it thinks it will be, cubic is used. Otherwise, the point is matched to the closest of 64 cluster prototypes. The pixel is then fed to the ANN for that cluster prototype (there is a separate ANN for each cluster prototype), which predicts (outputs) the missing pixel value using surrounding pixels as input (more of them this time than for the first-stage classification). This way no single ANN has to learn to approximate the entire input->output mapping, only a small piece of it.

CMA-ES is used to find the weights for these ANNs (instead of a more usual training method like gradient descent, Levenberg-Marquardt, etc.). In the second stage, CMA-ES minimizes squared error.
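To make the two-stage flow concrete, here is a minimal NumPy sketch of the prediction path only. All of the names, window sizes, the one-hidden-layer net shape, and the score threshold are my own illustrative assumptions, not the actual implementation.

[code]
# Minimal sketch of the two-stage prediction, assuming one-hidden-layer nets
# and a 64-entry prototype table.  Names and window sizes are illustrative.
import numpy as np

def mlp_forward(weights, x):
    """Tiny one-hidden-layer net; weights = (W1, b1, w2, b2) found by CMA-ES."""
    W1, b1, w2, b2 = weights
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return float(w2 @ h + b2)     # single output value

def cubic_interp(col4):
    """Catmull-Rom cubic at the halfway point of the 4 vertically nearest pixels."""
    return (-col4[0] + 9.0 * col4[1] + 9.0 * col4[2] - col4[3]) / 16.0

def predict_pixel(col4, small_win, large_win, classifier, prototypes, cluster_nets):
    """
    col4         : the 4 vertically adjacent known pixels around the missing one
    small_win    : small flattened neighborhood for the stage-1 classifier
    large_win    : larger flattened neighborhood for the stage-2 predictor
    classifier   : weights of the stage-1 two-class ANN
    prototypes   : (64, len(large_win)) array of cluster prototype vectors
    cluster_nets : list of 64 weight tuples, one predictor ANN per prototype
    """
    # Stage 1: classifier says whether plain cubic will be close to the truth.
    if mlp_forward(classifier, small_win) > 0.0:
        return cubic_interp(col4)

    # Stage 2: pick the nearest of the 64 prototypes for this neighborhood...
    k = int(np.argmin(np.sum((prototypes - large_win) ** 2, axis=1)))
    # ...and let that cluster's dedicated ANN output the missing pixel value.
    return mlp_forward(cluster_nets[k], large_win)
[/code]

Training would then flatten each net's (W1, b1, w2, b2) into one parameter vector and let CMA-ES search over it, scoring candidates by the squared error between predicted and true pixel values on that cluster's training samples; that part is omitted here.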