A two-pass paletted pixel-scaling algorithm that uses weighted counting of adjacent colors and a fitness function (for tie-breaking) to create a 2x scale image.
This is not an efficient implementation, just a quick-and-dirty proof of concept, so right now it is mainly useful for offline rendering; with a few optimizations to create less temporary memory, it could be made pretty quick. In particular, the best_sample function creates a dictionary on every call, resulting in a lot of garbage. The algorithm could instead work directly on an indexed image, with the weight table as a fixed-length array the size of the image's color palette (16 colors, 256 colors, or whatever) that's shared between calls and just cleared before use; this should result in far fewer allocations. Somebody could also write it in a systems language like C++ or Rust instead of Python, which would help a lot and hopefully wouldn't be too bad to port.
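To make the indexed-image idea concrete, here is a rough sketch of what such a best_sample could look like. The best_sample_indexed name, the 256-entry palette, and the exact weight values are illustrative assumptions, not the attached code:

    PALETTE_SIZE = 256  # assumption: indexed image, palette of at most 256 colors
    weights = [0] * PALETTE_SIZE  # shared scratch table, reused across calls

    def best_sample_indexed(vert, horz, diag, pixel, diagonal_penalty=1):
        # Clear only the entries this call will touch; everything else is
        # already zero, so no full clear (and no allocation) is needed.
        for index in (vert, horz, diag, pixel):
            weights[index] = 0
        weights[vert] += 1
        weights[horz] += 1
        weights[pixel] += 2  # favor the source pixel on disagreement
        weights[diag] -= diagonal_penalty  # discourage excess staircasing
        # Highest weight wins; the fitness tie-breaker used by the full
        # algorithm is omitted here for brevity.
        return max((vert, horz, diag, pixel), key=lambda index: weights[index])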
Turns an image like this:
into something like this, 2x as big:
Personally I prefer nearest-neighbour most of the time for scaling game artwork, because it gives the cleanest/sharpest look, but I wanted this upscaling algorithm for the purpose of making larger sprites out of smaller ones, while maintaining most of the pixel integrity and the original color palette when scaling.
Here's an example of the algorithm in use in pico8 0.16:
These sample images are copyright (c) 2016 Andrew G. Crowell. All rights reserved.
However, I don't mind you using the code for your own projects. I've released the code under the MIT License. I've attached a Python 2.7 version (that uses PIL), and a pico8 0.16 version.
Inspired by the Scale2x algorithm, I tried out a new approach. Like Scale2x, it scales an image to twice the size without interpolation, but it uses a different way of determining the "best color" for each of the upscaled pixels, which can be thought of as a weighted statistical mode of the adjacent pixels for each corner.
For a source image of size w x h, a destination image of size 2w x 2h is created.
For each pixel in the source, we sample the pixel p and its neighbours:
[ ul ][ u ][ ur ]
[ l ][ p ][ r ]
[ dl ][ d ][ dr ]
A nearest-neighbour 2x scaling would output a 2x2 grid of pixels p, like this:
[p][p]
[p][p]
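In code, nearest-neighbour 2x is just the following (a sketch assuming src and dst are pixel-access objects for a w x h source and a 2w x 2h destination, e.g. from PIL's Image.load()):

    for y in range(h):
        for x in range(w):
            p = src[x, y]
            # Every source pixel becomes a 2x2 block of itself.
            dst[x * 2, y * 2] = p
            dst[x * 2 + 1, y * 2] = p
            dst[x * 2, y * 2 + 1] = p
            dst[x * 2 + 1, y * 2 + 1] = p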
On the other hand, zoom2x will output a 2x2 grid of pixels like this:
[e][f]
[g][h]
e = best_sample(u, l, ul, p)
f = best_sample(u, r, ur, p)
g = best_sample(d, l, dl, p)
h = best_sample(d, r, dr, p)
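Putting it together, the main loop looks roughly like this. This is a minimal PIL-based sketch of the structure (not the attached implementation verbatim), using the best_sample function described below; edge pixels are handled here by clamping coordinates to the image bounds:

    from PIL import Image

    def zoom2x(src_image):
        w, h = src_image.size
        dest_image = Image.new(src_image.mode, (w * 2, h * 2))
        src = src_image.load()
        dest = dest_image.load()

        def sample(x, y):
            # Clamp out-of-bounds neighbours to the nearest edge pixel.
            return src[min(max(x, 0), w - 1), min(max(y, 0), h - 1)]

        for y in range(h):
            for x in range(w):
                p = sample(x, y)
                u, d = sample(x, y - 1), sample(x, y + 1)
                l, r = sample(x - 1, y), sample(x + 1, y)
                ul, ur = sample(x - 1, y - 1), sample(x + 1, y - 1)
                dl, dr = sample(x - 1, y + 1), sample(x + 1, y + 1)
                dest[x * 2, y * 2] = best_sample(u, l, ul, p)
                dest[x * 2 + 1, y * 2] = best_sample(u, r, ur, p)
                dest[x * 2, y * 2 + 1] = best_sample(d, l, dl, p)
                dest[x * 2 + 1, y * 2 + 1] = best_sample(d, r, dr, p)
        return dest_image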
best_sample(vert, horz, diag, pixel) is the function responsible for weighting these samples.
e, f, g, and h are each a function of the source pixel, two orthogonally adjacent pixels, and one diagonally adjacent pixel.
It uses a combination of a weighted mode and a fitness function for breaking ties.
The weighting is such that common colors among (horz, vert, pixel) are favored, the source pixel is favored if there is a disagreement, and if a diagonal penalty is provided, diag subtracts from the weight table -- to prevent excess staircasing.
The fitness function for tie-breaking favors colors around middle brightness over extremely dark and extremely bright colors, because that seemed to give better results (it picks the color whose luminance (r + g + b) / 3 is closest to 127).
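Here's a sketch of best_sample along those lines, assuming colors are (r, g, b) tuples; the exact weight values are illustrative assumptions rather than the ones in the attached code, but the shape is the same:

    def fitness(color):
        # Tie-breaker: prefer colors whose luminance is closest to middle grey.
        r, g, b = color[:3]
        return -abs((r + g + b) / 3.0 - 127)

    def best_sample(vert, horz, diag, pixel, diagonal_penalty=1):
        weights = {}
        # Weighted mode: colors common among (vert, horz, pixel) accumulate weight.
        for color in (vert, horz, pixel):
            weights[color] = weights.get(color, 0) + 1
        weights[pixel] += 1  # favor the source pixel when the samples disagree
        # The diagonal subtracts from the weight table to prevent excess staircasing.
        weights[diag] = weights.get(diag, 0) - diagonal_penalty
        # Highest weight wins; fitness breaks ties.
        return max(weights, key=lambda color: (weights[color], fitness(color)))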
I am handwaving like crazy because really I just tried a bunch of crap until it worked, but yeah. Maybe it's useful to someone. Read the code for more details.
Feel free to make a newer, better, more efficient scaling algorithm if this inspires you and the other ones out there don't fill the void in your life for nicely preserved color/details when upscaling pixels. zzz