@silverhammermba
Created December 7, 2017 20:53
hackin on gamma
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
</head>
<body>
<article>
<h1>Yet another article about gamma correction</h1>
<style>
.gradstop {
height: 80px;
}
</style>
<h2 id="what-is-brightness">What is brightness?</h2>
<p>This boring gray square is much more interesting than it first appears.</p>
<canvas class="checkers" width="200" height="200"></canvas>
<p>In fact it isn’t gray at all; it’s a checkerboard of black and white
pixels. Depending on the sharpness of your eyes and screen you might
be able to easily pick out the individual squares of the
checkerboard. If that’s the case, do me a favor and move your head away
from the screen or squint a bit, because we’ll be using this image to
strike at the heart of a seriously tricky issue related to how
computers display color: brightness.</p>
<p>Let’s imagine how we perceive this square in terms of measurable light.
If the square were pure white, it would be emitting whatever the
maximum amount of light is for your screen (disregarding the
independent brightness setting of your screen). Let’s call this maximum
amount X. If the square were pure black, similarly it would be emitting
the minimum amount of light, which we’ll call Y. Intuitively, since
half of the pixels are white and half are black the overall light
emitted by the square will average out to be halfway between those two extremes:
(X+Y)/2.</p>
<p>But we can also think about this square in another way. It is common knowledge
that pixels store colors as [red, green, blue] triples where each color is an
integer ranging from 0 to 255, often referred to as 24-bit RGB. So our white
pixels are [255, 255, 255] triples and our black pixels are [0, 0, 0] triples.
Again, half are white and half are black so overall it averages out to look like
a square full of a single color. X=255, Y=0, (X+Y)/2=127 with rounding. So that
single color should be [127, 127, 127]. Right?</p>
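<p>That flawed arithmetic is easy to write down. Here’s a sketch of what a program reasoning this way would compute (the helper name is mine, purely for illustration):</p>

```javascript
// Naive reasoning: treat 0-255 channel values as if they were
// linear amounts of light and average them directly.
function naiveMix(a, b) {
  return Math.floor((a + b) / 2); // rounding down, as in the text
}

naiveMix(0, 255); // → 127, the "halfway" gray predicted above
```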
<p>Well let’s put those two side-by-side. The checkerboard from above on
the left, and that solid [127, 127, 127] on the right. By the above
logic, the two will look almost exactly the same.</p>
<canvas class="checkers" width="200" height="200"></canvas>
<canvas id="solid" width="200" height="200"></canvas>
<p>…WHAT!?</p>
<p>The reason why they don’t look the same is because of an incorrect
assumption in our second argument about 24-bit RGB. Namely, that 127 is
halfway between 0 and 255. Sure, it’s <em>mathematically</em> halfway
between 0 and 255 but we’re talking about <em>light intensity</em>, not
math. Think back to our original reasoning about brightness:
there is some maximum brightness X and minimum brightness Y. Certainly
255 should correspond with X and 0 should correspond with Y, but there
isn’t really a <em>need</em> for (0+255)/2 to correspond with (X+Y)/2.
Here’s another example to illustrate this point:</p>
<canvas id="compare" width="256" height="256"></canvas>
<p>In the middle we have two spectra from black to white. If you compare
these spectra with each side of the image, you’ll find that they most
closely match their respective sides right in the middle. Above we had
two arguments for why the checkerboard and the 127-gray should give us
a “middle” brightness, and here we see two different spectra where each
approach falls in the “middle” of a spectrum. It turns out that these
two spectra correspond with two different (but equally valid) ways of
thinking about brightness.</p>
<p>Notice how the spectrum on the right has many more dark shades than the
left one. The human eye is very good at distinguishing between dark
shades of color and not so good at distinguishing between bright ones;
this spectrum is based on “distinguishability”. The shades of gray on
this spectrum are chosen such that the distinguishability of brightness
remains (somewhat) constant throughout the spectrum. The spectrum on
the left certainly does not have this property. To me, at least, it has
a quick transition from black to gray at the top, then about two-thirds
of the way down it’s essentially white and I can barely tell the
difference from then on. This spectrum is based on measurable light
intensity. Again, the human eye is good at distinguishing between darks
and not so good at brights, and we see that in this spectrum: the
closer you look to the top of the spectrum, the more apparent
differences in brightness are.</p>
<p>So these are the two ways of thinking about brightness: perceived
brightness based on distinguishability to the human eye, and measurable
light intensity. For short, I’ll call these perceived brightness and
light intensity moving forward. From the previous examples, we learned
that 24-bit RGB is actually based on perceived brightness, because when
we look at the “middle” value of [127, 127, 127], we end up in the
middle of the perceived brightness spectrum.</p>
<h3 id="takeaway-1">Takeaway #1</h3>
<p>The intensity of a light source (or color) is <strong>not the same
thing</strong> as how bright we perceive that light (or color) to
be.</p>
<h2 id="gamma-and-srgb">Gamma and sRGB</h2>
<p>The relationship between these two ways of thinking about brightness is
well-studied. There is a function that will give you
the light intensity for a perceived brightness and vice versa. For the
purposes of this article, we don’t care what this function is exactly,
but in general this process of switching between perceived brightness
and light intensity is called gamma correction, and thus the function
is often called a gamma curve.</p>
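<p>For the curious, the sRGB standard’s gamma curve (which we’ll meet properly in a moment) is piecewise: a small linear segment near black and a power curve everywhere else. A sketch of both directions, with channel values normalized to the range 0–1:</p>

```javascript
// Decode: sRGB stored value (perceived brightness) -> linear light intensity.
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Encode: linear light intensity -> sRGB stored value.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// The "middle" stored value 127 is only about 21% light intensity.
srgbToLinear(127 / 255); // ≈ 0.212
```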
<p>What complicates the issue somewhat is that there are many
different gamma curves out there. Because in general gamma is a way of
converting between how a color is <em>stored</em> and how a color is
<em>displayed</em>. However for the purposes of this article we’re only
looking at images stored and displayed on computers, and on computers
there’s one ubiquitous standard: sRGB. I fudged
things earlier when I was talking about “24-bit RGB”. As we’ve just learned,
there’s a difference between perceived brightness and light intensity.
We humans obviously care about the former, but your screen (which has
to show you color by controlling sub-pixel intensities) only cares
about the latter. Since we’ve already established that 127-gray
corresponds to perceived color, the <em>only way</em> that your screen
can display that color is if your computer can convert from perceived
brightness to light intensity. So I should have said “24-bit
<strong>s</strong>RGB” i.e. RGB in terms of perceived brightness plus a
gamma curve for converting to light intensity.</p>
<p>Almost all computer images store colors using sRGB. Unless your image
format has a specific feature for specifying a different color space,
the default is sRGB. This goes for images taken by most digital
cameras, images in your browser, images in your favorite image editing
software, anything!</p>
<h3 id="takeaway-2">Takeaway #2</h3>
<p>When you hear <strong>gamma correction</strong>, think “Colors are
not being stored as light intensities and need to be converted.” When
you hear <strong>sRGB</strong>, think “Colors are being stored in terms
of perceived brightness and there is a standard gamma curve for
converting that to light intensity.”</p>
<h2 id="the-problem">The problem</h2>
<p>So far this article has been purely academic and I’ve been focusing
only on correcting your understanding of brightness. But there are
occasionally situations where <em>your computer</em> needs to
understand these different ways of thinking about brightness. And when
it understands them incorrectly, problems can occur.</p>
<p>Let’s look at our familiar checkerboard again, and then right next
to it I’ll have your web browser show the same checkerboard shrunk by
25%.</p>
<canvas class="checkers" width="200" height="200"></canvas>
<canvas class="checkers" id="scaled" width="200" height="200"></canvas>
<p>…WHAT!?</p>
<p>I told your browser to make it smaller, which it did, but it also made it
<strong>darker</strong>. Is this the correct behavior? Here’s a simple test:
keep looking at the larger checkerboard and back away from your screen
until it looks about half as big. Does it look darker? Nope.</p>
<p>What’s happening here is your browser is making the exact same
incorrect assumption we made in the first section. Half of the pixels
are black and half are white, so when we shrink the image the <em>light
intensities</em> blend together and we should end up with a solid
square with <em>light intensity</em> halfway between black and white:
(X+Y)/2. But your browser instead says black=0, white=255, so we get
(0+255)/2=127. It’s mixing up perceived brightness and light intensity!
<p>To understand why this is incorrect, think about this: when
colors blend together, they blend together <em>physically</em> i.e.
with groups of photons mixing together in the real world. The
checkerboard looks gray because the tiny amount of light from the
black pixels mixes with the much larger amount from the white
pixels. When you “shrink” the square by backing away from your screen,
you aren’t changing the way those photons are mixing, you’re only
changing the area of the eyeball they are hitting: same mixture, same
light intensity, less area. Or to put it more philosophically, colors exist
whether we perceive them or not, so the process of mixing colors should
be done independently of our perception.</p>
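<p>To make that concrete, here’s a sketch of mixing two pixel values the physically correct way using the standard sRGB transfer function: decode to light intensity, average, re-encode.</p>

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Average two 0-255 sRGB channel values as light, not as stored numbers.
function mixSrgb(a, b) {
  const light = (srgbToLinear(a / 255) + srgbToLinear(b / 255)) / 2;
  return Math.round(linearToSrgb(light) * 255);
}

mixSrgb(0, 255); // → 188: the checkerboard gray, much lighter than 127
```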
<p>This problem occurs because the smallest color your screen can display
is a single pixel, and the checkerboard is made of individual black and
white pixels. When I ask your browser to display those same pixels in a
quarter of the area, it’s forced to mix them together, thus I’m forcing
it to demonstrate its understanding of light intensity.</p>
<p>This problem crops up any time you ask your computer to blend
together colors stored in sRGB. For example, it’s well-known that
mixing together red and blue gives you purple, right? How does the browser do?</p>
<canvas id="gradient-test" width="255" height="120"></canvas>
<p>At the top I told the browser to draw a gradient from the most intense
red to the most intense blue. The dark, murky purple we get in the
middle of the gradient is also drawn on its own right below that. This
is the same result you get when you naively mix together sRGB color
values (which is what the browser is doing). On the bottom we have a
different gradient that I drew manually based on mixing light
intensities, and above that is the bright purple you get in the middle.</p>
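<p>You can check the midpoint of that gradient channel by channel. A sketch, with the same helpers as before (assuming the standard sRGB curve):</p>

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}
function mixSrgb(a, b) {
  const light = (srgbToLinear(a / 255) + srgbToLinear(b / 255)) / 2;
  return Math.round(linearToSrgb(light) * 255);
}

// Halfway between red [255, 0, 0] and blue [0, 0, 255]:
const naive   = [127, 0, 127];                         // dark, murky purple
const correct = [mixSrgb(255, 0), 0, mixSrgb(0, 255)]; // [188, 0, 188]: bright purple
```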
<p>Bright red + bright blue = dark purple? Ew, no. The bright purple on
bottom makes both intuitive and visual sense. This is the general
pattern when mishandling brightness: colors end up too dark.</p>
<p>In this article, I’m picking examples that clearly point out the error.
In real applications, with real digital images, the difference is
usually more subtle. And when you start getting into photo
manipulation where color distortion is <em>intended</em> in order to achieve a certain
look, it can be even harder to tell what is “correct”. But the simple
fact is still that your web browser, which was made by hundreds of
highly trained software developers, and is used by billions of people
every day, is mixing colors incorrectly. Which means it’s also</p>
<ul>
<li>Resizing images incorrectly</li>
<li>Blurring incorrectly</li>
<li>Blending semi-transparent layers incorrectly</li>
<li>Drawing gradients incorrectly</li>
</ul>
<p>Oh, and it’s not just your browser. Remember what I said earlier about
sRGB being a ubiquitous standard in almost every bit of computer
software? Yeah, they’re all doing these things wrong too. Well, almost
all of them. Software like Photoshop and ImageMagick are capable of
mixing colors correctly, but you must explicitly specify the sRGB gamma
correction; by default they will do it wrong.</p>
<h3 id="takeaway-3">Takeaway #3</h3>
<p>If you’re using software to mix colors together, chances are it’s doing it wrong.</p>
<h2 id="colors">Colors</h2>
<p>But it’s not even that simple. So far we’ve been mainly talking about
shades of gray; it all gets even more complicated when you consider the
RGB channels.</p>
<p>sRGB stores three separate numbers representing the perceived
brightness of red, green, and blue, respectively. All three numbers go
from 0 to 255 so we can represent the brightest red, green, or blue by
drawing a color that has 255 in one channel and 0 in the other two.
We do that below, but pay attention to how you perceive its brightness:</p>
<canvas id="rgb" width="100" height="100"></canvas>
<p>Which of the three appears brightest? Which appears darkest? If your eyes work
like most people’s, you perceive green as the brightest and blue as the darkest.
But this is despite them all having the same sRGB value of 255 and thus the same
light intensity! It’s our human eyes playing tricks on us again: we perceive the
brightness of light differently depending on its color.</p>
<p>One commonly accepted standard (on which sRGB is based) which quantifies this is
ITU-R Recommendation BT.709, which defines a formula for how bright a color is
based on its RGB light intensities:</p>
<p>L = 0.2126R + 0.7152G + 0.0722B</p>
<p>Notice how the green component is multiplied by a much larger number than the
other two. According to this standard, a pure green color is about 10 times
brighter than a pure blue one! But wait… what does that value L represent? Is
it measurable light intensity or perceived brightness? Confusingly, it’s kind of
a mix of both. First of all, this formula is pretty subjective: it’s supposed to
reflect the “average” human eye’s receptiveness to various colors of light. So
it will certainly be wrong for some observers (think color blindness or
tetrachromacy). So in that sense it’s perceived brightness. But notice that I
said it’s based on “RGB light intensities”, so its output is also in terms of
light intensity. Think about it this way: this formula takes an RGB color in
terms of light intensities and gives you the light intensity of the shade of
gray that has the same perceived brightness as that color.</p>
<p>It’s essentially a method of converting a color to black-and-white. When you
make an image black-and-white, intuitively you want to set R=G=B for every pixel
(so they are all shades of gray) while keeping the brightness of each pixel
intact. Notice that if R=G=B, then plugging into the formula you get also
L=R=G=B. So if you set red, green, and blue to L you get the desired
brightness-preserving black-and-white image. But this is in terms of light
intensity, so we have to convert from and to sRGB if we want to use this formula
with digital images. So the steps should be:</p>
<ol>
<li>Convert from perceived brightness sRGB to light intensity RGB</li>
<li>Plug that into the BT.709 formula</li>
<li>Convert the resulting light intensity back to perceived brightness</li>
<li>Use that one value for red, green, and blue in sRGB</li>
</ol>
<p>And guess what, again most software does not do this correctly. Many
naively average the sRGB red, green, and blue values (ignoring that
green has a stronger contribution to brightness) while others try to be
smart by using BT.709, but they plug in the sRGB values directly even
though the formula is not designed for that color space. You can see
the results below:</p>
<canvas id="baw" width="400" height="200"></canvas>
<p>The four bottom squares show the green color converted to gray using, from left
to right: sRGB average, sRGB “lightness” (another naive average of channels),
incorrect BT.709 using sRGB, and correct BT.709. As expected, the two sRGB
average-based results are far too dark since they don’t account for green being
perceived more brightly. The incorrect BT.709 formula is almost right, but again
is a little too dark because sRGB emphasizes darker shades. If you focus on the
border between the green and the gray, the last (correct) square is the least
distinguishable from the color in terms of brightness, indicating that it is a
good match for a black-and-white conversion.</p>
<p>With all of these examples of sRGB going bad, you might be thinking
that it’s a no-brainer to always gamma correct your colors first. But
there are other complications. For example, let’s repeat the image from the
beginning of the article comparing the light intensity method with perceived
brightness method but now with black/white replaced with red/blue on the left
and with red/yellow on the right.</p>
<canvas id="compare-rgbrb" width="256" height="256"></canvas>
<canvas id="compare-rgbry" width="256" height="256"></canvas>
<p>Recall how both of these images were made: the left side is a checkerboard
showing a mix of light intensities, then a gradient based on light intensities,
then a naive sRGB-based gradient, then a solid color resulting from a naive sRGB
average. Focus on the left image first: this is just like the red/blue gradient
comparison I did earlier. Again, the left side looks correct since it gives us
the intuitive bright red + bright blue = bright purple result. But now look at the
red/yellow image on the right.</p>
<p>Even with everything I’ve talked about in this article, my gut still tells me
that the right side of the red/yellow image using incorrect sRGB averaging looks
“correct”. But why? Why does the left side of the left image look correct while
the right side of the right image looks correct? Why doesn’t a single method of
color mixing always give us the right answer?</p>
<p>The difference is in the colors. Red and blue are separate color channels, so
when you do a gradient between them the red and blue channels are both changing
inversely with each other. The result is a mixture of different amounts of
red and blue together. However yellow is represented by RGB as a mixture of red and
green, so a gradient from red to yellow is a gradient from [255, 0, 0] to [255,
255, 0]: red stays constant while green varies from 0 to 255. When we perceive
the left image from top to bottom, the brightness is affected both by red
becoming less intense and blue becoming more intense. However when we perceive
the right image from top to bottom, we perceive <em>no</em> change in redness so the
<em>only</em> brightness change comes from green. And we’re back to where we
started: the human eye perceives darker differences better than brighter ones,
so the sRGB gradient spreads out those dark differences more and ends up giving
a color that really is <strong>perceptually halfway</strong> between red and yellow.</p>
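<p>You can see the disagreement in numbers too (same helpers as before): only the green channel varies between red and yellow, and the two mixing methods pick different midpoints.</p>

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}
function mixSrgb(a, b) {
  const light = (srgbToLinear(a / 255) + srgbToLinear(b / 255)) / 2;
  return Math.round(linearToSrgb(light) * 255);
}

// Halfway between red [255, 0, 0] and yellow [255, 255, 0]:
const naiveMid  = [255, 127, 0];             // the orange that *looks* halfway
const linearMid = [255, mixSrgb(0, 255), 0]; // [255, 188, 0]: halfway in light
```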
<p>This problem existed right from the beginning. Go back and look at the
black/white image from the start of the article. Sure, the left side illustrates
what color is halfway between black and white in terms of light intensities, but
which color actually <strong>looks</strong> like it’s halfway between black and white? The
right side. It <strong>has</strong> to be the right side because the whole point of sRGB is
to smooth out changes in perceived brightness.</p>
<p>The problem is that sRGB is working double duty. On the one hand, the real
purpose of sRGB is a kind of compression. There are infinitely many different
levels of brightness in the analog real world and we obviously can’t store them
all digitally. We have to choose some finite number of light levels: either 256
(per channel, as it is for 24-bit sRGB) or some other finite number.
We could choose those finite levels to be evenly spread out from the darkest
light intensity to the brightest but this would largely be a waste: we know that
our eyes are really bad at distinguishing between similar bright intensities.
sRGB fixes this by devoting more of those levels to the darks than the
brights meaning we can store the maximum number of <em>discernable differences in
brightness</em> within the limited number of levels we have available.</p>
<p>On the other hand, sRGB has also been co-opted as a user-friendly way of picking
and working with colors. Because sRGB smooths out perceptual differences in
light intensities, an sRGB gradient from dark to light matches our (naive)
expectations of what the dark to light transition should look like: halfway
through the gradient <em>looks</em> halfway between darkest and lightest even though
it’s more like 21% brightness in terms of light intensity. This works great if
you lock some channels together and change them in-sync because then the change
in color appearance as you tweak the numerical values matches a change in
perceived brightness. Want to make a shade of gray look half as bright? Halve
all of its RGB values! Want to make a shade of orange look three times as bright?
Triple its green channel! In these limited situations, the wrong behavior is
actually intuitive. The more channels change independently of each other, the
less we get this intuitive matchup and the more strange the sRGB result looks.
The best case for sRGB is all channels changing together (the original
black/white gradient) while the worst case is two channels inverting (our
red/blue example). I must emphasize that this isn’t a case of sRGB being more
<strong>correct</strong> in some cases; mixing colors in the sRGB space is <strong>always incorrect</strong>. This
is about sRGB being more <strong>intuitive</strong> in situations where the naive
assumption that perceived brightness and light intensity are the same holds.</p>
<p>Go back and read the beginning of the article if you have to. Perceived
brightness ≠ light intensity.</p>
<p>So why does mishandling of sRGB color persist? In my opinion, it’s a perfect mix
of ignorant programmers, uneducated users, and the status quo. Many
programmers don’t know all of this stuff about sRGB and still think
that perceived brightness and light intensity are the same. They write image
editing software and web browsers that mishandle sRGB values. Users of this
software also don’t understand this distinction and thus think the software is
working correctly when it draws gradients or mixes colors together. They are
incorrectly unsurprised when simple operations that should be only mixing
together colors end up making their images darker. And lastly, these errors are
so common in essentially all color-handling software out there that we’re used
to them. Designers rely on them when picking colors and they expect them when
manipulating images. I’m sure that many developers would consider correct sRGB
handling to be a regression at this point, due to how it would upset their
users’ expectations.</p>
<h3 id="takeaway-4">Takeaway #4</h3>
<p>sRGB can be a useful tool when changes in color correspond with changes in
perceived brightness, for example when changing a color one channel at a time.
In those cases sRGB will match with the layman’s intuition.</p>
<p>But that doesn’t make it correct to manipulate colors in sRGB without performing
gamma correction.</p>
<h2 id="try-it-for-yourself">Try it for yourself</h2>
<p>Here’s an interactive gradient with the light intensity method on top and the
incorrect sRGB method on bottom. You can play around with it to get a feel for
how the two differ.</p>
<div>
<input id="grad1" class="gradstop" type="color" value="#000000" oninput="update_interactive()" />
<canvas id="interactive" height="80" width="256"></canvas>
<input id="grad2" class="gradstop" type="color" value="#ffffff" oninput="update_interactive()" />
</div>
<p>If you want to try out correct color manipulation, a great way to get started is
with <a href="https://www.imagemagick.org">ImageMagick</a>. If you want to use
your preferred image manipulation software, you’re going to have to look up
specific instructions for it elsewhere.</p>
<p>For most ImageMagick commands, you can get it to do the right thing by prefixing
your operations with <code class="highlighter-rouge">-colorspace RGB</code> to perform the gamma correction and then
preceding your final output with <code class="highlighter-rouge">-colorspace sRGB</code> to undo it for saving. For
example, this works for both resizing and blurs:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>convert inputfile.png -colorspace RGB -resize 800 -colorspace sRGB outputfile.png
convert inputfile.png -colorspace RGB -gaussian-blur 0x8 -colorspace sRGB outputfile.png
</code></pre></div></div>
<p>Gradient generation is a little different because ImageMagick creates all
gradients in sRGB space. Instead you have to force it to reinterpret the image
as light-intensity RGB (without performing gamma correction) so that the final
<code class="highlighter-rouge">-colorspace sRGB</code> gamma encodes it correctly:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>convert -size 200x200 gradient:red-blue -set colorspace RGB -colorspace sRGB outputfile.png
</code></pre></div></div>
<p>Converting an image to black and white is also different. If you specify the
BT.709 formula it does the gamma correction for you, but the output result is
light intensity so you still need the sRGB conversion at the end:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>convert inputfile.png -grayscale rec709luminance -colorspace sRGB outputfile.png
</code></pre></div></div>
<p><strong>Note:</strong> ImageMagick also provides the <strong>incorrect</strong> implementation of the
formula, which directly plugs in the sRGB values without gamma
correction (they call this <code class="highlighter-rouge">rec709luma</code>). Unfortunately, that is the formula used
by their default “gray” colorspace, and it’s also the one recommended in their
documentation.</p>
<h2 id="exceptions">Exceptions</h2>
<p>Annoyingly, there are exceptional cases.</p>
<p>The first and probably less important one is color inversion. This is a pretty
weird, uncommon operation but it’s actually what led me down this rabbit hole in
the first place. Obviously when we invert an image we want the colors to…
invert. Things that were very red before should be very not red after. Black
should become white, etc. But with what we know about sRGB, gamma correction
will actually hurt us here. Let’s do an example. We start with a black/white
sRGB gradient which gives us a smooth transition between black and white where
the halfway point is perceptually halfway between black and white. What will
happen if we gamma correct this gradient, do the inversion in terms of light
intensity and then convert back to sRGB? Look at the result below:</p>
<canvas id="invert" width="256" height="80"></canvas>
<p>It looks like a light intensity-based gradient! Why is this bad? Well remember
that the dark colors are where the human eye best perceives details. We’ve taken
a gradient that had a perceptually smooth transition and—by inverting
it in terms of light intensity—crammed all of the details into what were
the very brightest and least-distinguishable colors before. And all of the
darker shades in the lower half which used to be easily distinguishable are now
nearly-identical white! We intended to just invert the colors, but we ended up
inverting the details as well!</p>
<p>The trouble is that unlike color mixing, blurring, or image resizing, color
inversion doesn’t really have a real-world analogue so there’s no ground truth to
compare to. The most reasonable result I can think of is one that inverts colors
while maintaining perceptual differences between them so that details don’t get
lost/added as a side-effect. Since sRGB is perception-based, the naive inversion
of the sRGB channels actually does a pretty good job. The gradient gets flipped,
so the perceptual differences are maintained:</p>
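<p>In code, this naive inversion is nothing more than a per-channel flip of the stored values, with no gamma correction anywhere:</p>

```javascript
// Invert a color in sRGB space: flip each stored channel value.
function invertSrgb([r, g, b]) {
  return [255 - r, 255 - g, 255 - b];
}

invertSrgb([0, 0, 0]);     // → [255, 255, 255]: black becomes white
invertSrgb([200, 50, 50]); // → [55, 205, 205]: a red becomes a cyan
```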
<canvas id="invertrgb" width="256" height="80"></canvas>
<p>The only improvement I can think to make is that this naive inversion doesn’t
take into account the BT.709 formula. I can imagine it being desirable that
inverting a color also inverts its perceived brightness; the simple sRGB
inversion can’t do this because it treats each color channel equally (it works
for this simple black/white gradient, but not in general). But off
the top of my head I can’t think of any easy way to do this, and I imagine it
could actually be pretty expensive so the sRGB approach is a good compromise for
now.</p>
<p>The other, more tricky exception is font rendering. Fonts employ a number
of tricks to maintain readability at small sizes. The simplest trick is
anti-aliasing, which the font rendering engine applies to edges at small sizes
to keep them looking smooth. The engine might also mess with pixel and sub-pixel
alignment so that the edges of letters align with the pixel grid and are thus clearer. Font designers can
also design their glyphs to actually change shape at smaller sizes, for example
removing artistic flourishes that would hamper readability.</p>
<p>In one sense this does have a real world analogue. A small font is kind of like
a shrunken font, so in that sense it should use gamma correction just like image
resizing. But the goal of font rendering is to get a readable result, not a
real-world light-accurate one. If the inaccurate result is more readable it’s
the right result. What complicates this is that some font rendering engines
do perform gamma correction when doing anti-aliasing even at large font sizes
where readability is not an issue. But if font designers design their fonts
around these incorrect engines, they might be relying on that incorrect color
blending at the small sizes to improve readability. So you can’t fix the larger
case (which would make edges look smoother and more realistic) without breaking
the smaller case (making small fonts less readable).</p>
<h2 id="further-reading">Further reading</h2>
<p>Here are links to where I did most of my research:</p>
<p><a href="http://www.ericbrasseur.org/gamma.html">Gamma error in picture scaling</a>. A very extensive article with a focus on
image resizing. Good examples of real world images with noticeable distortion due
to incorrect color space handling.</p>
<p><a href="https://www.youtube.com/watch?v=LKnqECcg6Gw">Computer Color is Broken</a>. A nicely animated explanation with a focus on
image blurring.</p>
<p><a href="http://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/">What every coder should know about gamma</a>. Really thorough article with even
more examples than I have here.</p>
<p><a href="http://www.imagemagick.org/Usage/resize/#resize_colorspace">Resizing with Colorspace Correction</a>. The ImageMagick manual’s section on
correctly resizing images, with some notes about other linear color spaces.</p>
<p>Lastly, if you’re interested in how the math works, you can read the source for
this page in your browser. All of the images in this article are procedurally
generated in javascript so you can see how I do the gamma-correct gradients and
such.</p>
<h2 id="for-the-pedants">For the pedants</h2>
<p>I fudged a lot of stuff in this article. It kind of drives me crazy how
widespread this issue is and yet how little attention it gets, so I simplified
things a bit to get the point across quicker. But if you really care about
technical details, read on.</p>
<p>If you’re reading this article on a high-DPI screen such as a smartphone or a
newish MacBook, I lied to you in all of those checkerboard example images. The
problem with high-DPI is that if you display an image as-is on a high-DPI screen
it ends up <em>tiny</em> because the pixels are smaller. To compensate, these devices
scale up images to match their size on a “normal” screen. Unfortunately most
normal image resizing algorithms are not designed to handle these kinds of
checkerboard patterns (called <a href="https://en.wikipedia.org/wiki/Dither">dither</a>) so you end up with a distorted
result. This isn’t a gamma issue; it’s a limitation any algorithm that tries to
enlarge dithered images. To compensate, I replaced black with a much lighter
shade on high-DPI devices so that you get the right light intensity for the
comparisons. You might also notice some weird patterns in the checkerboard
images (especially the shrunken one). That’s also not really a gamma correction
issue, it’s just a <a href="https://en.wikipedia.org/wiki/Moir%C3%A9_pattern">moiré patten</a> likely due to how the display
scaling interacts with the HTML canvas scaling and the checkerboard.</p>
<p>“Light intensity” and “perceived brightness” are not the correct terms for this
stuff. Usually people describe sRGB as a “nonlinear RGB” color space, and when
you gamma correct sRGB to get light intensities, that’s “linear RGB”. I find
these terms a little vague because they implicitly refer to light
intensities: sRGB has nonlinear light intensity, and when you gamma correct it
you (obviously) get linear light intensity, because that’s what gamma correction
is. But if you think about it in terms of perceived brightness, then sRGB is the
linear one and “linear RGB” is now nonlinear! I find that thinking linearly
is more intuitive, so I named both of them by the context in which they are
linear: sRGB is linear in the perceptual space, “linear RGB” is linear in the
light intensity space, and each is nonlinear when viewed from the
perspective of the other.</p>
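<p>To make the two directions concrete, here is a standalone sketch of the sRGB
transfer functions (the same math as the script at the bottom of this page).
Note that sRGB 0.5, a perceptual mid gray, corresponds to only about 21% light
intensity:</p>

```javascript
// sRGB decoding: nonlinear (perceptual) 0-1 -> linear light intensity 0-1
function gamma2linear(comp) {
	if (comp <= 0.04045) return comp / 12.92;
	return Math.pow((comp + 0.055) / 1.055, 2.4);
}
// sRGB encoding: linear light intensity 0-1 -> nonlinear 0-1
function linear2gamma(comp) {
	if (comp <= 0.0031308) return comp * 12.92;
	return 1.055 * Math.pow(comp, 1 / 2.4) - 0.055;
}
// a perceptual mid gray is only ~21% of the maximum light intensity
console.log(gamma2linear(0.5)); // ~0.214
// the two functions are inverses of each other
console.log(linear2gamma(gamma2linear(0.5))); // ~0.5
```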
<p>There is a large class of software that does color blending right: graphics
drivers. The code that runs on graphics processors to compute colors works in
a linear RGB color space, so blends come out correctly. It’s still possible to
mess it up, though: when loading an sRGB image onto the graphics card you
still need to tell the driver to do the gamma correction. Otherwise it will load
the sRGB values directly and end up doing linear operations on nonlinear
values!</p>
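<p>That mistake is easy to reproduce on the CPU too. Here’s a minimal sketch
(<code>blendWrong</code> and <code>blendRight</code> are just illustrative names,
and the conversion functions are the same math as this page’s script): averaging
pure black and pure white directly on the sRGB values gives the too-dark naive
gray, while averaging the actual light intensities and re-encoding gives about
188, the gray that really matches the checkerboard:</p>

```javascript
// sRGB transfer functions (same math as this page's script)
function gamma2linear(c) {
	return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linear2gamma(c) {
	return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
// naive blend: treats 0-255 sRGB values as if they were light intensities
function blendWrong(a, b) {
	return Math.round((a + b) / 2);
}
// correct blend: decode to light intensity, average, re-encode
function blendRight(a, b) {
	var mixed = (gamma2linear(a / 255) + gamma2linear(b / 255)) / 2;
	return Math.round(linear2gamma(mixed) * 255);
}
console.log(blendWrong(0, 255)); // 128: the naive "50% gray"
console.log(blendRight(0, 255)); // 188: visually matches the checkerboard
```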
<p>You could argue that sRGB wasn’t designed as a compression scheme; it actually
has to do with CRT voltages, and the whole perceived-brightness thing was a happy
accident. But I believe that coincidence is why it has stuck around so long and
why this gamma correction issue is so hard to teach people about. So it might as
well be the reason for its existence.</p>
<script>
"use strict";
// fill canvases pixel-by-pixel
function procedural_canvas(selector, callback) {
var ratio = window.devicePixelRatio;
var canvases = document.querySelectorAll(selector);
for (var k = 0; k < canvases.length; ++k) {
var canvas = canvases[k];
var width = canvas.width;
var height = canvas.height;
var ctx = canvas.getContext('2d');
var div;
if (ratio > 1) {
div = document.createElement("div");
div.style.width = width + "px";
div.style.height = height + "px";
canvas.width *= ratio;
canvas.height *= ratio;
width = canvas.width;
height = canvas.height;
}
var image = new ImageData(width, height);
for (var y = 0; y < height; ++y) {
for (var x = 0; x < width; ++x) {
var color = callback(y, x, height, width);
for (var c = 0; c < 4; ++c) image.data[(y * width + x) * 4 + c] = color[c];
}
}
ctx.putImageData(image, 0, 0);
if (ratio > 1) {
div.style.background = "url(" + canvas.toDataURL("image/png") + ")";
div.style.backgroundSize = "cover";
canvas.parentNode.insertBefore(div, canvas);
canvas.style.display = "none";
}
}
}
var alpha = 0.055;
// nonlinear 0-1 to linear 0-1
function gamma2linear(comp) {
if (comp < 0.04045) return comp / 12.92;
return Math.pow((comp + alpha) / (1 + alpha), 2.4);
}
// linear 0-1 to nonlinear 0-1
function linear2gamma(comp) {
if (comp <= 0.0031308) return comp * 12.92;
return (1 + alpha) * Math.pow(comp, 1 / 2.4) - alpha;
}
function clamp(min, x, max) {
return Math.min(Math.max(x, min), max);
}
// 0-1 to 0-255 integer
function to8bit(x) {
return clamp(0, Math.round(x * 255), 255);
}
function hextonum(x) {
return [parseInt(x.slice(1, 3), 16) / 255, parseInt(x.slice(3, 5), 16) / 255, parseInt(x.slice(5, 7), 16) / 255];
}
function update_interactive() {
var grad1 = hextonum(document.getElementById('grad1').value);
var grad2 = hextonum(document.getElementById('grad2').value);
var grad1l = grad1.map(gamma2linear);
var grad2l = grad2.map(gamma2linear);
procedural_canvas('#interactive', function(y, x, h, w) {
var p = x / w;
var g1 = grad1;
var g2 = grad2;
if (y < h / 2) {
g1 = grad1l;
g2 = grad2l;
}
var color = [];
for (var c = 0; c < 3; ++c) {
color[c] = g1[c] * (1 - p) + g2[c] * p;
}
if (y < h / 2) {
for (var c = 0; c < 3; ++c) {
color[c] = linear2gamma(color[c]);
}
}
return [to8bit(color[0]), to8bit(color[1]), to8bit(color[2]), 255];
});
}
update_interactive();
/* on high-DPI displays, images are pre-scaled which ruins the checkerboards.
* Using a value of 118 for black gives the checkerboard the correct final
* brightness.
* XXX this was found through trial-and-error on a device with pixel ratio 2. On
* devices with a different ratio this will probably be wrong, but we'd need
* to know the scaling algorithm to do it programmatically
*/
function black() {
if (window.devicePixelRatio > 1) return 118;
return 0;
}
procedural_canvas('.checkers', function(y, x) {
var color = (y + x) % 2 ? black() : 255;
return [color, color, color, 255];
});
var scaled = document.getElementById('scaled').previousSibling;
scaled.style.width = (parseInt(scaled.style.width) / 2) + "px";
scaled.style.height = (parseInt(scaled.style.height) / 2) + "px";
procedural_canvas('#solid', function() {
return [127, 127, 127, 255];
});
procedural_canvas('#compare', function(y, x, h, w) {
var edgesize = w / 4;
if (x <= edgesize) {
var color = (y + x) % 2 ? black() : 255;
return [color, color, color, 255];
}
if (x > edgesize && x <= w / 2) {
var color = to8bit(linear2gamma(y / (h - 1)));
return [color, color, color, 255];
}
if (x > w / 2 && x < w - edgesize) {
var c = to8bit(y / (h - 1));
return [c, c, c, 255];
}
return [127, 127, 127, 255];
});
procedural_canvas('#gradient-test', function(y, x, h, w) {
if (y > 3 * h / 4) {
return [to8bit(linear2gamma(1 - x / w)), 0, to8bit(linear2gamma(x / w)), 255];
}
if (y < h / 4) {
var c = to8bit(x / (w - 1));
return [255 - c, 0, c, 255];
}
if (x < w / 3) return [255, 0, 0, 255];
if (x > 2 * w / 3) return [0, 0, 255, 255];
if (y < h / 2) return [127, 0, 127, 255];
var c = to8bit(linear2gamma(0.5));
return [c, 0, c, 255];
});
procedural_canvas('#rgb', function(y, x, h, w) {
x = x - w / 2;
y = h / 2 - y;
if (x * x + y * y > w * w / 4) return [0, 0, 0, 0];
var theta = Math.atan2(y, x);
if (theta < -Math.PI/3) return [255, 0, 0, 255];
if (theta < Math.PI/3) return [0, 255, 0, 255];
return [0, 0, 255, 255];
});
procedural_canvas("#baw", function(y, x, h, w) {
var color = [28, 130, 60];
if (y < h / 2) return [color[0], color[1], color[2], 255];
var l;
if (x < w / 4) {
l = Math.round((color[0] + color[1] + color[2]) / 3);
}
else if (x < w / 2) {
l = (Math.max.apply(this, color) + Math.min.apply(this, color)) / 2;
}
else if (x < 3 * w / 4) {
l = Math.round(color[0] * 0.2126 + color[1] * 0.7152 + color[2] * 0.0722);
}
else {
l = to8bit(linear2gamma(gamma2linear(color[0] / 255) * 0.2126 + gamma2linear(color[1] / 255) * 0.7152 + gamma2linear(color[2] / 255) * 0.0722));
}
return [l, l, l, 255];
});
procedural_canvas('#compare-rgbry', function(y, x, h, w) {
var edgesize = w / 4;
if (x <= edgesize) {
var color = (y + x) % 2 ? black() : 255;
return [255, color, 0, 255];
}
if (x > edgesize && x <= w / 2) {
var color = to8bit(linear2gamma(y / (h - 1)));
return [255, color, 0, 255];
}
if (x > w / 2 && x < w - edgesize) {
var c = to8bit(y / (h - 1));
return [255, c, 0, 255];
}
return [255, 127, 0, 255];
});
procedural_canvas('#compare-rgbrb', function(y, x, h, w) {
var edgesize = w / 4;
if (x <= edgesize) {
if ((y + x) % 2) return [255, 0, black(), 255];
return [black(), 0, 255, 255];
}
if (x > edgesize && x <= w / 2) {
var c1 = to8bit(linear2gamma(y / (h - 1)));
var c2 = to8bit(linear2gamma(1 - y / (h - 1)));
return [c2, 0, c1, 255];
}
if (x > w / 2 && x < w - edgesize) {
var c = to8bit(y / (h - 1));
return [255 - c, 0, c, 255];
}
return [127, 0, 127, 255];
});
procedural_canvas('#invert', function(y, x, h, w) {
if (y < h / 2) {
var c = to8bit(x / (w - 1));
return [c, c, c, 255];
}
var c = to8bit(linear2gamma(1 - gamma2linear(x / (w - 1))));
return [c, c, c, 255];
});
procedural_canvas('#invertrgb', function(y, x, h, w) {
var c = to8bit(x / (w - 1));
if (y < h / 2) return [c, c, c, 255];
return [255 - c, 255 - c, 255 - c, 255];
});
</script>
</article>
</body>
</html>