Wednesday, March 27, 2013

Facebook statistics and a mid-season clip-show OR Obligatory self-promotional ramblings

Given that we just passed the seven-month mark of doing this weekly, I thought this was as good a time as any to look back on some of the statistics that get tossed out of the behind-the-scenes end of this whole process - particularly, some of the statistics that Blogger and facebook give me.

I know that some of you follow the updates of this blog on facebook (by the way, I don't ever capitalize facebook - the fact that it's capitalized in the title is because of title formatting, nothing else), as I've had several of you mention that you missed a post because you didn't see it pop up in your news feed (or live feed, or news live feed, or timeline, or timeline feed, or timeline news feed, or livetime news feed, or linetime livefeed news ticker, or wall).  

If you've never had a 'non-regular' page on facebook, you might not have a good feel for the statistics that show up on my end when I go and look at things. 

By the way, before we go too much further I should say that the page in question is here:

https://www.facebook.com/TheSkepticalStatistician

though even in a post specifically about it I'd be remiss not to mention that I actually hate the facebook 'like' page process and a lot of how it's utilized.  That said, if you want to use facebook as a means of keeping up with things, then go for it.  If you have your own qualms about it - as I do - well, there are other (potentially better) ways.  I don't 'dislike' you for not 'like'ing the page. 

I'm sorry, this post isn't supposed to be facebook bashing, it's supposed to be talking about the stats that facebook gives me about your actions.  So let's start with a picture and go from there:


  

So one of the big things that I can see whenever I post something through the facebook page is how many people have 'seen' that post.  You can 'see' it in this image, though don't stare at it too long waiting for it to go up - it's just a picture.  See, you're not really 'seeing' it.  Facebook knows when you 'see' things, unless you're 'seeing' it some other way.  Well, at least at the moment.  They're probably 'working' on that.

I'm not sure exactly what facebook means by 'see' (or rather 'saw'), but I would imagine that they're assuming that if it comes across your screen then you've looked at it.  There are some parts of the internets that I'd advise browsing with your eyes closed, but facebook isn't one of them (if you're doing it right, at least).  

The number of people who 'see' any given post on facebook is actually quite interesting.  It's fairly consistent with the number of people who 'like' the page, but it can be lower if the post doesn't come across your wall, or higher if something happens that facebook measures with 'virality'.  Unfortunately I don't have an easy way to map the number of people who 'like'd the page at any given time (I said easy), so it's hard to show how those numbers change over time.

Time of day of posting seems to have some impact, though I'm also not really going to sit around and do the math on that - and you know the kinds of things that I sit around doing math on.  I'm not looking to take advantage of you to that degree.

There are some other 'admin' tools that facebook gives me to look at things, which give a picture of some of the demographics of the people who like the page, etc.  Most of them are actually pretty stupid, but if you're curious here's what some of them look like:



The thing that should really stand out to you is the way that facebook markets to the people who are one step up the 'commoditizing people' scale (e.g. me).  You can see that there are 56 people who 'like' the facebook page, but the number that facebook wants to shove in my face is 22,835 - the number of friends that all of you have.  Fifty-six, multiplied out by some factor of hundreds. 

You may have noticed the 'promote' thing in earlier pictures - it's there because, by throwing a little bit of money at facebook, you can actually buy access to those 22,835 people.  That's what facebook is trying to commoditize - and they're doing it really well.

Yes, by liking this page you give me access to buy rights to your friends.  Re-read that.  Seriously, re-read that.  I'll wait.

Now, I'm not a total jerk (and I'm also cheap) so I'm not going to do that.  But all of facebook's stats make you feel bad if you don't.  All of facebook's methods are set up so that you do.

One of the ways they do this (assuming all of you haven't immediately un'like'd the page now that you've 'seen' how the sausage is made, so to speak) is through another thing you'll notice facebook talks about: 'virality.'  This is how things spread beyond just your 'likes' into other areas - friends of 'friends' and beyond.  Now, I've just told you that you can buy access to this network, and that I don't.  So how do I go about 'achieving' 'virality'?

Well, by you telling other people about posts, and especially by you 'sharing' the posts that you like with your friends (so I don't have to pay to do it myself and end up sharing the posts that everyone hates).

The idea would be that your friends would then 'share' things with their friends, etc, etc.  Your friends are one step further away from me, and their friends even more so - I don't know them, right?  Let's talk about your friends' friends' friends.  And after we're done talking about them, let's commoditize them.  Yeeeeeeeeeeeees.  Too early?  Okay, let's continue.

In any case, I think the best information comes out of the number of views of posts and the actual page views from Blogger relating to any given post (which don't take into account people who simply check the main page when it contains new content).  We can toss actual views of specific posts on a graph along with facebook 'sees' as well as more specific facebook 'likes' and 'shares' - this is what it looks like:



You can see that the Star Wars posts seemed to drive a somewhat lagged boost in views, and before you ask, yes I am working on the final post on the Star Wars stuff.

You can also see that the facebook stuff is pretty robustly indifferent to actual change.  No matter how many people 'see' or 'like' or 'share' or 'view' or 'whatever', it doesn't seem to have much bearing on how many people actually view the post itself.  Throughput, if you will. 

There are other ways that people keep up with the blog, from what I'm told.  Some people have talked about RSS, and some have complained about it.  I think it works?  Blogger also lets you subscribe by email, though I've never tried it myself.  I imagine it would be pretty nice.

Twitter is also always an option, as they seem to care a whole lot less about money than facebook does.  If you happen to have an account feel free to follow @skepticalstats - I'd talk more about it, but Twitter gives me far fewer stats (i.e. none) than facebook does.  Again, probably because I'm not elevated to a separate tier on Twitter the same way I am on facebook. 

Before I started using facebook (and thus before I have stats on the above graph), I was using Twitter for this stuff and having some pretty good results - the fact that the graph starts a little higher and trends down is because views were still elevated from some of The Price is Right posts that got spread around Twitter a bit.

But if you start using Twitter - whatever you do - don't get @skepticalstats confused with @ellipticalcats.

Oh, and tell your friends, so that I don't have to.

Wednesday, March 20, 2013

The Madness of March

Love it or hate it, March Madness is here.  Whether you've filled out a bracket (or five), or have no intention of ever bracketing anything (or have no idea what March Madness is), the annual college basketball tournament showdown that takes place every March gives us an interesting opportunity to talk about the probabilities at play.



Let's run through it really quick for those who are scratching their heads.  College basketball is a thing.  Play up to this point in the season has shown that some teams are doing better than others.  Teams can thus be rank ordered.  Best teams play worst teams, middle teams play middle teams, and teams that win move to the next level of the bracket.  Teams that lose are out.  The winner of the whole thing wins college basketball (basically).

You like the idea of brackets but still hate college basketball.  I get it.  Maybe you wish I was just talking about Star Wars again?  PROBLEM SOLVED:


By the way, is anyone but Vader ever going to win something like that?  Prove me wrong, Internets, prove me wrong.  

Basketball.  For what it's worth, a lot more people are going to fill out basketball brackets than Star Wars brackets this year.  How many more people?  Well, it's hard to find accurate numbers, but from what I've seen it seems a pretty safe claim that, at the low end, several million to tens of millions of brackets get filled out each year.

Has anyone ever filled out a perfect bracket?  Nope.  The odds are pretty rough against you, as the cascading contingencies get fairly complex.  We could spend the rest of this week just talking about calculating the odds of a perfect bracket (1 in like a lot), but that's not what I want to focus on. 
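Just to put a number on 'a lot': here's a quick back-of-the-envelope sketch in Python.  It assumes (very wrongly) that all 63 games in a 64-team bracket are 50/50 coin flips - real games aren't coin flips, but the scale is the point.

    # A 64-team bracket settles 63 games.  If each were a coin flip,
    # a perfect bracket would be one outcome out of 2**63.
    games = 63
    outcomes = 2 ** games
    print(f"1 in {outcomes:,}")  # 1 in 9,223,372,036,854,775,808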

I'm also not going to tell you how to pick a perfect bracket (if I knew I probably wouldn't tell you, and I'd be a lot richer), nor which team is going to be the best upset, etc.  

What I want to take a look at is what the system of brackets is really manifesting.  In fact, when we get down to the base of it, it's a lot like a lottery, or horse racing (which I've already talked about here).  Sorry if this is starting to feel like a clip-show.

What makes it more like horse racing than a lottery (at least a random lottery) is the odds that are present in each match-up.  Until you get to the final four (and even then, if you count certain ranking groups), every match-up has one team ranked by the tournament as the favorite and the other as the underdog.  If you know anything about statistics (and nothing about basketball), that means that the safe bet is always the more probable outcome.  

At the horse races, there is a favorite in every race.  The problem is that the odds you're offered for betting on the favorite are not good.  In theory, those odds could sometimes drop below 1:1 - for every dollar bet you might get less than a dollar back on a winning bet.

Some of you might not follow horse racing, so let's go back to a lottery and just fix it (figuratively and literally) so that it's a little less random.  

For a game like pick 3, there are three boxes full of balls with the numbers 0-9 on them.  A ball is drawn from each of these pools in order, resulting in a three digit number that is the winner (i.e. any number between 000 and 999).  Every number (e.g. 333, 148, 042, 611, 500, 007, 554) has equal odds of being drawn, despite (incorrect) perceptions of dependence between draws (e.g. thinking something like 444 is less likely than something like 435).  

So, let's rig this lotto.  Let's rig it good.  

Instead of there being 10 balls in these draws, let's make things a bit more interesting.  Let's put in as many balls of each number in each draw as...well, that number.  That means 3 balls that say '3', 5 balls that say '5', 9 balls that say '9' - and zero balls that say '0'.  We've just made a draw of '999' a lot more likely than a draw of '111'.  We've also made a draw of '000' or '990' impossible, oddly enough.  
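Those rigged odds are simple enough to put into a few lines of Python - a minimal sketch, with the ball counts exactly as described above:

    # Rigged pick-3: each pool holds n balls labeled n, for n = 0 through 9.
    # There are zero balls labeled '0', so any number containing a 0 is out.
    balls = {n: n for n in range(10)}   # {0: 0, 1: 1, ..., 9: 9}
    total = sum(balls.values())         # 45 balls in each pool

    def p_number(digits):
        """Probability of drawing a given three-digit combination."""
        p = 1.0
        for d in digits:
            p *= balls[d] / total
        return p

    print(p_number((9, 9, 9)))  # (9/45)**3 = 0.008, or 1 in 125
    print(p_number((1, 1, 1)))  # (1/45)**3, about 1 in 91,125
    print(p_number((0, 0, 0)))  # 0.0 - impossible, as promised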

So, this is simple, right?  Go out and buy a ticket for '999' and sit back and wait for fat sacks of cash. Problem solved?

Well, not quite.  Remember, even a rigged lottery really isn't statistically fair.  Betting on every horse might get you a cool story, but in the end still costs you money.  

Why, you ask?  Well, you might have the (relatively) genius idea to go out and buy a '999' ticket, but I'm betting (a good bet, not a lotto bet) that you'd not be the only person with the same idea.  

The way the lotto works is conceptually similar to the way horse racing works (which again was covered in that horse racing post).  Everyone buys a chance at winning, and it is that cash input that becomes the pot to be paid out after the house takes its share (the rake).  If the house didn't take a rake you'd be in a situation where acting on the odds could potentially just help you break even a whole bunch.  With the rake in place you're going to end up tending toward a slow drain (again, just like the horse racing charts in that much earlier post).  

This is because the pot is divided between all the people who shared in the winning of it.  If a million people buy dollar tickets, but they all buy a '999' ticket - and that ticket ends up being drawn - then what they are walking away with is a millionth share of a million dollars, less the rake.  If the rake is 10% they're walking away with a millionth share of 900,000 dollars, or 90 cents for every dollar in.  That's not a great place to be.
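That arithmetic, spelled out as a sketch (same numbers as above, nothing hidden):

    # A million players all buy the same (winning) dollar ticket.
    players = 1_000_000
    pot = players * 1.00           # every dollar in goes into the pot...
    rake = 0.10
    payout = pot * (1 - rake)      # ...minus the house's 10% cut
    print(payout / players)        # 0.9 - ninety cents back per dollar in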

The problem is simply that a lot of people are going to make the smartest bet (and not play the lotto? - okay, the second smartest bet) and play in line with the odds.  The way to win is to somehow figure out how to get outside of the main pack and still end up with a winning ticket.  You can buy '111' tickets over and over until that hits (and you're the only person silly enough to be holding one), but the numbers would suggest that on average you're going to spend more money on all those tickets while waiting for your number to come up.  You can also go to Vegas and just slam down Lincolns (I'm assuming none of you are going to Vegas and dropping Benjamins) on the green spot(s) on a roulette wheel (or any spot, really).

You might hit early and get super lucky on the first draw.  The moral of that - for hopefully obvious reasons - would be to stop playing the lotto and be happy that you capitalized on noise instead of having the odds capitalize on you.  

Enough with these pretend, odd lotteries, you say?  You're here for basketball (or Star Wars)?  How does this relate?

It relates because there is a bracket that corresponds to a lotto ticket of '999'.  I have filled out that bracket in the past - it makes it a lot easier to watch games and remember who to cheer for.  Always cheer for the favorite, and hope they're the favorite for a reason.  When you get to the final four make your picks based on records as well as strength of conference.  It's...pretty simple.  

The problem is that it's simple enough that you can be assured you're not the only one that's thought about it.  Picking a safe favorites bracket will not make you stand out among the crowd - if you do happen to pick a perfect bracket you're now sharing that prestige with a whole bunch of others (as well as any prize money - several places like Yahoo offer big prizes for registering a perfect bracket).  

Let's look at the extreme opposite for a moment.  What if instead of picking every favorite you misinterpret the numbering system (biggest number is best number, right?) and instead pick every underdog?

This would be similar to the '111' ticket, but in practice might be closer to a '000' ticket.  Such a situation seems so unlikely that in order for it to pay out you might have to do something crazy, like play March Madness for longer than the lifetime of the universe.  Some of you might be fine with that.

It's also the case that a '111' ticket is an obvious contrarian play, so you can be assured you'd also not be alone in your strategy.  As dumb as you might feel picking that bracket, you can be sure that the law of large numbers gave you at least a handful of partners in stupidity.  

So you want to be somewhere in that happy middle ground.  A bracket that's not too obvious, but one that is still unique.  It's not too hard to hit the unique part - estimates of the number of possible brackets run well past 10 digits.  Again, cascading choices produce exponential growth (well, I got it right that James Madison made it to the final four, and everything else right, but I totally missed that Missouri upset Colorado State!  Missouri!).  

You can start, then, with a totally safe bracket - the statistician in me would advise it.  From there, it's time to add your own pieces of flair to make it yours.  Who do you think stands the best chance of being upset?  Flip that one for the underdog and you've just reduced the number of people who have done the same thing.  Pick another underdog and the pool has probably diminished again.  Now, figure out how far those upsets are going to go along the path to the finals.  Figure out what upsets might occur in the Sweet 16 or Elite 8.  Pick those.  

You're walking a fine line here - it's conceptually similar to the trade-off of type I and type II error.  The farther you push away from the most likely bracket the farther you also push away from the group of people playing it safe.  The more risks you take the more likely you'd be standing alone with a perfect bracket if you ever hit one.  Make sense?  Makes sense. 

In any case, go and get filling out those brackets!  ESPECIALLY THE STAR WARS ONE.

If - though - all this talk about the way brackets really work has you totally disillusioned on the idea of March Madness, then bracket-less brackets might be for you.  Bracket-less brackets are another cool take on this that actually makes it a lot harder to play it 'safe'.  Instead of filling out a bracket you pick a team for each seed that you think will go the farthest, and get points when they move on.  I'm not going to get much deeper into it, because the guys over at Stuff Smart People Like already do a good job of it - they also have a bracket-less bracket contest you can easily take part in here:


Cheers to cross-promotion!  Now fill out some brackets!  

Wednesday, March 13, 2013

Melodifestivalen: Swedish singing competitions and those silly Brits

I'm suspecting that only a small fraction of my readers happen to be up on the most recent developments in Swedish musical competition programs that serve as the preliminary selection for the Eurovision song contest.


Well, that's a thing.  And it's actually quite a bit more interesting than any American musical competition show that I've ever seen.  So there's that.

Anyway, after initial selection, eight musical acts perform each week for four weeks.  Based on Swedish call-in vote, two of the competitors from each week go to the final.  In addition, two from each week go to a second chance round.  During this second chance round eight competitors are first cut down to four, who then face off in pairs until the two remaining go to the final.

The final thus consists of 10 competitors, and voting is a mix of international juries and Swedish call-in vote. 

It turns out some really interesting acts, like this:



and this:


and this:


and this:



Overall, though, it also puts a whole bunch of acts through multiple rounds of voting from multiple sources.  This produces some pretty interesting data, and gives us some interesting statistical options.

I hate to keep saying it, but once again Wikipedia provides (collects!) some pretty great data on how everything went down.  The results of all four rounds, the andra chansen (second chance) round, and the final can be found here:

http://en.wikipedia.org/wiki/Melodifestivalen_2013

There are six big-picture instances where the Swedish people were able to call in and express their vote: the four main rounds, the second chance round, and part of the final.  I mentioned that part of the final was points from international juries.  In fact, there are 11 international juries: Cyprus, Spain, Italy, Iceland, Malta, Ukraine, Israel, France, the UK, Croatia, and Germany.

Because only a certain portion of contestants move on to the final (and because there is no distinction before the final between first and second, or between third and fourth), there's actually not much variance in scoring from those first few rounds.  The most useful information comes from the final - 11 international juries and the Swedish people rate each of the 10 final contestants.

What can we learn from this?  Well, what I started thinking about was finding which countries scored the contestants in similar ways.  After a little more thinking I got to wondering if we could treat each country as an item by which each contestant is measured.  In such a case we could examine how well each country was measuring the same thing through a reliability analysis.
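If you want to follow along at home, Cronbach's alpha is easy enough to compute yourself.  Here's a minimal Python sketch - it assumes you've already pulled the Wikipedia results into a matrix with one row per contestant and one column per voter (the names here are mine, not from any particular stats package):

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_contestants, n_raters) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                           # number of raters (items)
        item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-rater variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
        return (k / (k - 1)) * (1.0 - item_vars / total_var)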

Now, I recognize that this is not without pitfalls - the fact that this is rank data means first that cases aren't independent.  It also means that reliability based on continuous measurements might not be the most applicable, but I'm going to look the other way for now.

A straight-up reliability analysis of the 11 country votes and the final Swedish phone vote gives us some numbers that are almost respectable - including a Cronbach's alpha of .695.  If this is a means of gathering consensus, though, which countries are simply coming out of left field?

We can take a look at the inter-item and item-total correlations, and what we find is that the UK seems to be the country acting the strangest (this is also backed up by an individual item-total(ish) Spearman's rho rank order correlation for the UK).  The item-total correlation for the UK is actually negative (-.11), implying that they are measuring something reasonably different than the rest of the countries.
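The item-total side is just as quick - correlate each country with the total of all the other countries, using Spearman's rho since we're dealing with ranks (again, a sketch assuming the same contestants-by-countries matrix):

    import numpy as np
    from scipy.stats import spearmanr

    def corrected_item_total(scores, i):
        """Spearman correlation of rater i with the sum of all other raters."""
        scores = np.asarray(scores, dtype=float)
        others = np.delete(scores, i, axis=1).sum(axis=1)
        rho, _ = spearmanr(scores[:, i], others)
        return rho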

So what happens if we pretend that the UK forgot to show up last weekend?

First off, our Cronbach's alpha jumps up to .734.

Here are the rankings and point totals as they played out in reality:

Competitor          Final Points   Rank
Robin Stjernberg    166            1
Yohio               133            2
Ulrik Munther       126            3
Anton Ewald         108            4
Louise Hoffsten     85             5
Sean Banan          78             6
Ralf Gyllenhammar   73             7
David Lindgren      69             8
State of Drama      68             9
Ravaillacz          40             10


And here are the rankings and point totals as they would have played out without the UK:

Competitor          Final Points   Rank
Robin Stjernberg    166            1
Yohio               131            2
Ulrik Munther       114            3
Anton Ewald         104            4
Louise Hoffsten     85             5
Sean Banan          68             7
Ralf Gyllenhammar   65             8
David Lindgren      69             6
State of Drama      62             9
Ravaillacz          39             10


You can see that not much has changed, as the only change seems to be David Lindgren jumping up two places above Sean Banan (boooooooo).

Cyprus also had a negative item-total correlation (albeit smaller), which it still has after the removal of the UK.  Removing Cyprus in addition to the UK bumps our Cronbach's alpha up to .775.  It also continues to move Sean Banan down the ranks:

Competitor          Final Points   Rank
Robin Stjernberg    165            1
Yohio               121            2
Ulrik Munther       108            3
Anton Ewald         96             4
Louise Hoffsten     83             5
Sean Banan          56             9
Ralf Gyllenhammar   65             6
David Lindgren      65             6
State of Drama      62             8
Ravaillacz          39             10


We could keep at it, but given the pretty sizable gap between first and second place it seems like the removal of particular countries isn't going to swing things much in any substantial way (the rest of the countries are also considerably more consistent).  It also pains me to begin to question if Sean Banan doing even as well as he did was simply due to noise in the data.

However painful, let's take a look.

We've been looking so far at the consistency of any given country, but we can also take a look at how stable the individual competitors were in terms of rank.  How to do this?  Well, let's see if means and standard deviations can help to paint a picture.

The idea here would be that if countries can't agree on how to rank a contestant then that contestant should have a higher standard deviation (error) around their mean rank.  Hand-waving again around some of the ceiling and floor suppression effects on SD from this kind of scale, here you go:

Competitor          Mean rank   SD rank
Robin Stjernberg    2.92        2.27
Yohio               5.58        2.31
Ulrik Munther       3.50        2.15
Anton Ewald         4.92        2.47
Louise Hoffsten     5.67        2.35
Sean Banan          5.83        2.55
Ralf Gyllenhammar   6.17        2.48
David Lindgren      5.08        2.84
State of Drama      5.25        2.26
Ravaillacz          7.33        0.89
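[These summaries are one-liners to reproduce, by the way - a quick sketch, with a made-up toy matrix standing in for the real ranks:]

    import numpy as np

    # Toy example: 3 contestants ranked by 4 voters (hypothetical numbers).
    ranks = np.array([[1, 1, 2, 1],
                      [2, 3, 1, 2],
                      [3, 2, 3, 3]], dtype=float)

    print(ranks.mean(axis=1).round(2))          # mean rank per contestant
    print(ranks.std(axis=1, ddof=1).round(2))   # sample SD, as in the table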


Interestingly, the only competitor that really stands out is Ravaillacz, whose very low SD reflects being almost universally ranked toward the bottom of every country's list.  David Lindgren's SD is also a bit high, though Sean Banan doesn't stand out the way I expected.  Most ranks stay pretty consistent.

[You may also notice that Yohio drops quite a bit if we just look at mean ranks.  This is because he did really well with the Swedish phone vote, which is weighted higher than any of the individual countries.]

Overall, it seems that Robin Stjernberg was pretty safe in his win, though perhaps in the future the British shouldn't get to vote on anything relating to music.  At least if that music is crazy Swedish music.

Sing us out, Mr. Stjernberg!





Wednesday, March 6, 2013

Cameras and the nature of noise

I'm pretty happy here - today we get to talk about two things that I really enjoy: cameras, and randomness.

With the proliferation of digital cameras and digital images it is very likely that at least some of you have an incorrect image or concept in your head when you hear the word 'noise' in the context of pictures.  That incorrect image may not be one of noise, but of pixelization. 

For example, let's start with a standard picture to which we can compare all others we talk about.  That picture will be this:


For reference, this is 3456x2304 pixels (8 megapixels), tripod-mounted, 200mm, 1.3 sec at F6.3, ISO100.

There's a lot of information there, and one of the things I'm looking forward to today is explaining the majority of it to you.  

Pictures - be they analog or digital - are made when light is focused onto a surface by a lens for a specified amount of time.  In traditional photography that surface is a frame of film.  In digital photography that surface is a charge-coupled device, or CCD (or something like it). 

It's easier to talk about resolution when it comes to digital images, so we'll start there.  The first number I tossed out (3456x2304) is the number of pixels in the above image - the first value is the width and the second is the height.  Multiplying the two clocks in just below 8 million, which is where the figure of 8 megapixels (MP) comes from - it's how many pixels make up the total image.  
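The multiplication is easy enough to check, if you don't trust camera marketing (and you shouldn't):

    width, height = 3456, 2304
    print(width * height)  # 7,962,624 - rounded up and sold as '8 megapixels'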

If you zoom in on the above image you're going to see some pretty clean lines - this number of pixels gives very good resolution for defining objects.  Pixelization occurs when there aren't enough pixels to adequately define a line.  We can demonstrate this by taking the exact same image and reducing its size.  First, let's take it down to 200x133.  That's well under even 1 megapixel:


If you look at the hard edges of some of the dice (especially the white one in the center with a 2 showing), you can start to see how the edges are beginning to look more like steps than hard lines.  This is because there aren't enough pixels spread across the image to 'smooth' that line.  This will be less apparent on true horizontal or vertical lines and worst on lines angled 45 degrees to the sides. 

We can make this worse to really illustrate the point - here's the same image taken down to 20x13:


When you cut the number of pixels - in any given horizontal or vertical line - in half, the new pixels are created by averaging two pixels into one.  You can see that happening here, to large effect.  Each pixel no longer describes a really small area of the photo (think grains of sand in the first image), but a wide section of it.  This is pixelization.  It is not noise - it is lack of resolution.  When you see little staircases in your pictures (granted you're not taking pictures of little staircases), your problem is image size.  For most of us, something like 8MP is more than enough.
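If you'd like to see that averaging in action, here's a minimal sketch - it assumes an RGB image already loaded as a numpy array, and it's the general idea rather than how any particular editor implements it:

    import numpy as np

    def downscale_by_half(img):
        """Halve both dimensions by averaging each 2x2 block of pixels."""
        h, w, c = img.shape
        img = img[: h - h % 2, : w - w % 2]            # trim to even dimensions
        blocks = img.reshape(h // 2, 2, w // 2, 2, c)  # group into 2x2 blocks
        return blocks.mean(axis=(1, 3)).astype(img.dtype)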

One of the other things you might recognize in this picture is that the only lines that can be defined in any reasonable way are those that are horizontal or vertical.  At this degree of pixelization the ability to meaningfully describe 45 degree angles is practically non-existent.

I mentioned above that a picture is made when light is focused onto a surface for a specified amount of time.  This takes us to the next point - the lens. 

If you're reading this article you're using some lenses - your eyes.  Some of you might be using some extra lenses in addition to your eyes in the form of glasses or contact lenses.  The notion is that these lenses are taking light from a wide area and focusing it down to a point or area.  If you want to learn more about lenses you can start here:

http://en.wikipedia.org/wiki/Lens_%28optics%29

Or just spend some time staring at this totally public domain image from the same article:



The number I quoted above (200mm) relates to the length of the lens.  Most people (I did for a long time) think that this refers to how physically long the lens you put on your camera is.  While that is somewhat related, it's not the actual measurement.  A 'mm' measurement on a camera lens is actually fairly complicated, but for our purposes is most directly related to how far the back element of the lens (the last piece of glass in the lens) is from the surface that the image is being focused onto.

If you have a zoom lens sitting around (okay, so not everyone does), you can check this by taking it off your camera and zooming it.  While the whole lens gets longer this is actually being done to move the last lens element away from the body of the camera. 

This measure of a lens isn't really important to our discussion, but now you know a little more about cameras, right? 

The last three numbers above are actually pretty important - they're the heart of exposure in photography. 

To remind you, they were "1.3 sec at F6.3, ISO100." 

Remember that a picture is made when a lens focuses light on a surface for a specified amount of time. 

The three things that drive exposure are shutter speed, aperture, and film/sensor sensitivity.  They are a balancing act - as you increase or decrease one you have to account for that change in some change in one or both of the other two.   

When you take a picture you're opening the shutter behind the lens to let light hit the sensor for as long as the shutter is open.  In the above example this was 1.3 seconds (one of the reasons I did this tripod mounted).  This is fairly simple.

Also behind (or often in) the lens is the aperture - a series of panels that close down to let less light back into the camera.  To demonstrate this, hold up your hand with your thumb closest to you and the rest of your fingers in a line behind it.  Now touch the tip of your index finger to the inside tip of your thumb to form a circle - think of giving someone the 'okay' gesture.

That circle is a decent size, at least the largest you're going to be able to make with those two fingers.  Slowly move the tip of your index finger down the side of your thumb and into the crease between them.  As you do this you should notice that the circle formed by them is (more or less) staying a circle, but slowly getting smaller.

This is more or less what the aperture is doing inside your camera.

The measure for aperture is usually represented as F'something' - in the above example it's F6.3.  This number is actually a ratio - it's the ratio of the mm length of the lens to the mm opening of the aperture.  Thus, as the size of the aperture opening gets smaller and smaller the F number actually gets bigger.  This is because the numerator of the fraction is staying the same and the denominator is getting smaller - the outcome is a larger number (think 1/4 vs 1/2, or 4/4 vs 4/2).

With some quick math, we can thus figure out (no one ever figures this out, ever) how wide open the aperture was for this shot: 6.3 = 200mm / x, so x = 200 / 6.3 ~ 31.7mm.  

The higher the aperture (F) number, the less light is getting into the camera.  This means that the shutter has to be open longer.  Exposure is balanced in this way - the more light the aperture lets in the shorter the shutter is open, and vice versa.  That's not the whole story, though.

We're almost to the punchline, by the way.  The last number is ISO100, and it translates to the sensitivity of the medium.  In traditional photography this is the sensitivity of the film surface to light - achieved by packing more light sensitive chemicals in the film frame.  Each roll of film is a particular sensitivity and can't be changed on the fly.  In digital photography this is the sensitivity of the CCD to light - achieved by...well, turning up the sensitivity of the CCD to light.  One of the advantages of digital imagery is that this can be easily changed on the fly at any time. 

Most digital cameras range from an ISO of around 100 to somewhere around 1600 or 3200.  Some make it up to 6400, but the scale follows doubling rules - that is the jump from 100 to 200 is the same as from 400 to 800 or 3200 to 6400.  

Like I said, if you want to know a whole lot more about CCDs you can start on that wikipedia page.  They're pretty cool, but complicated enough that I'm not going to get into them too deeply here.

What we're going to consider them as are electronic devices with a number of holes that photons can fall into.  What are photons?  For our purposes...uh, pieces of light.

Think back to the first image we looked at - there are something like 8 million pixels in it.  As long as the lens is focusing things adequately, that means the picture can account for light coming from (or in this case, being reflected by) around 8 million sources.

Electronics are imperfect.  Many are designed to operate within particular tolerances.  My camera's CCD may very well be rated for use at ISO100.  Pushing it higher than that - again, for our purposes - can be thought of as a way of overclocking it.  More specifically, sacrificing accuracy for power.  

You see, there are error rates at play here.  If we're treating the CCD as a photon collector then its job is to tell the onboard computer every time a photon passes through the gates at any pixel.  If you want this to be pretty accurate you need to make sure you're letting in what you are actually looking for.  This means setting higher thresholds for what is considered a photon. 

Think of it a different way.  At the lowest ISO settings you're setting pretty high standards in terms of letting things through.  Imagine 8 million tiny bouncers - complete with velvet ropes - each standing next to one of the pixels on the CCD.  They are responsible for making sure that what's getting through the gates is actually a photon, and not just a false positive.  At low ISO they are pretty thorough - they have a list and you had best be on it.  They're so thorough that they may stop some actual photons from getting in.  They're willing to sacrifice some false negatives to make sure that false positives are near zero.  

If you're in a situation with a lot of light this isn't a problem.  This might be because there's a lot of light in the scene, or because you're allowing a lot of light to get into the camera (long shutter speed, tripod). 

If you're in a situation with not much light (and no tripod), you might be willing to relax your draconian stance on false positives to make sure you're catching more of those photons you turned away as false negatives.  An ISO200 bouncer is twice as relaxed as an ISO100 bouncer, an ISO400 bouncer is twice as relaxed as that, etc.
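We can make the bouncer metaphor concrete with a toy simulation.  The signal levels and thresholds below are completely made up - the false positive/false negative trade-off is the point, not the particular numbers:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical signal levels: real photon events versus spurious
    # charge (heat, readout junk), both a little noisy.
    photons = rng.normal(10.0, 2.0, 100_000)
    junk = rng.normal(4.0, 2.0, 100_000)

    # The 'bouncer' is just a detection threshold; raising ISO lowers it.
    for iso, threshold in [(100, 9.0), (400, 6.5), (1600, 3.0)]:
        false_neg = (photons < threshold).mean()   # real photons turned away
        false_pos = (junk >= threshold).mean()     # junk waved through
        print(f"ISO{iso}: misses {false_neg:.0%} of photons, "
              f"admits {false_pos:.0%} of the junk")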

At a certain point, the bouncer is barely doing his job at all.  For my camera that's around ISO1600.  He's letting in photons, but he's also letting in his friends and any riff-raff that happens to wander along.  It is a party, and everyone is invited.

Here's how it begins to play out:

ISO100:


ISO400:


ISO1600:  



FYI, all of these are now at a fixed aperture of F8 (which is about the sweet spot on my lens), so shutter speed varies (goes down) as the sensitivity increases.

If you have a keen eye you might start to notice a bit more noise in some of the areas of the picture as the sensitivity goes up.  This brings us to an actual conversation of what that noise represents. 

There's some 'true' image here that represents what was actually sitting on the table in front of me while I took these pictures.  Each pixel should have a value relating to the number of photons that made it past each gate (I'm glossing over the fact that this is a color photo), and the lower the ISO the closer to 'truth' this should be.  Put another way, the observed values should cluster more closely around the actual values at lower ISO settings. 

When you turn up the sensitivity by raising the ISO you start getting measurements of things that aren't actually true - this is noise.  You begin to see it as a sort of 'digital grain' - if you weren't able to pick up on it above we can zoom in a bit to really make it clear:

ISO100:



ISO400:


ISO1600:


At this point you should be able to see the pixel-level noise that starts to manifest itself at higher sensitivity.  Things look fairly 'real' at ISO100, even at this level of zoom.  At ISO400 you start to see a bit of that 'digital grain', and at ISO1600 it is very pronounced.

What is it, though?

Well, it's noise, and it's (presumably) random.  If this image were gray-scale, each pixel would be establishing a level between black and white, represented by a number.  We can think the same way about each pixel in a color image, except there are actually a few color channels being measured.

Let's say, though, that any given pixel in the scene is actually reflecting x quantity of light.  If that pixel is being measured by the sensor as x, then you're getting a highly accurate representation of the scene.  It's more likely that there's some error present in your measurement, that is:  observed = true + error

That error can be positive or negative by pixel, and again should be fairly random.  The less you are trying to push the sensitivity the more accurate each pixel can be - the closer observed will be to true.  That's why the image at ISO100 looks fairly 'true' - the bouncer at this level is providing a good deal of scrutiny and making sure things are what they seem.

The reason the image at ISO1600 looks 'grainy' is because these error bars increase with increased sensitivity.  If the magnitude of error is higher, then your observation (the CCD's observation) of any given pixel is going to tend to be less 'true' - farther away from x on average. 

If you're particularly inclined, you can imagine a normal distribution around this true value x.  The higher the sensitivity, the flatter and wider this distribution.  You're much more likely to pull a number that's highly incorrect at ISO1600 than you are at ISO100.
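That intuition is easy to simulate - a sketch with made-up noise levels standing in for the real sensor behavior:

    import numpy as np

    rng = np.random.default_rng(0)

    true_value = 100.0   # 'x': the light actually hitting one pixel
    noise_sd = {100: 2.0, 400: 5.0, 1600: 15.0}   # hypothetical SD per ISO

    for iso, sd in noise_sd.items():
        observed = true_value + rng.normal(0.0, sd, size=8)
        print(f"ISO{iso}:", observed.round(1))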

When you look at the ISO1600 image, you're seeing individual pixels because there's a contrast emerging between adjacent pixels.  Contrast is simply the magnitude of the difference between two areas - if you're looking to define a difference between black and white, or lighter or darker, contrast is great.  Flat-screen TVs often let you know what their contrast is, something displayed like 10,000:1 - this means that the whites that the TV is capable of producing are 10,000 times brighter than the blacks it is capable of producing. 

The contrast you're achieving at a pixel level in the ISO1600 image is partly false contrast, however.  The color and luminosity of one of the dice at any given location is actually pretty similar to the area (pixels) directly adjacent.  The reason that you're seeing a difference is because noise is being distributed randomly and making some pixels lighter and some pixels darker.  When a pixel is randomly made lighter than those around it you notice it much more than if it was similar to those pixels around it.  It looks like a small bit of light surrounded by darker area.   

This is why you're seeing 'grain' - what you're seeing is deviations from 'true' at the pixel level.  
 
All said, there's still an assumption there that this error and noise is randomly distributed.  Because of this, there's a way that you can have the best of both worlds - high sensitivity but accurate observation.

It may be a fun thought experiment to stop reading here for a second to see if you can use the information provided to independently come up with a way to remove noise in images like those at ISO1600.

It has to do with averaging.

If we assume that the noise at each pixel is randomly distributed then the best guess for the actual 'true' value for any pixel should be the average information for that pixel across multiple images.  This is also where a tripod is necessary - if you take a number of pictures of the same thing (in this case I did around 70, but that's way overkill) you can average them to find a best guess for the 'true' value of that pixel.
   
There are actually some good tools available to do this - the one I use is called ImageStacker.  An image is really just a lot of information relating to the value of each pixel in the image.  In a grayscale image that has to do with gradients of black and white, and in a color image it relates to the gradients of a few different colors. 

Basically, though, you can conceptualize an 8MP digital image as 8 million numbers telling you the state of each pixel (again, more if you want to consider color).  It's easy enough to take 8 million averages using today's computers, and that's what programs that do this are doing.
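In fact, the averaging itself fits in a few lines.  Here's a minimal sketch using numpy and Pillow - the filenames are hypothetical, and actual tools like ImageStacker presumably add their own alignment and bookkeeping on top:

    import numpy as np
    from PIL import Image

    # Average a stack of identically framed shots: the shared 'true'
    # image survives while the random noise averages toward zero.
    paths = [f"iso1600_{i:02d}.png" for i in range(70)]   # hypothetical files
    stack = np.stack([np.asarray(Image.open(p), dtype=np.float64)
                      for p in paths])
    mean_img = stack.mean(axis=0)
    Image.fromarray(mean_img.round().astype(np.uint8)).save("averaged.png")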

Perhaps the result would speak best for itself.

Here's the full single image at ISO1600:

 
And here's 70 images at ISO1600, averaged into one image:


Again, the best picture is probably painted at a high zoom where noise would be the most apparent.  Here's the zoom on a single image at ISO1600:


And here's the zoom on 70 images at ISO1600, averaged:


For comparison, here's also again the zoomed image at ISO100:


Obviously, the average is quite a bit closer to the single ISO100 image.

Keep in mind as well that the image averaged from 70 images is NOT 70 times larger as a file.  It's still just one image, and is the same size as any one of the images that were used to produce it.  What's being thrown away in the process isn't image information, but redundancy and noise.  The 'true' information that exists in every picture is that which is shared - the redundancy across those images is used (in effect) to determine what is 'true'.  That - again - is done through simple averaging.

The fact that it works demonstrates that this noise is in fact randomly distributed - if there was a bias one way or another the averaged image would have a luminosity or color shift when compared to the single image at ISO100.  It does not.  In fact, they're fairly indistinguishable to the eye.  What I'm noticing when I really look is that the ISO100 image actually has a bit more noise than the average.  For instance, take a closer look at the green d6 in the lower right corner.    

Hopefully - if you're still with me - you understand a whole lot more about how cameras work.  Oh, and maybe you have a better understanding of random error.  Or maybe you just want to roll some dice.  I'd be happy with any of those outcomes.

And for those that are just wondering if they can have a full size example of all these dice, here you go: