Wednesday, March 27, 2013

Facebook statistics and a mid-season clip-show OR Obligatory self-promotional ramblings

Given that we just passed the point where I've been doing this weekly for seven months, I thought this was as good a time as any to look back on some of the statistics that get tossed out of the behind-the-scenes end of this whole process.  Particularly, some of the statistics that Blogger and facebook give me.

I know that some of you follow the updates of this blog on facebook (by the way, I don't ever capitalize facebook - the fact that it's capitalized in the title is because of title formatting, nothing else), as I've had several of you mention that you missed a post because you didn't see it pop up in your news feed (or live feed, or news live feed, or timeline, or timeline feed, or timeline news feed, or livetime news feed, or linetime livefeed news ticker, or wall).  

If you've never had a 'non-regular' page on facebook, you might not have a good feel for the statistics that get put out on my end when I go and look at it. 

By the way, before we go too much further I should say that the page in question is here:

https://www.facebook.com/TheSkepticalStatistician

though even in a post specifically about it I'd be remiss not to mention that I actually hate the facebook 'like' pages process and a lot of how it's utilized.  That said, if you want to use facebook as a means of keeping up with things, then go for it.  If you have your own qualms about it - as I do - well, there are other (potentially better) ways.  I don't 'dislike' you for not 'like'ing the page. 

I'm sorry, this post isn't supposed to be facebook bashing, it's supposed to be talking about the stats that facebook gives me about your actions.  So let's start with a picture and go from there:


  

So one of the big things that I can see whenever I post something through the facebook page is how many people have 'seen' that post.  You can 'see' it in this image, though don't stare at it too long waiting for it to go up - it's just a picture.  See, you're not really 'seeing' it.  Facebook knows when you 'see' things, unless you're 'seeing' it some other way.  Well, at least at the moment.  They're probably 'working' on that.

I'm not sure exactly what facebook means by 'see' (or rather 'saw'), but I would imagine they're assuming that if it comes across your screen then you've looked at it.  There are some parts of the internets that I'd advise browsing with your eyes closed, but facebook isn't one of them (if you're doing it right, at least).  

The number of people who 'see' any given post on facebook is actually quite interesting.  It's fairly consistent with the number of people who 'like' the page, but it can be lower if the post doesn't come across your wall, or higher if something happens that facebook measures with 'virality'.  Unfortunately I don't have an easy way to map the number of people who 'like'd the page at any given time (I said easy), so it's hard to show how those numbers change over time.

Time of day of posting seems to have some impact, though I'm also not really going to sit around and do the math on that - and you know the kinds of things that I sit around doing math on.  I'm not looking to take advantage of you to that degree.

There are some other 'admin' tools that facebook gives me to look at things, which give a picture of some of the demographics of the people who like the page, etc.  Most of them are actually pretty stupid, but if you're curious here's what some of them look like:



The thing that should really stand out to you is the way that facebook markets to the people who are one step up the 'commoditizing people' scale (e.g. me).  You can see that there are 56 people who 'like' the facebook page, but the number that facebook wants to shove in my face is 22,835 - the number of friends that all of you have.  Fifty-six people, multiplied up by a factor of about four hundred. 

You may have noticed the 'promote' thing in earlier pictures - it's because by throwing a little bit of money at facebook you can actually buy access to those 22,835 people.  That's what facebook is trying to commoditize - and they're doing it really well.

Yes, by liking this page you give me access to buy rights to your friends.  Re-read that.  Seriously, re-read that.  I'll wait.

Now, I'm not a total jerk (and I'm also cheap) so I'm not going to do that.  But all of facebook's stats make you feel bad if you don't.  All of facebook's methods are set up so that you do.

One of the ways they do this (assuming all of you haven't immediately un'like'd the page now that you've 'seen' how the sausage is made, so to speak) is through another thing you'll notice facebook talks about: 'virality.'  This is how things spread beyond just your 'likes' into other areas - friends of 'friends' and beyond.  Now, I've just told you that you can buy access to this network, and that I don't.  So how do I go about 'achieving' 'virality'?

Well, by you telling other people about posts, and especially by you 'sharing' the posts that you like with your friends (so I don't have to pay to just do it myself and share the posts that everyone hates).
The idea is that your friends then 'share' things with their friends, etc, etc.  Your friends are one step further away from me, and their friends further still - I don't know them, right?  Let's talk about your friends' friends' friends.  And after we're done talking about them, let's commoditize them.  Yeeeeeeeeeeeees.  Too early?  Okay, let's continue.

In any case, I think the best information comes out of the number of views of posts and the actual page views from Blogger relating to any given post (which don't take into account people who simply check the main page when it contains new content).  We can toss actual views of specific posts on a graph along with facebook 'sees' as well as more specific facebook 'likes' and 'shares' - this is what it looks like:



You can see that the Star Wars posts seemed to drive a somewhat lagged boost in views, and before you ask, yes I am working on the final post on the Star Wars stuff.

You can also see that the facebook stuff is pretty robustly indifferent to actual change.  No matter how many people 'see' or 'like' or 'share' or 'view' or 'whatever', it doesn't seem to have much bearing on how many people actually view the post itself.  Throughput, if you will. 

There are other ways that people keep up with the blog, from what I'm told.  Some people have talked about RSS, and some have complained about it.  I think it works?  Blogger also lets you subscribe by email, but I've also never done it.  I guess it would be pretty nice, though.

Twitter is also always an option, as they seem to care a whole lot less about money than facebook does.  If you happen to have an account feel free to follow @skepticalstats - I'd talk more about it, but Twitter gives me far fewer (i.e. no) stats than facebook does.  Again, probably because I'm not elevated to a separate Twitter tier the way I am on facebook. 

Before I started using facebook (and thus before I have stats on the above graph), I was using Twitter for this stuff and having some pretty good results - the fact that the graph is starting a little higher and trending down is because it was up pretty high from some of the The Price is Right posts that got spread a bit around Twitter.

But if you start using Twitter - whatever you do - don't get @skepticalstats confused with @ellipticalcats.

Oh, and tell your friends, so that I don't have to.

Wednesday, March 20, 2013

The Madness of March

Love it or hate it, March Madness is here.  Whether you've filled out a bracket (or five), or have no intention of ever bracketing anything (or have no idea what March Madness is), the annual college basketball tournament showdown that takes place every March gives us an interesting opportunity to talk about the probabilities at play.



Let's run through it really quick for those who are scratching their heads.  College basketball is a thing.  Play up to this point in the season has pointed out that some teams are doing better than others.  Teams can thus be rank ordered.  Best teams play worst teams, middle teams play middle teams, teams that win move to the next level of a bracket.  Teams that lose are out.  The winner of the whole thing wins college basketball (basically).

You like the idea of brackets but still hate college basketball.  I get it.  Maybe you wish I was just talking about Star Wars again?  PROBLEM SOLVED:


By the way, is anyone but Vader ever going to win something like that?  Prove me wrong, Internets, prove me wrong.  

Basketball.  For what it's worth, a lot more people are going to fill out basketball brackets than Star Wars brackets this year.  How many more people?  Well, it's hard to find accurate numbers, but from the numbers I've seen it's a pretty safe claim that the low end is several million to tens of millions of brackets each year.

Has anyone ever filled out a perfect bracket?  Nope.  The odds are pretty rough against you, as the cascading contingencies get fairly complex.  We could spend the rest of this week just talking about calculating the odds of a perfect bracket (1 in like a lot), but that's not what I want to focus on. 
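To put "1 in like a lot" in rough perspective, here's a back-of-the-envelope sketch (assuming a 64-team field, i.e. 63 games, and ignoring play-in games; the 70% favorite win rate is a made-up figure for illustration):

```python
# Perfect-bracket odds, back of the envelope.
# A 64-team single-elimination field means 63 games total.
games = 63

# If every game were a coin flip:
coin_flip = 2 ** games
print(f"1 in {coin_flip:,}")  # 1 in 9,223,372,036,854,775,808

# If you pick favorites and each favorite wins, say, 70% of the
# time (an assumed number), the odds improve but are still absurd:
p = 0.70 ** games
print(f"1 in {1 / p:,.0f}")
```

Even under generous assumptions about picking favorites, you're still in billions-to-one territory.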

I'm also not going to tell you how to pick a perfect bracket (if I knew I probably wouldn't tell you, and I'd be a lot richer), nor which team is going to be the best upset, etc.  

What I want to take a look at is what the system of brackets is really manifesting.  In fact, when we get down to the base of it, it's a lot like a lottery, or horse racing (which I've already talked about here).  Sorry if this is starting to feel like a clip-show.

What makes it more like horse racing than a lottery (at least a random lottery) is the odds present in each match-up.  Until you get to the final four (and even then, if you count certain ranking groups), every match-up has one team the tournament ranks as the favorite and one as the underdog.  If you know anything about statistics (and nothing about basketball), that means the safe bet is always the more probable outcome.  

At the horse races, there is a favorite in every race.  The problem is that your odds are not good for betting on the favorite.  In theory, those odds could sometimes drop below 1:1 - for every dollar bet you might get less than a dollar back for a winning bet.

Some of you might not follow horse racing, so let's go back to a lottery and just fix it (figuratively and literally) so that it's a little less random.  

For a game like pick 3, there are three boxes full of balls with the numbers 0-9 on them.  A ball is drawn from each of these pools in order, resulting in a three-digit number that is the winner (i.e. any number between 000 and 999).  Every number (e.g. 333, 148, 042, 611, 500, 007, 554) has equal odds of being drawn, despite (incorrect) perceptions of dependence between draws (e.g. thinking something like 444 is less likely than something like 435).  

So, let's rig this lotto.  Let's rig it good.  

Instead of there being 10 balls in these draws, let's make things a bit more interesting.  Let's put in as many balls of each number in each draw as...well, that number.  That means 3 balls that say '3', 5 balls that say '5', 9 balls that say '9'.  We've just made a draw of '999' a lot more likely than a draw of '111'.  We've also made a draw of '000' or '990' impossible, oddly enough.  
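Here's a quick sketch of what that rigging does to the odds (my made-up game, not any real lottery's): with d balls labeled d in each pool, each pool holds 1+2+...+9 = 45 balls, and the three draws multiply independently.

```python
TOTAL = sum(range(10))  # 45 balls per pool: one '1', two '2's, ... nine '9's

def p_number(number):
    """Probability of drawing a given three-digit string in the rigged game."""
    p = 1.0
    for ch in number:
        p *= int(ch) / TOTAL
    return p

print(p_number('999'))  # (9/45)**3 = 0.2**3, the single most likely ticket
print(p_number('111'))  # (1/45)**3, roughly 1 in 91,000
print(p_number('990'))  # 0.0 -- there are no '0' balls, so any 0 is impossible
```

So '999' is about 730 times more likely than '111', and anything containing a 0 simply can't happen.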

So, this is simple, right?  Go out and buy a ticket for '999' and sit back and wait for fat sacks of cash. Problem solved?

Well, not quite.  Remember, even a rigged lottery really isn't statistically fair.  Betting on every horse might get you a cool story, but in the end still costs you money.  

Why, you ask?  Well, you might have the (relatively) genius idea to go out and buy a '999' ticket, but I'm betting (a good bet, not a lotto bet) that you'd not be the only person with the same idea.  

The way the lotto works is conceptually similar to the way horse racing works (which again was covered in that horse racing post).  Everyone buys a chance at winning, and that cash input becomes the prize pool after the house takes its share (the rake).  If the house didn't take a rake you'd be in a situation where acting on the odds could potentially just help you break even a whole bunch.  With the rake in place you're going to end up tending toward a slow drain (again, just like the horse racing charts in that much earlier post).  

This is because the take is divided between all the people who shared in the winning of it.  If a million people buy dollar tickets, but they all buy a '999' ticket - and that ticket ends up being drawn - then what they are walking away with is a millionth share of a million dollars, less the rake.  If the rake is 10% they're walking away with a millionth share of 900,000 dollars, or about 90 cents for every dollar in.  That's not a great place to be.
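That arithmetic is simple enough to sketch (the million tickets and 10% rake are the made-up numbers from above):

```python
pool = 1_000_000     # a million $1 tickets sold
winners = 1_000_000  # everyone bought the same '999' ticket, and it hit
rake = 0.10          # the house's cut

payout_per_winner = pool * (1 - rake) / winners
print(payout_per_winner)  # 0.9 -- ninety cents back on every dollar in
```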

The problem is simply that a lot of people are going to make the smartest bet (and not play the lotto? - okay, the second smartest bet) and play in line with the odds.  The way to win is to somehow figure out how to get outside of the main pack and still end up with a winning ticket.  You can buy '111' tickets over and over until that hits (and you're the only person silly enough to be holding one), but the numbers would suggest that on average you're going to spend more money on all those tickets while waiting for your number to come up.  You can also go to Vegas and just slam down Lincolns (I'm assuming none of you are going to Vegas and dropping Benjamins) on the green spot(s) on a roulette wheel (or any spot, really).

You might hit early and get super lucky on the first draw.  The moral of that - for hopefully obvious reasons - would be to stop playing the lotto and be happy that you capitalized on noise instead of having the odds capitalize on you.  

Enough with these pretend, odd lotteries, you say?  You're here for basketball (or Star Wars)?  How does this relate?

It relates because there is a bracket that corresponds to a lotto ticket of '999'.  I have filled out that bracket in the past - it makes it a lot easier to watch games and remember who to cheer for.  Always cheer for the favorite, and hope they're the favorite for a reason.  When you get to the final four make your picks based on records as well as strength of conference.  It's...pretty simple.  

The problem is that it's simple enough that you can be assured you're not the only one that's thought about it.  Picking a safe favorites bracket will not make you stand out among the crowd - if you do happen to pick a perfect bracket you're now sharing that prestige with a whole bunch of others (as well as any prize money - several places like Yahoo offer big prizes for registering a perfect bracket).  

Let's look at the extreme opposite for a moment.  What if instead of picking every favorite you misinterpret the numbering system (biggest number is best number, right?) and instead pick every underdog?

This would be similar to the '111' ticket, but in practice might be closer to a '000' ticket.  Such a situation seems so unlikely that in order for it to pay out you might have to do something crazy, like play March Madness longer than the lifetime of the universe.  Some of you might be fine with that.

It's also the case that a '111' ticket is easy prey, so you can be assured you'd also not be alone in your strategy.  As dumb as you might feel picking that bracket, you can be sure that the law of large numbers gave you at least a handful of partners in stupidity.  

So you want to be somewhere in that happy middle ground.  A bracket that's not too obvious, but one that is still unique.  Not too hard to hit the unique part - estimates of the number of possible brackets are well into the 10+ digits.  Again, cascading choices produce exponential growth (well, I got it right that James Madison made it to the final four, and everything else right, but I totally missed that Missouri upset Colorado State! Missouri!)  

You can start, then, with a totally safe bracket - the statistician in me would advise it.  From there, it's time to add your own pieces of flair to make it yours.  Who do you think stands the best chance of being upset?  Flip that one for the underdog and you've just reduced the number of people who have done the same thing.  Pick another underdog and the pool has probably diminished again.  Now, figure out how far those upsets are going to go along the path to the finals.  Figure out what upsets might occur in the Sweet 16 or Elite 8.  Pick those.  

You're walking a fine line here - it's conceptually similar to the trade-off of type I and type II error.  The farther you push away from the most likely bracket the farther you also push away from the group of people playing it safe.  The more risks you take the more likely you'd be standing alone with a perfect bracket if you ever hit one.  Make sense?  Makes sense. 

In any case, go and get filling out those brackets!  ESPECIALLY THE STAR WARS ONE.

If - though - all this talk about the way brackets really work has you totally disillusioned on the idea of March Madness, then bracket-less brackets might be for you.  Bracket-less brackets are another cool take on this that actually makes it a lot harder to play it 'safe'.  Instead of filling out a bracket you pick a team for each seed that you think will go the farthest, and get points when they move on.  I'm not going to get much deeper into it, because the guys over at Stuff Smart People Like already do a good job at it - they also have a bracket-less bracket contest you can easily take part in here:


Cheers to cross-promotion!  Now fill out some brackets!  

Wednesday, March 13, 2013

Melodifestivalen: Swedish singing competitions and those silly Brits

I suspect that only a small fraction of my readers happen to be up on the most recent developments in Swedish musical competition programs that serve as preliminary selection for the Eurovision song contest.


Well, that's a thing.  And it's actually quite a bit more interesting than any American musical competition show that I've ever seen.  So there's that.

Anyway, after initial selection, eight musical acts perform each week for four weeks.  Based on Swedish call-in vote, two of the competitors from each week go to the final.  In addition, two from each week go to a second chance round.  During this second chance round eight competitors are first cut down to four, who then face off in pairs until the two remaining go to the final.

The final thus consists of 10 competitors, and voting is a mix of international juries and Swedish call-in vote. 

It turns out some really interesting acts, like this:



and this:


and this:


and this:



Overall, though, it also puts a whole bunch of acts through multiple rounds of voting from multiple sources.  This produces some pretty interesting data, and gives us some interesting statistical options.

I hate to keep saying it, but once again Wikipedia provides (collects!) some pretty great data on how everything went down.  The results of all four rounds, the andra chansen round, and the final can be found here:

http://en.wikipedia.org/wiki/Melodifestivalen_2013

There are six big-picture instances where the Swedish people were able to call in and express their vote.  These six times are the four main rounds, the second chance round, and part of the final.  I mentioned that part of the final was points from international juries.  In fact, there are 11 international juries: Cyprus, Spain, Italy, Iceland, Malta, Ukraine, Israel, France, the UK, Croatia, and Germany.

Because only a certain portion of contestants move on to the final (and because there is no distinction before the final between first and second, or between third and fourth) there's actually not much variance in scoring from those first few rounds.  The most useful information comes from the final - 11 international juries and the Swedish people rate each of the 10 final contestants.

What can we learn from this?  Well, what I started thinking about was finding which countries scored the contestants in similar ways.  After a little more thinking I got to wondering if we could treat each country as an item by which each contestant is measured.  In such a case we could examine how well each country is measuring the same thing through a reliability analysis.

Now, I recognize that this is not without pitfalls - the fact that this is rank data means first that cases aren't independent.  It also means that reliability based on continuous measurements might not be the most applicable, but I'm going to look the other way for now.

A straight-up reliability analysis of the 11 country votes and the final Swedish phone vote gives us some numbers that are almost respectable - including a Cronbach's alpha of .695.  If this is a means of gathering consensus, though, which countries are simply coming out of left field?

We can take a look at the inter-item and item-total correlations, and what we find is that the UK seems to be the country acting the most strange (this is also backed up by an individual item-total(ish) Spearman's rho rank order correlation for the UK).  The item-total correlation for the UK is actually negative (-.11), implying that the UK is measuring something reasonably different from the rest of the countries.
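For the curious, Cronbach's alpha is easy to compute by hand.  Here's a sketch with made-up scores (five contestants, three raters - not the actual Melodifestivalen votes):

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Alpha for a matrix with rows = contestants, columns = raters."""
    k = len(ratings[0])                          # number of raters
    item_vars = sum(variance(col) for col in zip(*ratings))
    total_var = variance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 contestants scored by 3 fairly consistent raters
scores = [[10, 9, 8],
          [ 8, 8, 7],
          [ 6, 7, 6],
          [ 4, 3, 5],
          [ 2, 1, 2]]
print(round(cronbach_alpha(scores), 3))  # 0.973
```

The high alpha here reflects that these toy raters mostly agree on the ordering; a rater scoring against the grain (like the UK apparently did) drags alpha down.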

So what happens if we pretend that the UK forgot to show up last weekend?

First off, our Cronbach's alpha jumps up to .734.

Here are the rankings and point totals as they played out in reality:

Competitor          Final Points   Rank
Robin Stjernberg    166            1
Yohio               133            2
Ulrik Munther       126            3
Anton Ewald         108            4
Louise Hoffsten     85             5
Sean Banan          78             6
Ralf Gyllenhammar   73             7
David Lindgren      69             8
State of Drama      68             9
Ravaillacz          40             10


And here are the rankings and point totals as they would have played out without the UK:

Competitor          Final Points   Rank
Robin Stjernberg    166            1
Yohio               131            2
Ulrik Munther       114            3
Anton Ewald         104            4
Louise Hoffsten     85             5
Sean Banan          68             7
Ralf Gyllenhammar   65             8
David Lindgren      69             6
State of Drama      62             9
Ravaillacz          39             10


You can see that not much has changed - the only difference is David Lindgren jumping up two places, above Sean Banan (boooooooo).

Cyprus also had a negative item-total correlation (albeit smaller), which it still has after the removal of the UK.  Removing Cyprus in addition to the UK bumps our Cronbach's alpha up to .775.  It also continues to move Sean Banan down the ranks:

Competitor          Final Points   Rank
Robin Stjernberg    165            1
Yohio               121            2
Ulrik Munther       108            3
Anton Ewald         96             4
Louise Hoffsten     83             5
Sean Banan          56             9
Ralf Gyllenhammar   65             6
David Lindgren      65             6
State of Drama      62             8
Ravaillacz          39             10


We could keep at it, but given the pretty sizable gap between first and second place it seems like the removal of particular countries isn't going to swing things much in any substantial way (the rest of the countries are also considerably more consistent).  It also pains me to begin to question if Sean Banan doing even as well as he did was simply due to noise in the data.

However painful, let's take a look.

We've been looking so far at the consistency of any given country, but we can also take a look at how stable the individual competitors were in terms of rank.  How to do this?  Well, let's see if means and standard deviations can help to paint a picture.

The idea here would be that if countries can't agree on how to rank a contestant then that contestant should have a higher standard deviation (error) around their mean rank.  Hand-waving again around some of the ceiling and floor suppression effects on SD from this kind of scale, here you go:

Competitor          Mean rank   SD rank
Robin Stjernberg    2.92        2.27
Yohio               5.58        2.31
Ulrik Munther       3.50        2.15
Anton Ewald         4.92        2.47
Louise Hoffsten     5.67        2.35
Sean Banan          5.83        2.55
Ralf Gyllenhammar   6.17        2.48
David Lindgren      5.08        2.84
State of Drama      5.25        2.26
Ravaillacz          7.33        0.89


Interestingly, the only competitor that really stands out is Ravaillacz (almost universally ranked toward the bottom of every country's list).  David Lindgren is also a bit high, though Sean Banan doesn't seem to be standing out as I expected.  Most ranks stay pretty consistent.

[You also may notice that Yohio drops quite a bit if we just look at mean ranks.  That's because he did really well with the Swedish phone vote, which is weighted more heavily than any of the individual countries.]
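If you want to replicate this kind of table, each row is just the mean and sample standard deviation of that competitor's ranks across the voting bodies.  A toy sketch with made-up ranks (not the real votes):

```python
from statistics import mean, stdev

# One hypothetical contestant's rank from each of 12 voting bodies
ranks = [1, 3, 2, 5, 1, 4, 2, 3, 6, 2, 4, 2]

print(round(mean(ranks), 2))   # 2.92 -- average rank across raters
print(round(stdev(ranks), 2))  # 1.56 -- how much the raters disagree
```

A competitor everyone ranks in the same spot gets a small SD (like Ravaillacz); one the raters split on gets a big SD (like David Lindgren).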

Overall, it seems that Robin Stjernberg was pretty safe in his win, though perhaps in the future the British shouldn't get to vote on anything relating to music.  At least if that music is crazy Swedish music.

Sing us out, Mr. Stjernberg!





Wednesday, March 6, 2013

Cameras and the nature of noise

I'm pretty happy here - today we get to talk about two things that I really enjoy: cameras, and randomness.

With the proliferation of digital cameras and digital images it is very likely that at least some of you have an incorrect image or concept in your head when you hear the word 'noise' in the context of pictures.  That incorrect image may not be one of noise, but of pixelization. 

For example, let's start with a standard picture to which we can compare all others we talk about.  That picture will be this:


For reference, this is 3456x2304 pixels (8 megapixels), tripod-mounted, 200mm, 1.3 sec at F6.3, ISO100.

There's a lot of information there, and one of the things I'm looking forward to today is explaining the majority of it to you.  

Pictures - be they analog or digital - are made when light is focused onto a surface by a lens for a specified amount of time.  In traditional photography that surface is a frame of film.  In digital photography that surface is a charge-coupled device, or CCD (or something like it). 

It's easier to talk about resolution when it comes to digital images, so we'll start there.  The first number I tossed out (3456x2304) is the number of pixels in the above image - width first, then height.  Multiplying the two clocks in just below 8 million, which is where the concept of 8 megapixels (MP) comes from - it's how many pixels make up the total image.  
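The arithmetic, for anyone who wants to check it:

```python
width, height = 3456, 2304
pixels = width * height
print(pixels)                        # 7962624
print(round(pixels / 1_000_000, 2))  # 7.96 -- marketed as "8 megapixels"
```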

If you zoom in on the above image you're going to see some pretty true lines - this number of pixels gives very good resolution for defining objects.  Pixelization occurs when there aren't enough pixels to adequately define a line.  We can demonstrate this by taking the exact same image and reducing its size.  First, let's take it down to 200x133.  That's not even quite 1 megapixel:


If you look at the hard edges of some of the dice (especially the white one in the center with a 2 showing), you can start to see how the edges are beginning to look more like steps than hard lines.  This is because there aren't enough pixels spread across the image to 'smooth' that line.  This will be less apparent on true horizontal or vertical lines and worst on lines angled 45 degrees to the sides. 

We can make this worse to really illustrate the point - here's the same image taken down to 20x13:


When you cut the number of pixels in half - in any given horizontal or vertical line - the new pixels are created by averaging two pixels into one.  You can see that happening here, to large effect.  Each pixel no longer describes a really small area of the photo (think grains of sand in the first image), but a wide section of it.  This is pixelization.  It is not noise - it is a lack of resolution.  When you see little staircases in your pictures (granted you're not taking pictures of little staircases), your problem is image size.  For most of us, something like 8MP is more than enough.
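Here's a minimal sketch of that averaging (grayscale values, 2x2 blocks - real resizers use fancier filters, but the idea is the same):

```python
def downsample_2x(img):
    """Halve a grid of grayscale pixel values by averaging 2x2 blocks.
    Assumes even dimensions."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            total = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

# A hard black-to-white edge gets smeared into a gray step:
img = [[0, 0, 0, 255, 255, 255],
       [0, 0, 0, 255, 255, 255]]
print(downsample_2x(img))  # [[0, 127, 255]]
```

The crisp 0-to-255 edge becomes a 0-127-255 ramp: that softening of detail is exactly what you see in the shrunken dice images.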

One of the other things you might recognize in this picture is that the only lines that can be defined in any reasonable way are those that are horizontal or vertical.  At this degree of pixelization the ability to meaningfully describe 45 degree angles is practically non-existent.

I mentioned above that a picture is made when light is focused onto a surface for a specified amount of time.  This takes us to the next point - the lens. 

If you're reading this article you're using some lenses - your eyes.  Some of you might be using some extra lenses in addition to your eyes in the form of glasses or contact lenses.  The notion is that these lenses are taking light from a wide area and focusing it down to a point or area.  If you want to learn more about lenses you can start here:

http://en.wikipedia.org/wiki/Lens_%28optics%29

Or just spend some time staring at this totally public domain image from the same article:



The number I quoted above (200mm) relates to the length of the lens.  Most people think (I did for a long time) that this relates to how long the actual lens you put on your camera is.  While that is somewhat true, it's not the actual measurement.  A 'mm' measurement on a camera lens is actually fairly complicated, but for our purposes is most directly related to how far the back element of the lens (the last piece of glass in the lens) is from the surface that you're focusing on.

If you have a zoom lens sitting around (okay, so not everyone does), you can check this by taking it off your camera and zooming it.  While the whole lens gets longer this is actually being done to move the last lens element away from the body of the camera. 

This measure of a lens isn't really important to our discussion, but now you know a little more about cameras, right? 

The last three numbers above are actually pretty important - they're the heart of exposure in photography. 

To remind you, they were "1.3 sec at F6.3, ISO100." 

Remember that a picture is made when a lens focuses light on a surface for a specified amount of time. 

The three things that drive exposure are shutter speed, aperture, and film/sensor sensitivity.  They are a balancing act - as you increase or decrease one you have to account for that change in some change in one or both of the other two.   

When you take a picture you're opening the shutter behind the lens to let light hit the sensor for as long as the shutter is open.  In the above example this was 1.3 seconds (one of the reasons I did this tripod mounted).  This is fairly simple.

Also behind (or often in) the lens is the aperture - a series of panels that close down to let less light back into the camera.  To demonstrate this, hold up your hand with your thumb closest to you and the rest of your fingers in a line behind it.  Now touch the tip of your index finger to the inside tip of your thumb to form a circle - think of giving someone a gesture of 'okay'.

That circle is a decent size, at least the largest you're going to be able to make with those two fingers.  Slowly move the tip of your index finger down the side of your thumb and into the crease between them.  As you do this you should notice that the circle formed by them is (more or less) staying a circle, but slowly getting smaller.

This is more or less what the aperture is doing inside your camera.

The measure for aperture is usually represented as F'something' - in the above example it's F6.3.  This number is actually a ratio - it's the ratio of the mm length of the lens to the mm opening of the aperture.  Thus, as the size of the aperture opening gets smaller and smaller the F number actually gets bigger.  This is because the numerator of the fraction is staying the same and the denominator is getting smaller - the outcome is a larger number (think 1/4 vs 1/2, or 4/4 vs 4/2).

With some quick math, we can thus figure out (no one ever figures this out, ever) how wide open the aperture was for this shot: 6.3 = 200mm/x  ->  x ≈ 31.7mm  
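Or, in code form (the 200mm and F6.3 are the numbers from the shot above):

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """F-number = focal length / aperture diameter, so just invert it."""
    return focal_length_mm / f_number

print(round(aperture_diameter_mm(200, 6.3), 1))  # 31.7
```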

The higher the aperture (F) number, the less light is getting into the camera.  This means that the shutter has to be open longer.  Exposure is balanced in this way - the more light the aperture lets in the shorter the shutter is open, and vice versa.  That's not the whole story, though.

We're almost to the punchline, by the way.  The last number is ISO100, and it translates to the sensitivity of the medium.  In traditional photography this is the sensitivity of the film surface to light - achieved by packing more light sensitive chemicals in the film frame.  Each roll of film is a particular sensitivity and can't be changed on the fly.  In digital photography this is the sensitivity of the CCD to light - achieved by...well, turning up the sensitivity of the CCD to light.  One of the advantages of digital imagery is that this can be easily changed on the fly at any time. 

Most digital cameras range from an ISO of around 100 to somewhere around 1600 or 3200.  Some make it up to 6400, but the scale follows doubling rules - that is the jump from 100 to 200 is the same as from 400 to 800 or 3200 to 6400.  
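Since the scale follows doubling rules, the whole ladder of standard ISO values falls out of a one-liner - a quick sketch:

```python
# Each 'stop' of ISO doubles the sensitivity, so the standard values
# are just a base value times successive powers of two.
def iso_ladder(base=100, stops=6):
    return [base * 2 ** i for i in range(stops + 1)]

print(iso_ladder())  # [100, 200, 400, 800, 1600, 3200, 6400]
```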

Like I said, if you want to know a whole lot more about CCDs you can start on that wikipedia page.  They're pretty cool, but complicated enough that I'm not going to get into them too deeply here.

For our purposes, we're going to treat them as electronic devices with a number of holes that photons can fall into.  What are photons?  Well...uh, pieces of light.

If you think of the first image we looked at we know that there are something like 8 million pixels in the image.  As long as the lens is focusing things adequately, that means that the picture can account for light coming from (being reflected by in this case) around 8 million sources.

Electronics are imperfect.  Many are designed to operate in particular tolerances.  My camera's CCD may very well be rated for use at ISO100.  Pushing it higher than that - again for our purposes - can be thought of as a way of overclocking it.  More specifically, sacrificing accuracy for power.  

You see, there are error rates at play here.  If we're treating the CCD as a photon collector then its job is to tell the onboard computer every time a photon passes through the gates at any pixel.  If you want this to be pretty accurate you need to make sure you're letting in what you are actually looking for.  This means setting higher thresholds for what is considered a photon. 

Think of it a different way.  At the lowest ISO settings you're setting pretty high standards in terms of letting things through.  Imagine 8 million tiny bouncers - complete with velvet ropes - each standing next to one of the pixels on the CCD.  They are responsible for making sure that what's getting through the gates is actually a photon, and not just a false positive.  At low ISO they are pretty thorough - they have a list and you had best be on it.  They're so thorough that they may stop some actual photons from getting in.  They're willing to sacrifice some false negatives to make sure that false positives are near zero.  

If you're in a situation with a lot of light this isn't a problem.  This might be because there's a lot of light in the scene, or because you're allowing a lot of light to get into the camera (long shutter speed, tripod). 

If you're in a situation with not much light (and no tripod), you might be willing to relax your draconian stance on false positives to make sure you're catching more of those photons you turned away as false negatives.  An ISO200 bouncer is twice as relaxed as an ISO100 bouncer, an ISO400 bouncer is twice as relaxed as that, etc.

At a certain point, the bouncer is barely doing his job at all.  For my camera that's around ISO1600.  He's letting in photons, but he's also letting in his friends and any riff-raff that happen to wander along.  It is a party, and everyone is invited.

Here's how it begins to play out:

ISO100:


ISO400:


ISO1600:  



FYI, all of these are now at a fixed aperture of F8 (which is about the sweet spot on my lens), so the shutter time goes down as the sensitivity goes up.

If you have a keen eye you might start to notice a bit more noise in some of the areas of the picture as the sensitivity goes up.  This brings us to an actual conversation of what that noise represents. 

There's some 'true' image here that represents what was actually sitting on the table in front of me while I took these pictures.  Each pixel should have a value relating to the number of photons that made it past its gate (I'm simplifying past the fact that this is a color photo), and the lower the ISO the closer to 'truth' this should be.  Put another way, the observed values should cluster more closely around the actual values at lower ISO settings.

When you turn up the sensitivity by raising the ISO you start getting measurements of things that aren't actually true - this is noise.  You begin to see it as a sort of 'digital grain' - if you weren't able to pick up on it above we can zoom in a bit to really make it clear:

ISO100:



ISO400:


ISO1600:


At this point you should be able to see the pixel-level noise that starts to manifest itself at higher sensitivity.  Things look fairly 'real' at ISO100, even at this level of zoom.  At ISO400 you start to see a bit of that 'digital grain', and at ISO1600 it is very pronounced.

What is it, though?

Well, it's noise, and it's (presumably) random.  If this image was gray-scale each pixel would be establishing a level between black and white, represented by a number.  We can think the same way for each pixel in a color image, except there are actually a few color channels being measured.

Let's say, though, that any given pixel in the scene is actually reflecting x quantity of light.  If that pixel is being measured by the sensor as x, then you're getting a highly accurate representation of the scene.  It's more likely that there's some error present in your measurement, that is:  observed = true + error

That error can be positive or negative by pixel, and again should be fairly random.  The less you are trying to push the sensitivity the more accurate each pixel can be - the closer observed will be to true.  That's why the image at ISO100 looks fairly 'true' - the bouncer at this level is providing a good deal of scrutiny and making sure things are what they seem.

The reason the image at ISO1600 looks 'grainy' is because these error bars increase with increased sensitivity.  If the magnitude of error is higher, then your observation (the CCD's observation) of any given pixel is going to tend to be less 'true' - farther away from x on average. 

If you're particularly inclined, you can imagine a normal distribution around this true value x.  The higher the sensitivity, the flatter and wider this distribution.  You're much more likely to pull a number that's highly incorrect at ISO1600 than you are at ISO100.
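If you'd rather see that flattening-and-widening in numbers than in pictures, here's a small simulation.  The sigma values are invented for illustration - they're not measurements from any real sensor:

```python
import random

random.seed(42)

# Each read of a pixel returns its 'true' value plus normally
# distributed noise; a higher ISO means a wider distribution.
def read_pixel(true_value, sigma):
    return true_value + random.gauss(0, sigma)

def observed_spread(sigma, x=128, n=10000):
    # Root-mean-square distance of the observations from the true value
    samples = [read_pixel(x, sigma) for _ in range(n)]
    return (sum((s - x) ** 2 for s in samples) / n) ** 0.5

# Pretend ISO100/400/1600 correspond to noise sigmas of 1, 4, and 16
for iso, sigma in [(100, 1), (400, 4), (1600, 16)]:
    print(f"ISO{iso}: typical miss from 'true' ~ {observed_spread(sigma):.1f}")
```

The typical miss grows right along with the sigma you feed in - that growing miss is exactly the 'digital grain' in the zoomed images above.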

When you look at the ISO1600 image, you're seeing individual pixels because there's a contrast emerging between adjacent pixels.  Contrast is simply the magnitude of the difference between two areas - if you're looking to define a difference between black and white, or lighter or darker, contrast is great.  Flat-screen TVs often let you know what their contrast is, something displayed like 10,000:1 - this means that the whites that the TV is capable of producing are 10,000 times brighter than the blacks it is capable of producing. 

The contrast you're achieving at a pixel level in the ISO1600 image is partly false contrast, however.  The color and luminosity of one of the dice at any given location is actually pretty similar to the area (pixels) directly adjacent.  The reason that you're seeing a difference is because noise is being distributed randomly and making some pixels lighter and some pixels darker.  When a pixel is randomly made lighter than those around it you notice it much more than if it was similar to those pixels around it.  It looks like a small bit of light surrounded by darker area.   

This is why you're seeing 'grain' - what you're seeing is deviations from 'true' at the pixel level.  
 
All said, there's still an assumption there that this error and noise is randomly distributed.  Because of this, there's a way that you can have the best of both worlds - high sensitivity but accurate observation.

It may be a fun thought experiment to stop reading here for a second to see if you can use the information provided to independently come up with a way to remove noise in images like those at ISO1600.

It has to do with averaging.

If we assume that the noise at each pixel is randomly distributed then the best guess for the actual 'true' value for any pixel should be the average information for that pixel across multiple images.  This is also where a tripod is necessary - if you take a number of pictures of the same thing (in this case I did around 70, but that's way overkill) you can average them to find a best guess for the 'true' value of that pixel.
   
There are actually some good tools available to do this - the one I use is called ImageStacker.  An image is really just a lot of information relating to the value of each pixel in the image.  In a grayscale image that has to do with gradients of black and white, and in a color image it relates to the gradients of a few different colors. 

Basically, though, you can conceptualize an 8MP digital image as 8 million numbers telling you the state of each pixel (again, more if you want to consider color).  It's easy enough to take 8 million averages using today's computers, and that's what programs that do this are doing.
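Here's a toy version of that averaging in Python - a 'scene' of five pixels instead of 8 million, with made-up values and noise, but the same idea a stacking program applies at full scale:

```python
import random

random.seed(1)

true_scene = [10, 50, 128, 200, 240]  # made-up 'true' values for 5 pixels

def noisy_frame(scene, sigma=16):
    # One exposure: every pixel picks up its own random error
    return [p + random.gauss(0, sigma) for p in scene]

def stack(frames):
    # Average each pixel position across all the frames
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

single = noisy_frame(true_scene)
averaged = stack([noisy_frame(true_scene) for _ in range(70)])

def worst_error(img):
    return max(abs(a - b) for a, b in zip(img, true_scene))

print(worst_error(single), worst_error(averaged))
```

The averaged frame's worst pixel error should come out far smaller than the single frame's - with random noise, the per-pixel error shrinks by roughly the square root of the number of frames averaged.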

Perhaps the result would speak best for itself.

Here's the full single image at ISO1600:

 
And here's 70 images at ISO1600, averaged into one image:


Again, the best picture is probably painted at a high zoom where noise would be the most apparent.  Here's the zoom on a single image at ISO1600:


And here's the zoom on 70 images at ISO1600, averaged:


For comparison, here's also again the zoomed image at ISO100:


Obviously, the average is quite a bit closer to the single ISO100 image.

Keep in mind as well that the image averaged from 70 images is NOT 70 times larger as a file.  It's still just one image, and is the same size as any one of the images that were used to produce it.  What's being thrown away in the process isn't image information, but redundancy and noise.  The 'true' information that exists in every picture is that which is shared - the redundancy across those images is used (in effect) to determine what is 'true'.  That - again - is done through simple averaging.

The fact that it works demonstrates that this noise is in fact randomly distributed - if there was a bias one way or another the averaged image would have a luminosity or color shift when compared to the single image at ISO100.  It does not.  In fact, they're fairly indistinguishable to the eye.  What I'm noticing when I really look is that the ISO100 image actually has a bit more noise than the average.  For instance, take a closer look at the green d6 in the lower right corner.    

Hopefully - if you're still with me - you understand a whole lot more about how cameras work.  Oh, and maybe you have a better understanding of random error.  Or maybe you just want to roll some dice.  I'd be happy with any of those outcomes.

And for those that are just wondering if they can have a full size example of all these dice, here you go:


Wednesday, February 27, 2013

OCD of National Proportions: Or How I Learned to Stop Worrying and Love Big Round Numbers

I'm going to make an assumption today that the majority of you reading this have spent your entire lives in a world where the United States of America consisted of 50 states.  I have, and it's always been one of those things that stands out in the back of my mind.

It just seems too convenient - 50 is just such a nice number, and graphical representations of the states (e.g. the fifty stars on the flag) are just so nicely symmetrical. 

I've always wondered what it would take for the US to add more states from the very adequate list of territories (e.g. Puerto Rico, Guam, The Virgin Islands, American Samoa...Guam, etc), as this would potentially disrupt some pretty important stuff (like having to make a new flag!). 

It does seem a bit suspicious that it's been so long since we've changed off of a nice round number, though.  It got me wondering whether there were other numbers that the United States stuck with for a while during the slow growth of the nation.  Let's take a quick look:


You can see that the United States has stuck with 50 states for a while now (about a quarter of the time there have been states), and they had been at 48 for a while as well.  If we're looking at numbers that end in 5 or 0 as those that fit the criteria of big and round, you can see that the US also spent a brief time (around 1900) at the 45 mark. 

Other than that, though, things seem to be pretty random.  There are some periods of time where the number of states was constant for a while, but none of those numbers seem to be big or round. 

Bit of trivia, by the way - all states except two have a well-established order in which they were admitted into the United States.  States admitted on the same day are often ordered by the sequence in which the president officially signed them into statehood.  President Harrison intentionally shuffled the paperwork for two states, signed them in a random order, and took the secret of that order to his grave.  Which two states?

Anyone?

North and South Dakota.

Before we move on to thinking outside of the states, I have another graph that I made to see what it would look like, and figured it was worth sharing.  It's the same graph as above, but takes into account that a number of states removed themselves from the US roster during the Civil War.  They weren't all readmitted immediately following the end of the war - they were readmitted over a period of a few years.

Anyway, here's what it looks like if we take the Civil War into account:


Beyond this, I started to wonder if other countries naturally settled into nice round numbers that help out with building their flags, etc.

Before we move on, let's have a quick quiz.  How many provinces does Canada have?  Does Mexico call their state-like things states or provinces?  How many of them do they have?

Let's start with Canada.  If you totally blanked on your quiz, they have 10 provinces.  Here's how that has played out historically:

    
You can see from this that Canada has actually spent more time at 10 provinces than the US has spent at 50 states.  Provinces have been added fairly slowly, but Canada has also added only a fifth as many as the US.  The bottom line would seem to be that they haven't made any changes in quite a while.  Right?

Well, no.  Some of you might be clever enough (or Canadian enough) to point out the fact that Canada has some territories that are much more like provinces than US territories are like states.  They're also contiguous, which helps to create an overall 'picture' of Canada that includes them.

Putting them on the graph as well produces this:

So Canada did spend a decent amount of time with 12 province-like things, but none of these numbers look as nice and round as 10 does.  They've also settled in - fairly recently - to a 13 province-like-thing system. 

By the way, if you live in the US and have no idea of what Canada did to change things up in 1999 you should spend a bit of time on the Googles.

Which brings us to Mexico.  Mexico has 31 states.  If you had no idea of that - or have no idea of what any of them might even be named - maybe you should head over to wikipedia for a bit. 

Here's the historical punchline:

Mexico spent a decent amount of time with 20 states (a nice round number), but only a little with 25, and jumped past 30 altogether!  Like Canada, they've also made more recent changes to their state makeup when compared to the US. 

I kind of feel that these graphs are interesting enough in and of themselves, but I just had to push myself a little farther.  What other countries are large and have a number of internal divisions? 

Who else is wondering how the graph for the People's Republic of China (PRC) would look? 

Well...messy.

If you know as much about China as you know about Mexico or Canada you may be unaware (as I will admit I was) of the number of different divisional concepts that the PRC has moved through in 60 or so years.

First off, the PRC started with some provinces already established from the prior several thousand years of civilization.  Most proximal are those that were in place in the preceding Republic of China (the remnants of which are now confined to Taiwan).

The PRC groups all of their divisional concepts (like states, etc) under the larger heading of provinces, but one of the things on that list of province-level divisions is also called a province - the kind that's most like a state.  If we only look at the things that the PRC calls 'provinces' within that larger concept of provinces, we can start by making something like this:

  
If you're thinking about things too much from a US perspective you might be wondering if some provinces seceded or something.  Nope.  The PRC is simply a lot more likely than the US to shift things around and redraw - or divide or combine - provinces as they see the need. 

This might make things look as if not much goes on in terms of China adding/removing provinces, but things couldn't be further from the truth.  In fact, a ton of shuffling has gone on over the last half century. 

Let's add in all the other things that are in that big category of provinces.  This includes 'Greater Administrative Areas', 'Provinces', 'Autonomous Regions', 'Municipalities', 'Special Administrative Regions', 'Regions', 'Territories', and 'Administrative Territories'. 

If you're interested, the best source of information that I could find was actually on wikipedia (yes, yes, sources, etc), and specifically the article here:

http://en.wikipedia.org/wiki/History_of_the_political_divisions_of_China#List_of_all_provincial-level_divisions_since_the_proclamation_of_the_People.27s_Republic

It's also the first time on this blog that I've been unable to get all my numbers to match up.  The numbers work out against source for that last graph, but they don't for the next two.  They're close, but all the double-checking I've done has not revealed the small mistake I may have in there. 

That said, if you check out that wikipedia page you can see how difficult it is to systematically track the progression of  these state-like things through all the different terminologies, as well as through all the mergers, dissolutions, and reinstatements.  At the end of the day, my take is that if I have any Chinese readers I would absolutely love to sit down and get some input on the last 60 years of your history.

You're waiting for the chart, so here it is:

   
You can see that there were a lot of changes to things in the 50s, as there seemed to be a drive to simplify some of the naming and state-like things.  To really paint this picture I think it's more interesting to look at the same graph set up like this:



The blue line, then, is really the difference between the red and yellow/orange lines.  You might be tempted to say that the PRC hasn't changed anything in a while, but keep note of the different scale of time we're talking about.  Like Canada, they were doing things up into the 90s.

It's hard to imagine a US where this much shuffling took place, but it's a good example of a country not getting stuck on one number or another.  Mexico is similar with their 31 states.  Canada is interesting, as they seem to be pretty stuck on 10 provinces but willing to toss around territories all willy-nilly.  Perhaps my many Canadian readers can illuminate me on what makes their territories different from their provinces.

Perhaps someday the US will follow Mexico's lead and head on up to 51 states.  My advice?  Start working on 51 through 55 star flags right now so you can win the new flag competition post-haste when it's introduced.  Because seriously, isn't that the important part of all of this?

Wednesday, February 20, 2013

On the things that fall out of the sky

If you've been following the news of the last week you might have heard about the recent meteor that was seen (and paparazzied to death) over Russia.  While it's only somewhat statistical (okay, minimally statistical), I figured it would be a fun topic to talk about this week. 

On second thought, let's get some stats out of the way up front.  You may have heard reporters talking/writing about this meteor impact as a one-in-one-hundred-year event.  Despite the difficulty of determining these sorts of meteor impacts over oceans (70% of the globe) before we had a satellite network, or of determining these impacts over non-explored portions of the globe before, say, the 1500s (90%?), let's say that this number is correct based on our 100 years or so of good global record-keeping.  I have literally read articles that paint this recent strike as a positive thing, on the logic that something like this shouldn't happen for another hundred years now that we've had our strike for this hundred years.

If you've been reading this blog for a while I'm not even going to insult you by explaining how much is wrong with that sort of assumption.  Let's get back to the fun stuff - meteors.  (If you're still wondering why there's so much wrong with such a claim just google 'statistical independence')    

There are a few reasons that I find it fun to talk about meteors.  First, meteors are cool.  That might be enough for some of you.  Beyond that, though, I think there are a lot of great points to learn about meteors and the atmosphere and the speeds of things, etc.  Did I mention that meteors are cool?

Before we get to deep into it, if you haven't seen much footage of the actual impact, you can find a whole bunch of videos of it here:

http://say26.com/meteorite-in-russia-all-videos-in-one-place

There's a lot of information to take away from these videos, actually.  It's pretty fantastic that so many Russians have dash cameras on their cars or trucks - apparently a big part of it is simply the fact that having a documentation of your driving helps out if you find yourself in an accident or pulled over for something you may or may not have done.  Makes fighting that traffic ticket in court a whole lot easier.

A pretty good view of what's happening can be found in the first video on that page, or on YouTube here:

http://www.youtube.com/watch?feature=player_embedded&v=tkzIQ6JlZVw

One of the first things you should take note of is that you see what looks to be a pretty energetic entry, but you don't hear anything.  Are you thinking it's because the camera is in a car?  Well, no.

This is something that has been misrepresented for the better part of the last century - basically as long as we've been matching sound to video.  Have you ever seen video of an atomic bomb blast from back in the 40s or 50s?

You might be able to picture in your mind what this even looks like - the bright flash and then the rising mushroom cloud.  Along with the bright flash you probably also remember a pretty loud explosion.

If you've been to a track event started with a pistol and sat way up in the stands, been to a fireworks display from a distance, or watched a thunderstorm for a while, you might understand that this doesn't really make a lot of sense.  In fact, there are exceptionally few surviving clips of atomic tests with correctly matched audio - most footage uses stock explosion noises with the sight and sound of the explosion matched.  If you're interested, more information on the topic can be found here:

http://www.dailymail.co.uk/sciencetech/article-2174289/Ever-heard-sound-nuclear-bomb-going-Historian-unveils-surviving-audio-recordings-blast-1950s-Nevada-tests.html

The reason that you hear thunder some time after you see lightning is due to the differential speeds of light and sound.  Light is...pretty fast.  It's usually expressed in meters per second, and every second it actually goes a whole lot of meters.  With some rounding for simplicity (don't worry I'll use actual numbers for calculations), it's around 300,000,000 meters.  Every second.

Sound, on the other hand, is actually pretty slow.  In the same second that light can travel around the Earth seven or so times, sound doesn't even make it down to the corner store.  It takes over four seconds for sound to make it a mile - if sound was running a 5K it would put in a time of just under 15 seconds.

In that same time, light could run over 100,000 marathons.

[One important aside before we continue - the speed of light is usually given as the speed of light in a vacuum.  Light does travel slightly slower in atmosphere, but the difference is small enough to be negligible in most cases.  The speed of sound in most cases is given at sea level - these are the numbers I've used above.  The speed of sound does decrease as you travel up through the atmosphere, but - despite the fact it could be applicable here - I'm not going to go into that sort of detail.  This will be left up to the ambitious reader.]

This might have been a lot of setup for something you're already pretty aware of.  If something that produces noise happens a distance away from you, the light from the event will reach you before the sound does.  That means that you'll see the event before you hear it.  How much earlier?  Well, it depends on how far away the event is.

If you're sitting a dozen or so feet away from your television, the difference between the light and sound coming at you is small enough that you don't notice any difference.  That is, the sound seems to match the image.

If you were watching a drive-in movie screen through binoculars from a mile away and relying on sound produced at the screen itself you'd start to notice that things weren't matching up.  How far off would the audio be at that distance? 

In the extreme, if we set off an atomic bomb on the moon, how out of synch would the sound be at that distance? 

Well, the first question can be answered, and we'll do it below.  The second is a trick question, because sound (unlike light) needs a medium to propagate through, like air or water.  There's no air (or water) between us and the moon (or on the moon), and so no sound waves would propagate from the explosion.    

To answer the first question we can start by taking a look at the time it takes sound and light to travel a range of distances. 


It's possible that you're asking: where's the line for light?  It's at the bottom - the red line isn't an axis, it's the time it takes light to travel these distances.  Compared to sound, the difference between light traveling 1 mile and light traveling 75 miles is fairly negligible.  Light can travel both of these distances in a fraction of a second.

It takes sound about 4.7 seconds to travel 1 mile.  If you've ever heard the old rule that you can count the seconds between seeing lightning and hearing thunder, then divide by five to figure how many miles away the strike was, you now see why that makes sense.  For the distances at which you see and hear lightning the rounding doesn't really cause any major problems. 
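The arithmetic behind that graph is simple enough to sketch in Python (using the sea-level speed of sound and the vacuum speed of light, the same simplifications as above):

```python
SPEED_OF_LIGHT = 299_792_458   # m/s, in a vacuum
SPEED_OF_SOUND = 343           # m/s, at sea level
METERS_PER_MILE = 1609.34

def lag_seconds(distance_miles):
    # How far behind the light the sound arrives from a given distance
    d = distance_miles * METERS_PER_MILE
    return d / SPEED_OF_SOUND - d / SPEED_OF_LIGHT

for miles in [1, 5, 19, 75]:
    print(f"{miles} miles: sound lags by {lag_seconds(miles):.1f} seconds")
```

The light term is so small that dropping it entirely wouldn't change any of those printed numbers - which is exactly why the red line hugs the bottom of the graph.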

Why am I going on about all this when we should be talking about meteors?  Well, the fact that so many Russians have dash cameras means that we have a huge supply of data available to us.  We can even find a few examples where the incoming meteor is pretty close to directly overhead.  Here's a good example:

http://www.youtube.com/watch?feature=player_embedded&v=odKjwrjIM-k

Since we're looking at a dash camera we also have a second-by-second time stamp, which is great.  You can see that the person who uploaded this clip cut out a part of the middle - the time between seeing the meteor and actually feeling the shock wave.  We can figure out the difference here by taking note of when two events occur.

The first is the place in the video where the meteor seems to be directly overhead and most energetic - right around 43:05.  The second is when the shock wave hits and knocks some snow off the surrounding buildings.  It's seconds later in the clip as edited, but the time stamp reveals that it was just about a minute and a half later, at 44:35.

Imagine you were watching a thunderstorm and saw some lightning.  A minute and a half later you heard the accompanying thunder.  You might not even link these two events in your mind - you might associate the thunder with more recent lightning strikes that you may have missed.

Well, unless the thunder sounded like this:

http://www.youtube.com/watch?feature=player_embedded&v=w6uOzFo2MQg#!

From the numbers behind the above graph we can figure out what a minute and a half lag time means - turns out it's around 19 miles.
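That figure comes straight from the time stamps - here's the sanity check, using the same sea-level speed of sound as the graph:

```python
SPEED_OF_SOUND = 343       # m/s, at sea level
METERS_PER_MILE = 1609.34

lag = 90  # seconds between 43:05 (flash overhead) and 44:35 (shock wave)
distance_miles = lag * SPEED_OF_SOUND / METERS_PER_MILE
print(round(distance_miles, 1))  # ~19.2 miles
```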

I can hear some of you yelling already, even through the internets.  You're using the speed of sound at sea level!  Yes, yes I am.  I told you that the speed of sound slows as you travel up in the atmosphere, and this meteor was obviously not at sea level.  This means that our estimate of 19 miles will be off, though we at least have a decent ballpark estimation. 

I can also hear a much smaller contingent of you yelling that things are a lot more complex than that and shock waves have different profiles than sound waves.  Well, yes.  I was hoping to keep this pretty simple to get across a point, but if you're so inclined you can learn a bit more here:

http://www.fas.org/sgp/othergov/doe/lanl/pubs/00326956.pdf

and here:

http://en.wikipedia.org/wiki/Shock_wave

19 miles is a bit of a distance.  The fact that damage was produced even at this distance is a testament to the amount of energy released from this particular meteor.  Current estimates have placed the energy released on the scale of nearly half a megaton of TNT (just under 500 kilotons).  Everyone is comparing that to the explosion of "Little Boy", the atomic bomb dropped on Hiroshima, which checked in at 16 kilotons. 

This brings us to some facts about meteors and the atmosphere that are a little less stats-y (not that we've been stats heavy to this point).   

Let's start with some simple stuff.  We've been using the term meteor here, and the use of that term actually carries with it some useful information. 

A piece of debris that's simply floating around in space isn't a meteor - it's a meteoroid if it's fairly small (roughly up to the size of a small car), an asteroid if it's a bit larger (up to the size of a small moon), and a planetoid if it's much larger (that's no moon!).

If any of these things is composed of ice - enough that it grows a tail - it is a comet.

Once one of these things comes in contact with the Earth's atmosphere (or any atmosphere, really) it becomes a meteor.  Thus, what was seen in the Russian sky was a meteor.  There are reports that some fragments of the meteor may have been found - if any parts of a meteor survive to the ground those fragments become meteorites.

You've also clearly seen the trail left in the sky by the meteor - a trail that persisted for some time.  I want you to think about two questions for a moment.  The first is why a meteor (or a space shuttle) heats up when it enters Earth's atmosphere.  The second is what causes a trail to be left in the sky behind a meteor such as the one filmed over Russia.  Think about both of these for a minute or so.

Okay, so what are you thinking?

Your ideas on this are probably again a bit polluted by a few sources.  Mainly movies and TV, I'd bet.

The first question is quite a bit easier, but also one of those that seems to be fairly misunderstood.  You might be thinking that meteors (or the space shuttle, etc) heat up on entry (or reentry) due to friction with the air.  Friction is actually a very small part of this process - what's really happening is that the air in front of whatever is entering the atmosphere is being compressed.  This is simply due to the fact that air can't move out of the way of an object fast enough once the object reaches certain speeds.  Since it can't get out of the way, it gets compressed.

If you've ever sat around and figured out how your refrigerator works (I would suggest it as a fun thought experiment as well) you might recognize what's happening as a bit of a reverse of that process.  As air is compressed it becomes hotter.  When a lot of air is compressed really quickly it becomes really hot.  This is what's heating meteors and space shuttles, etc. 
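If you want to put a (very rough) number on that compression heating, here's a quick sketch using the ideal-gas stagnation temperature relation.  The inputs are my own assumed round numbers (upper-atmosphere temperature, local speed of sound) rather than anything from the post, and real entry flows dissociate, ionize, and radiate energy away, so treat the output as a cartoon upper bound, not a prediction:

```python
# Back-of-the-envelope compression heating, via the ideal-gas
# stagnation temperature relation: T0 = T * (1 + (gamma - 1)/2 * M^2).
# Ambient temperature and speed of sound are assumed round numbers.
gamma = 1.4          # ratio of specific heats for air
T_ambient = 220.0    # K, assumed upper-atmosphere temperature
a = 300.0            # m/s, assumed local speed of sound aloft

v = 11 * 1609.34     # entry speed of ~11 miles/s (from the post), in m/s
mach = v / a
T_stag = T_ambient * (1 + (gamma - 1) / 2 * mach ** 2)

print(f"Mach number: {mach:.0f}")
print(f"Naive stagnation temperature: {T_stag:,.0f} K")
```

Even with all those caveats, the point stands: compressing air at Mach ~60 dumps an enormous amount of heat into it, no friction required.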

Looking to kill a bit more time?  Randall Munroe of XKCD has a really cool post on what would happen to a diamond meteor upon entry at different speeds here:

http://what-if.xkcd.com/20/

The space shuttle doesn't burn up on reentry due to some pretty sophisticated heat shielding, but meteors aren't so lucky.  This heat causes differential stress on parts of the meteor and it begins to burn up and break up.  This is why many meteors never become meteorites.

This leads to the second question, which I'm going to admit I don't have a solid answer on - the internets don't seem to address it.  I'm suspecting that many of you are thinking that the trail behind a meteor is a smoke trail.  I can see how this idea would get planted in our minds - movies and television have given us plenty of examples of things on fire plummeting toward the ground with smoke trailing behind them.  Nicolas Cage's stellar performance in Con Air, anyone?

Like I said, I'm having trouble figuring this out from any actual sources, but it doesn't seem that a meteor streaking through the sky is the best place for normal combustion to take place.  Moreover, you'll notice that trails behind meteors are (from what I've found) universally white - combustion of the varying components of different meteors would presumably produce smoke in at least occasionally darker shades.  You know, the sort of shades you're probably familiar with from movies like Deep Impact, where a plume of dark smoke trails the meteor as it streaks through the atmosphere.  Example here:

http://www.top10films.co.uk/img/deep-impact-disaster-movie.jpg

What is the alternative?  Well, cloud formation.

If you remember your grade school science fairs you might be familiar with the old 'make a cloud in a bottle' experiment.  If not, a good example is here:

http://weather.about.com/od/under10minutes/ht/cloudbottle.htm

Much of cloud formation relies on the compression and decompression of air that contains at least some water vapor and some dust particles.  We've already discussed how a meteor compresses air (air that is then free to decompress in the meteor's immediate wake) - as long as local humidity is above zero there's water vapor on hand, and the meteor itself produces a reasonable share of dust through the breaking/burning up process.

Similar to how airplanes in the high atmosphere form contrails, it seems that meteors might be leaving a trail that's nothing more than clouds formed by their fairly violent passage through the air.  Like I said, I can't find this on any of the intertubes, so it'd be interesting to have a discussion if people have other thoughts.

One other thing before we go - in all the coverage of this meteor strike I've only seen one or two articles that discussed the angle of entry of this meteor.  You can probably figure it out from the name, but angle of entry is just the angle at which something enters the atmosphere.  The extreme ends would be directly perpendicular to the ground (think Felix Baumgartner skydiving from space) and directly parallel to the ground (which might even cause an object to deflect off the atmosphere - think satellites in orbits that are allowed to decay).

This meteor was much closer to the second - you can tell pretty clearly from the videos of it that it had a shallow entry (it has been estimated at less than 20 degrees).  The angle of entry is important because it is one of the main determinants of how long something spends in atmosphere before it reaches the ground.  The numbers that I've seen seem to indicate that this particular meteor spent over 30 seconds in the atmosphere before it broke up.  It got those 30 seconds only because it was traveling at such a shallow angle - imagine if it had hit the atmosphere with an angle of entry closer to Felix Baumgartner's.

Well, to do some math on this we need to decide where the edge of space is.  As you travel up in the atmosphere, air gets thinner, and eventually there's no air.  It's not a hard edge, though, it's a slow gradient, so it's tricky to decide when a small amount of air is different from no air.  Most estimates that I've found seem to be in the 75-100 mile range.  This is good enough for some quick estimation.  

The Russian meteor entered Earth's atmosphere at a speed of around 11 miles per second.  If it had taken the shortest path through the atmosphere (straight on, perpendicular to the ground), it would have reached the ground in somewhere around 9 seconds (if we're calling space 100 miles up).  If we go with 75 miles to the edge of space we're looking at closer to 7 seconds.  Sure, the air is slowing it as it descends, but 9 seconds is a lot less than 30.  I'll also concede that traveling at a steeper angle through the atmosphere means a quicker pass through the pressure gradients and might have caused breakup faster.
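That arithmetic generalizes nicely: if we ignore deceleration (as above), time in atmosphere is just atmospheric depth divided by the vertical component of the velocity.  A minimal sketch, using only the rough numbers already quoted (75-100 miles of atmosphere, ~11 miles per second, entry angles of 90 and 20 degrees):

```python
import math

# Time to traverse the atmosphere as a function of entry angle,
# ignoring deceleration.  An angle of 90 degrees means straight down;
# shallower angles mean a longer slanted path through the same depth.

def seconds_in_atmosphere(depth_miles, speed_mi_per_s, angle_deg):
    """Path length = depth / sin(angle); time = path length / speed."""
    return depth_miles / (speed_mi_per_s * math.sin(math.radians(angle_deg)))

speed = 11.0  # miles per second

for depth in (75, 100):
    for angle in (90, 20):
        t = seconds_in_atmosphere(depth, speed, angle)
        print(f"{depth} mi atmosphere, {angle} deg entry: {t:.1f} s")
```

The straight-down cases reproduce the 7-9 seconds figured above, and the 20-degree cases land in the neighborhood of the 30-ish seconds this meteor actually spent in the atmosphere (with deceleration and rounding making up the difference).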

Still, if this meteor had entered at such an angle there's a real chance that it may have impacted the ground before it broke up.  The energy released in atmosphere was enough to blow out windows and doors 19 miles away - if this energy were transferred all at once to a stationary object (like the ground), well, then we'd really have something like Hiroshima on our hands.  The fact that some portion of the population says 'well, at least we don't have to worry about it for another 100 years' flies right in the face of what we should actually be taking out of this.

Anyway, that seems like as good a place as any to leave you with something to think about.  Thanks for all the dash cams, Russia.