In 1864, the British philosopher Herbert Spencer used the phrase "survival of the fittest" to describe Darwin's concept of natural selection. It's not a bad phrase, really - the individuals in any population that are "fittest" - best suited to reproduce - are the ones most likely to reproduce successfully. If this is correct (and it is), we can expect "fitness" to be a very important concept in evolutionary biology. It is, of course, and John Wilkins has already provided a good explanation of the concept in general. I'm going to look at something a little more specific - how we can measure fitness.
If we want to be able to use "survival of the fittest" as a scientifically relevant statement, we're going to have to be able to figure out just who the "fittest" are in any given population. That means that we're going to have to measure fitness in some way or another. There are two main ways of doing this - we can measure fitness in absolute terms, or we can measure the fitness of a particular variant of a gene (genotype) relative to other variants of the same gene in that population. Absolute fitness and relative fitness measure different things, and are not interchangeable or comparable. Each has advantages and disadvantages, and which measure is more appropriate depends on what it is that you want to know.
Let's start with absolute fitness. Absolute fitness generally refers to the contribution that a particular genotype makes to the next generation. There are actually a couple of different ways to measure this, and here again each has advantages and disadvantages.
One method of measuring absolute fitness is to look at the viability of each genotype. In plain language, this means that we look at the fraction of organisms born that survive to maturity. Let's look at the example I used a couple of days ago:
| (standard form) | Gen 1 | Gen 2 | Gen 3 | Gen 4 | Gen 5 |
|-----------------|-------|-------|-------|-------|-------|
| # of offspring  | 90    | 90    | 90    | 90    | 90    |
| # at age 1      | 18    | 18    | 18    | 18    | 18    |
| # at age 2      | 9     | 9     | 9     | 9     | 9     |
In this example, the standard form has an absolute fitness (measured in terms of viability) of 0.1, and the mutant form has an absolute fitness of 0.2. In verbal terms, one out of every ten standard form individuals survives to maturity, and two out of every ten mutants survive to maturity. If absolute fitness is being measured in these terms, the measurement must fall between 0 and 1, and while it is theoretically possible for the value to be exactly 0 or exactly 1, it is extraordinarily unlikely that those values will ever turn up in a real situation. (If fitness = 0, none of the individuals born will survive to maturity, and the genotype will be extinct at the end of this generation. That happens, but it typically isn't very interesting from an evolutionary perspective. If fitness = 1, every single individual born with that genotype survives to reproduce. None of them get hit by meteors, fall into lakes and drown, or are such complete dorks that nobody will mate with them. This just doesn't happen, with the possible exception of lab experiments.)
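As a quick sketch (the function name is my own, not any standard one), the viability calculation for the numbers in the table looks like this:

```python
def viability(n_offspring, n_at_maturity):
    """Absolute fitness measured as viability: the fraction of
    offspring born that survive to maturity. Always between 0 and 1."""
    return n_at_maturity / n_offspring

# Numbers from the table above:
standard = viability(90, 9)    # one in ten survives -> 0.1
mutant = viability(90, 18)     # two in ten survive -> 0.2
```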
It's important to note, when looking at this type of fitness measurement, that it's impossible to know exactly how good (or bad) a particular fitness value is unless you also know something about the number of offspring born per reproductively active individual. A genotype with a fitness of 0.01 (one out of every hundred individuals born survives to reproduce) is not going to be particularly good if individuals of that species only have ten offspring each. On the other hand, if the species is one where every pair that reproduces lays thousands of eggs, a fitness of 0.01 is going to be exceptionally good.
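To make that concrete (hypothetical numbers and function name): multiplying viability by the number of offspring each parent produces gives the expected number of a parent's offspring that survive to maturity, which is what actually determines whether a viability value is "good."

```python
def surviving_offspring_per_parent(offspring_per_parent, viability):
    """Expected number of a parent's offspring that reach maturity."""
    return offspring_per_parent * viability

# The same viability of 0.01 means very different things depending
# on how many offspring each parent produces:
low_fecundity = surviving_offspring_per_parent(10, 0.01)     # 0.1 per parent
high_fecundity = surviving_offspring_per_parent(5000, 0.01)  # 50 per parent
```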
This means that absolute fitness measured in terms of viability is good in situations where you want to know how much of a species' reproductive effort is lost before reproduction, or if you want to look at the effect that a particular genotype has on that figure. It's not good if you are interested in finding out if a particular form is going to spread through the population, because it doesn't tell you how many individuals are surviving to maturity per mature individual in the previous generation. It's also not that great a measure when comparing the fitnesses of different genotypes, again because it doesn't tell you how (or if) the fitness changes the makeup of the next generation.
A second way to measure the absolute fitness of a genotype is to look at how many offspring with the genotype survive to become parents themselves per parent with the genotype in the previous generation. That's awkwardly worded, I know, and it's easier to understand if we go back to the example in the table above. In that example, the average "normal" individual in Generation X is the parent of 1 of the parents in Generation X+1. In the same example, the average "mutant" individual in Generation X is the parent of 2 of the Generation X+1 parents. This means that the "normal" genotype has an absolute fitness of 1.0, and the "mutant" genotype has a fitness of 2.0.
This measure of fitness can, in theory, be any number greater than 0. If the fitness is less than 1, we can say that the genotype in question will be found in a smaller number of individuals with every passing generation. If the fitness is greater than 1, we can say that the genotype will be found in more individuals with every passing generation. If the fitness is exactly 1, we can say that the genotype will be found in the same number of individuals in every passing generation. Using this measure of absolute fitness is helpful in cases where we want to know if the genotype we are interested in is likely to still be around at some point in the future. It does not help in cases where we are interested in the proportion of offspring that survive to maturity, and it does not help us in cases where we are interested in how fit one genotype is in comparison to another genotype.
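Here's a sketch of the per capita measure using the fitness values quoted above (the function name and the particular parent counts are mine, chosen to be consistent with those values):

```python
def per_capita_fitness(parents_next_gen, parents_this_gen):
    """Absolute fitness measured as the number of offspring that become
    parents, per parent in the previous generation. Can be any value > 0."""
    return parents_next_gen / parents_this_gen

# Illustrative counts: 9 "normal" parents produce 9 parents in the
# next generation; 9 "mutant" parents produce 18.
normal = per_capita_fitness(9, 9)    # 1.0 -> numbers hold steady
mutant = per_capita_fitness(18, 9)   # 2.0 -> numbers double each generation

# Projecting forward: with fitness 2.0, 9 parents become
# 9 * 2.0**3 = 72 parents three generations later.
projected = 9 * mutant ** 3
```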
The two measures of absolute fitness that I just outlined both measure fitness in terms of what a genotype contributes to the next generation. Relative fitness examines something different. Instead of measuring what a genotype contributes to the next generation, relative fitness measures how one genotype's contribution to the next generation compares with another genotype's contribution to the next generation.
If we want to measure relative fitness, we are going to need to know the absolute fitness for the genotypes that we want to compare. Oh, and we need those absolute fitnesses to be measured the same way - you can't compare the absolute fitness of a genotype measured in terms of viability with the absolute fitness of a genotype measured in terms of per capita contribution to the next generation. And, if we are going to use the viability measurement of absolute fitness, we need to be sure that the genotypes we're going to be comparing all produce about the same number of offspring - otherwise, they're not really comparable. Once we have the absolute fitnesses, all we need to do is designate one of the genotypes as the reference point, and do some simple division - the relative fitness of a genotype equals the absolute fitness of that genotype divided by the absolute fitness of the reference genotype.
The next question, of course, is how do we choose which genotype to designate as the reference genotype? The simple answer is that it doesn't really matter, as long as we are clear about which we choose. Having said that, it is most common to use the genotype with the highest absolute fitness as the reference. This has a couple of advantages. First, having a convention for which genotype to designate as the standard has the advantage of consistency - it makes things easier when you have to read a lot of different things that use relative fitness. Second, setting the fittest genotype as the reference puts all of the measures of relative fitness into the same range of possible values (0 to 1).
Going back to the example, the "normal" genotype had an absolute fitness of 0.1 as measured by viability, and 1.0 as measured by per capita contribution. The "mutant" genotype had an absolute fitness of 0.2 and 2.0, respectively. If we designate the "normal" genotype as the reference, then the "mutant" has a relative fitness of 2.0. If the "mutant" is the reference genotype, then the "normal" genotype has a relative fitness of 0.5. In either case, the "mutant" is twice as fit as the "normal."
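The whole relative-fitness calculation is just the division described above; here's a sketch using the per capita values from the example (the helper name is mine):

```python
def relative_fitness(absolute, reference):
    """Relative fitness: a genotype's absolute fitness divided by the
    reference genotype's absolute fitness (both measured the same way)."""
    return absolute / reference

normal_abs, mutant_abs = 1.0, 2.0   # per capita values from the example

# With the fitter genotype as the reference (the usual convention),
# every relative fitness falls between 0 and 1:
normal_rel = relative_fitness(normal_abs, mutant_abs)   # 0.5
mutant_rel = relative_fitness(mutant_abs, mutant_abs)   # 1.0
```

Either choice of reference preserves the ratio: the "mutant" comes out twice as fit as the "normal" no matter which genotype we divide by.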
There's one final point to keep in mind now that we've looked at three different methods for measuring fitness: the three measures are totally independent of each other. Knowing a genotype's fitness based on one of those measures does not let you determine the fitness based on either of the other two. That means that it is important, when talking about fitness, to be clear about what method of measuring fitness is being used.