There are two features of science that I think a lot of people (myself included) find attractive.
One is that scientific representations of the world (theories and other theory-like things) give you powerful ways to organize lots of diverse phenomena and to find what unifies them. They get you explanatory principles that you can apply to different situations, set-ups, or critters.
The other is the empirical basis of much of our knowledge: by pointing your sense organs (and your mind) at a particular piece of the world, you can learn something about how that bit behaves, or about how it's put together.
Lately, I've been thinking about the way these two attractive features can pull a person in opposite directions.
The tension that gets discussed a lot -- by philosophers of science and by scientists themselves -- concerns what relationship is supposed to exist between scientific representations of the world and the empirical evidence scientists have amassed. Frederick Grinnell [1] explains the crux of the matter quite aptly:
Carrying out an experiment requires one to anticipate what the result will be like and to choose methods suitable to observe them. Stated in another way, the choice of methods limits the results that can be obtained. Results that appear to contradict expectations might indicate either that the hypothesis was wrong or that the choice of methods was inadequate. These alternatives have led to the commonplace adage not to give up a good hypothesis just because the data do not support it.
Henry H. Bauer [2] notes that many (though not all) chemists trust theory more than experiment:
A few years ago, a review article in Science listed many instances in which calculations had been right while experiments had been wrong: for the energy required to break molecules of hydrogen into atoms; for the geometry and energy content of CH2 (the unstable "molecule" in which two hydrogen atoms are linked to a carbon atom); for the energy required to replace the hydrogen atom in HF (hydrogen fluoride) by a different hydrogen atom; and for others as well. The author, H.F. Schaefer, argued that good calculations -- in other words, theory -- may quite often be more reliable than experiments ...
Some people look at claims like this and decide that scientists are horrible hypocrites, people who cling dogmatically to their pet hypotheses in the face of falsifying evidence. The reality is that experiments can be wicked-hard to set up and run. Even if you've got a good plan for getting at information about the phenomena, executing that plan successfully can take a lot of practice. Since the experiments usually have more moving parts than the theories (and are much more vulnerable to "butterfingers"), scientists are in the habit of ruling out experimental flubs before they toss out the hypothesis that the experimental result seems to contradict.
In the long run, of course, a theory needs something like experimental support. The way chemists can recognize instances where calculations were right and experiments were wrong is that later experiments agree with the calculations -- maybe because researchers figured out a better method of performing the necessary experiments, maybe because the experimentalists just got better at executing the same old protocols. When there's a disagreement between theory and data, scientists usually call for an explanation of the disagreement. Why are we entitled to treat the data with suspicion? What other empirical data might speak in favor of keeping the theory?
There are all manner of complications we could get into here (e.g., holism in theory-testing and Kuhnian theory-laden observation, just to name two), but scientists remain committed to the idea that their theories are accountable to the world they aim to explain.
However, this isn't quite the tension I've been mulling over. Instead, I've been thinking about how little scientific indoctrination it takes to shift someone to using theory to answer questions rather than, say, doing a quick experiment.
For example, lots of cookbooks call for boiling vegetables in lightly-salted water, and some of them go so far as to claim that you should add the salt because it raises the boiling point of the water. Adding about a teaspoon of salt (which is around 7 or 8 grams tops) to a quart of water (which is in the ballpark of 1000 g), the intro chem student can turn to the equation for the boiling point elevation of water to see if this claim makes any sense. 8 g NaCl = 0.14 mol NaCl = 0.27 mol ions in the 1 kg of water. Multiply that by 0.52, the molal boiling point elevation constant for water, and we get a whopping 0.142 C increase in the boiling point of the cooking water. Having the water boil at 100.142 C rather than 100 C hardly seems like it should make a difference to your potatoes ... and so, we conclude, the cookbook explanation is probably bogus.
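The back-of-the-envelope arithmetic above can be sketched as a few lines of Python; the quantities (8 g of salt, 1 quart of water taken as roughly 1 kg) are the same round numbers used in the text, not precise kitchen measurements:

```python
# Boiling point elevation for lightly-salted cooking water,
# using delta_T = Kb * m, where m is the molality of dissolved particles.

MOLAR_MASS_NACL = 58.44   # g/mol
KB_WATER = 0.52           # molal boiling point elevation constant for water, C per mol/kg
IONS_PER_NACL = 2         # NaCl dissociates into Na+ and Cl- (van 't Hoff factor)

def boiling_point_elevation(grams_salt: float, kg_water: float) -> float:
    """Return the boiling point increase (in C) for NaCl dissolved in water."""
    moles_salt = grams_salt / MOLAR_MASS_NACL
    molality_of_ions = IONS_PER_NACL * moles_salt / kg_water
    return KB_WATER * molality_of_ions

delta_t = boiling_point_elevation(8.0, 1.0)
print(f"Boiling point rises by about {delta_t:.3f} C")  # ~0.142 C
```

Running this recovers the number in the text: the salted water boils at roughly 100.14 C instead of 100 C, which is the basis for doubting the cookbook's explanation.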
Is it even worth trying an experiment to see if the teaspoon of salt makes a measurable difference in the quart of tapwater we're boiling?
Other examples surface daily. Someone makes a comment about hot water freezing into ice cubes faster than cold water. Someone else, familiar with thermodynamics, explains in detail why this cannot be the case. No actual ice cube trays risk harm, since none are ever deployed in resolving the dispute.
I loves me some thermodynamics. But, why not clear some space in the freezer to do a side-by-side comparison of the ice cube tray filled with hot water and that filled with cold water? Doing an experiment certainly doesn't preclude making a confident prediction of the outcome from the theory. And, in the event that the results don't turn out the way you predicted they would, it might help you notice a way that the real system departs from your assumptions about it.
I've been thinking about recourse to theory versus recourse to experiment a lot lately because I have children who ask lots of questions about how various pieces of the world work. My scientific training got me in the habit of making theory my first stop and doing back-of-the-envelope calculations when necessary. My offspring are much less satisfied with answers from theory than they are with actually seeing what happens. After the experiment, they're happy to listen to a theoretical explanation of the outcome (within reason, of course -- they're still young). But before they've seen what the outcome of the experiment is, theory does not move them.
[1] Frederick Grinnell (1999) "Ambiguity, Trust, and the Responsible Conduct of Research," Science and Engineering Ethics, Vol. 5, Iss. 2, 205-214; p. 207.
[2] Henry H. Bauer (1997) "The So-called Scientific Method," in John Hatton and Paul B. Plouffe (eds.), Science and Its Ways of Knowing. Prentice-Hall, 25-37; pp. 26-27.