Science is a Process, not an Object.

Honors Chemistry


Facts, Laws and Theories

How do we know things? Fundamentally, what is the basis of knowledge? That question could fill a completely separate class. However, it is essential that we address it a bit here. Science is not about certainty, but about testability. It is a system of knowledge based on testing one's ideas against observation. The only untestable assumptions we allow are: that there is something we call reality; and that we can interact with that reality in some meaningful way. That is the core of what an experiment is: an experience of reality from which we extract meaning.
One could fairly point out that science is therefore based ultimately on untestable assumptions. However, it is also fair to say that anyone interacting with the world at all—walking, talking to others, any interaction at all—makes these same two untestable assumptions. In science, these two untestable assumptions are the only two we make.
Testable Assumptions:


Every other assumption can be tested by comparing the predictions of your assumptions to the reality you experience. There will be mistakes and dead ends, that's true. However, by continuing to check our predictions against reality, we can assess how well our explanations fit. You may notice that in this system there are no "absolutes," except in rare instances where they arise as part of a definition (as in “absolute zero” in temperature). Otherwise, we merely define and limit our uncertainty. The very idea of certainty is somewhat out of place in science. A few years back, at the dedication of our Science building, Nobel Laureate David Baltimore put it like this: “In science, we move from uncertainty to less uncertainty.”
Facts:

A fact is something observed reproducibly in the physical world. Facts are our measurements. As absolute as the term "fact" may seem to you, the reliability of facts depends on the precision of the measurement. Thus, our appreciation of facts can change. You may say it is a fact that the desktop is solid matter. However, it is, like all objects you see, mostly space. Small particles pass through the desktop as though it isn't there. If your level of detection (resolution) is that of your eye, or your finger, objects do not pass through the desktop. If your level of resolution is considerably smaller (the size of a proton or electron), there is more "space" for particles to pass through than there is matter for them to hit. Anything outside your limits of resolution cannot be detected. That does not mean your measurements are meaningless. It just means that it is extremely important to know what your limit of resolution is. In a very meaningful way, the desktop is solid. Your conclusions from your interaction with the desktop were not wrong. We have just refined our understanding of what "solid" means.
Laws:
So, if there are no absolutes in science, then what about "laws?" The scientific concept of "law" is a hard one for students, sometimes. Laws amount to a codification of our observations. They can be conceptual or mathematical. For example: "an object in motion will stay in motion unless acted upon by a force" is part of Newton's First Law. F = ma (force = mass times acceleration) is also a law. Laws are subject to change as we understand things better. Newton's Law of Universal Gravitation is a darn good rule. But, after over 200 years of supremacy, Newton's Law had to be re-tooled when Einstein spotted a fundamental flaw in it. Einstein based this on another law: nothing can travel faster than the speed of light. Could we learn that this, too, is an approximation? Sure. But, so far, it is holding up well. Moreover, Einstein's theory of gravity has an advantage over Newton's Law because it has a theoretical basis (yes, I am implying that a "theory" has distinct advantages over a "law." I will expand on that below). But the General Theory of Relativity, which is Einstein’s attempt to resolve the problem of gravity, has its own shortcomings. Another example is the pair of laws of "conservation of matter" and "conservation of energy." These laws say that neither matter nor energy can be created or destroyed. Once again, that annoying Albert Einstein found that there was a shortcoming in our understanding. While the laws are still fundamentally considered correct, Einstein and, later, de Broglie and Schrödinger forced us to realize that matter and energy are really aspects of the same stuff. Thus, we now allow that matter can be converted to energy and energy to matter (given by the relationship E = mc²). The two laws have become one, and we now say that the sum of all matter/energy in the universe is constant.
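To get a feel for the scale implied by E = mc², here is a quick back-of-the-envelope calculation (a sketch in Python; the value of c is the standard defined constant, not something from the original post):

```python
# Energy equivalent of one gram of matter, via E = m * c**2.
c = 299_792_458          # speed of light in m/s (exact, by definition)
m = 0.001                # mass in kg (one gram)

E = m * c**2             # energy in joules
print(f"E = {E:.3e} J")  # about 9 x 10^13 J
```

A single gram of mass corresponds to roughly 90 trillion joules, on the order of 25 million kilowatt-hours. This is why the smallness of m hardly matters: c² is an enormous multiplier.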
Hypothesis/model:
I use these two words more or less interchangeably. These are tentative explanations of principles underlying the facts we observe. A model also may be conceptual or mathematical. Any scientific model must have two attributes: it must fit with data already observed (known facts—it may modify our interpretation of those facts); and it must make predictions about future experiments. It is this second attribute that is most important, because it allows us to test whether our tentative explanation has merit. Hypotheses are often wrong. Testable hypotheses that turn out to be wrong are still useful. Untestable explanations are worthless in science.
Theories:
Believe it or not, theories represent the height of scientific understanding. There is a tendency in the non-science world to think of a theory as some sort of weak fact, something we are less sure of than facts. This is not the case in science. Theories are as good as we get. Theories do not get "proved." They get tested, like hypotheses. A theory is sort of a grander hypothesis. A good theory offers a single explanation that links many different, often seemingly unrelated, facts into one cogent model that can be used to make predictions. It often subsumes many hypotheses. As with hypotheses, the power of a theory comes more from its predictions than from its explanations. It is through its predictions that a theory obtains its usefulness. Often, the best predictions of a theory are unexpected: things that no one knew, or even suspected, before the theory. If those predictions turn out to be correct, they represent dramatic support for the theory. The Special Theory of Relativity is a great example: based on an attempt to explain some of the shortcomings of Newton’s Laws, Einstein came up with his famous equation linking mass and energy: E = mc². This predicts that a tiny amount of mass can be converted into an extraordinarily large amount of energy (“c,” the speed of light, is a huge number). Nuclear power plants confirm this prediction continuously. That represents a critical test of the theory. No one could have discovered nuclear power without the theory to predict it.
When a new theory emerges, sometimes it forces us to reevaluate how we interpreted facts. Thus, a new theory can change things dramatically. We may have many facts that seem unrelated. These facts may be used in various hypotheses. Then, rarely, someone comes along and says: "all those things are actually related via this overarching theory." The old laws and theories usually are subsumed by the new one. For example, Newton’s laws now are considered a special case of Einstein’s theories. They work fine under most conditions; Relativity tells you the conditions under which Newton’s laws will fail you.
If there is a word in common use that corresponds to what we mean by theory, it is "explanation." This could be contrasted with a law, which is really just a "description" of what happens. The description (law) may be very useful. You may be able to predict not only that things will fall to Earth but also at what rate they will accelerate. Einstein’s theory of gravity does that, but it also makes a surprising prediction: space itself will be curved around massive objects (like a star), so light must follow a curved path near such an object. This prediction was tested, and that is exactly what happens.
Can a theory be said to be "True" or "False?"
Theories that are tested and are consistently good in predicting outcomes of experiments do not become absolute truths. However, we assume we are on the right track. Also, certain predictions or claims of a theory can be said to be true. If demonstrated reliably, these claims can become facts, subject to the same limitations above. For example, the central requirement of plate tectonic theory, that there are large areas of the Earth's crust that move relative to each other, has been demonstrated. Another example comes from the Theory of Evolution. It is a fact that organisms evolve and that one species can evolve into another related one. That has been observed. But, the basic structure of a theory remains just that: theoretical.
A theory can be disproved. However, when a theory that has proven extremely useful fails to work in some context, we usually don't talk about disproving it, but rather finding its limits. For example, Dalton's Atomic Theory, which we will discuss soon, is right in many ways. However, one of Dalton's claims is not correct: that atoms are indivisible. So, we modify the theory a little and say that dividing an atom makes it a fundamentally different thing. An atom is not the smallest unit of matter (as Dalton would have said) but the smallest unit of an element that still has the properties of that element.
So, What do we know?!
It may seem that science at once claims to have some authoritative knowledge and yet to know nothing for certain. So, what are we supposed to make of it? It is one of the amusing paradoxes of human existence that you cannot learn something until you admit that you don't know it. I would not want to give the impression that there are no petty scientists who guard their positions and thwart new ideas. But, we are raised in science to regard that behavior as wrong. We all are told "fables" by our mentors that teach us to be wary of getting too set in our ways. We are taught to get used to saying "I don't know." The important thing for you to take from this discussion is that gaps in our understanding are not seen as flaws so much as opportunities for further study. “I don’t know” is not an admission of failure; it merely defines the next question.
There will likely always be gaps; we will always have uncertainty. However, in the few hundred years that this method has been in use, there have been no reversals in our understanding (that’s a bit of an opinion on my part; there are philosophers who would argue it). We continually challenge what we know. We are surprised by new directions, but we haven't, in any case that I know of, grossly lost our way. We refine our view. We find limits to our existing ideas, and find new ideas to cover the areas we didn't even know were there. Galileo said that the Sun was the center of the universe. It isn’t, but it was the center of the universe as he could measure it. The (currently known) universe is bigger than he knew. As we refine our measurements, we appreciate new levels of detail. But we did not, for example, discover that he was wrong and that the Earth was the center of the universe (as the Catholic Church claimed, with certainty, to know). We did not overturn our understanding. We came to see a larger picture.