Although all good reasoning is of interest to logic, we will focus on reasoning—and, more specifically, on inference—that is good in a special way. Let us begin with one example of reasoning: a scientist attempting to account for a body of experimental data. The description of examples of inference in 1.1.2 used a rough distinction between two kinds of reasoning the scientist will typically employ. One kind is the extraction of information from the data. For example, the scientist may notice that nowhere in the data does a certain quantity exceed a certain value. Even though this conclusion is more than a simple restatement of the data and could well be an important observation, it is closely related to what is already given by the data. While we might admit that perceptiveness is required to see it in the data, what is seen does not go beyond the information the data provides. The same sort of close relation of conclusion to data can be found in cases where the data is qualitative rather than quantitative in form. For example, someone may notice that two properties are never found together (e.g., no one who has had disease A has also had disease B).
While conclusions like these might attract attention, the process of reasoning used to reach them would probably not be noticed; the extraction of information in such cases is too routine to call attention to itself. Conclusions reached in this way are naturally seen as aspects of the data—albeit aspects that might not be noticed—rather than as results of a process of inference.
The inferences that attract attention are ones that do more than extract information from data. And, at least in science, there is usually some attempt to go beyond the data, either to make a generalization that applies to other cases or to offer an explanation of the case at hand. Either way, we go beyond the data to say something more. A conclusion of one of these sorts will call attention to itself even when it is easy to reach from the data, and we will feel that we have done something in forming it. One reason for this sense of accomplishment is that generalization and explanation bring us closer to the goals of science than does the mere extraction of information, so an inference that generalizes or explains the data is more valuable. But another reason why generalizations and explanations call attention to themselves is that they are risky. And this distinguishes them from the extraction of information.
The information extracted from data may be no more reliable than the data it is extracted from, but it certainly will be no less reliable. On the other hand, even the generalization or explanatory hypothesis that is most strongly supported by a body of completely accurate data can still be wrong. Of course, a scientist will go on to test a generalization or hypothesis, but further testing cannot eliminate the risk. To avoid all risk of error in forming a hypothesis, we would need complete data about the subject matter investigated; and, in the rare cases where a complete set of data can be obtained, there is no further generalization to be made, and the best account of the data will merely state information that can be extracted from it. So it seems that, if a generalization or explanation is something more than the statement of information extracted from an already complete set of data—if it is genuinely hypothetical—it will involve some risk relative to this data. Extraction of information from the data, on the other hand, is completely safe.
Indeed, there is a picture of the matter—a picture that is attached to some of the language we have been using—according to which none of this should be surprising. Information that can be extracted from data is right there in the data, perhaps not for everyone to see but at least implicitly, to be extracted by anyone clever enough. If a conclusion we draw is something other than a statement of such information, it must make claims that are not even implicit in the data, claims that go beyond the data and could prove incorrect, however incontrovertible the data might be. This picture also suggests a more positive way of looking at the extraction of information. Extracting information does not merely prepare us to go further; it maps out the territory that we can reach without making the leap to a generalization or explanatory hypothesis.
It is the kind of reasoning exemplified by the extraction of information from data that will be the focus of our study. This sort of reasoning appears also in mathematical proof and in some of the inferences we draw in the course of interpreting oral or written language. It is found whenever we draw conclusions that do not go beyond the content of the premises on which they are based and thus introduce no new risk. The traditional name for this study is deductive logic. Since the term deductive is associated with the features that distinguish the extraction of information from the formation of hypotheses, reasoning that consists in the extraction of information can be labeled deductive reasoning.
On the other hand, there is no very good term—other than non-deductive—for the sort of reasoning involved in inferences where we generalize or offer explanations. The term inductive inference has been used for some kinds of non-deductive reasoning. Traditionally, however, inductive inference was understood as merely the making of generalizations, and the conclusions of many non-deductive inferences are not naturally stated as generalizations. For example, the sort of inference a detective draws will often concern particular people or events (and the interesting examples will be non-deductive in the sense in which we will use the term deduction). Because of examples like this, inductive inference is now most often described as reasoning based on considerations of probability. While this covers inductive generalization and much more besides, it is a matter of controversy whether it covers the full range of reasoning to explanatory hypotheses.