Metamath Proof Explorer: Deduction Form and Natural Deduction
A Theorem fine is Deduction,
For it allows work-reduction:
To show "A implies B",
Assume A and prove B;
Quite often a simpler production.
--Stefan Bilaniuk, A Problem Course in Mathematical Logic, 2003
When being rigorous, it is surprisingly complicated to convert some deduction in the form
"given |- ph, we can infer |- ps"
into some theorem in the form
|- ( ph -> ps ).
The deduction says, "if we can prove ph, then we can prove ps," which is in some
sense weaker than saying "ph implies ps."
In particular, there is no axiom of logic that
permits us to directly obtain the theorem given the deduction.
The conversion of a deduction to a theorem does not even hold in general
for quantum propositional calculus,
which is a weak subset of classical propositional calculus.
It has been shown that adding the Standard Deduction Theorem
to quantum propositional calculus turns it into classical
propositional calculus!
This is in contrast to going the other way; if we have the theorem, it is easy to recover the deduction using modus ponens ax-mp. An example of this is the derivation of the inference simpli from the theorem simpl.
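For instance, the inference simpli can be obtained from the theorem simpl by a single application of ax-mp. Shown schematically below, in the same informal proof style used later on this page (the step labels are illustrative):

  $e |- ( ph /\ ps )
  simpli $p |- ph $=
    1::(hypothesis)  |- ( ph /\ ps )
    2::simpl         |- ( ( ph /\ ps ) -> ph )
    qed:1,2:ax-mp    |- ph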
In traditional logic books, this is often resolved using a metatheorem called the Deduction Theorem (discovered independently by Herbrand and Tarski around 1930). The deduction theorem, also known as the standard deduction theorem, provides an algorithm for constructing a proof of a theorem from the proof of its corresponding deduction. See, for example, Margaris, First Order Mathematical Logic, p. 56. To construct a proof for a theorem, the deduction theorem's algorithm looks at each step in the proof of the original deduction and replaces the step with several steps in which the hypothesis is eliminated and becomes an antecedent. In principle, ordinary mathematical proofs invoke this algorithm whenever they convert a deduction into a theorem.
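As a standard textbook illustration of the algorithm (this describes the general construction, not any particular set.mm proof): if a step ch in the original deduction was obtained by modus ponens from earlier steps ps and ( ps -> ch ), then the converted proof derives ( ph -> ch ) from ( ph -> ps ) and ( ph -> ( ps -> ch ) ) using the axiom ( ( ph -> ( ps -> ch ) ) -> ( ( ph -> ps ) -> ( ph -> ch ) ) ) (ax-2) and two applications of modus ponens; a step that was an axiom or a previously proven theorem is prefixed with the antecedent using ( ch -> ( ph -> ch ) ) (an instance of ax-1), and a step that was the hypothesis ph itself becomes the theorem ( ph -> ph ).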
However, in practice, no one actually carries out this conversion algorithm in ordinary mathematics. Instead, mathematicians often stop once they believe the conversion algorithm could be done. This sharply contrasts with Metamath's philosophical goal, which is to show all proofs directly from the axiom system in its database, and not skip steps or rely on higher-level metatheorems derived outside of that axiom system.
Metamath is capable of converting a hypothesis into an antecedent. We have found that in many cases only a weakened form of the deduction theorem is necessary, called the weak deduction theorem (dedth and its variants). When some version of the deduction theorem is needed in the Metamath Proof Explorer (MPE), the weak deduction theorem is often easier to apply. Earlier versions of MPE applied the deduction theorem, or a weakened form of it, in a number of situations. For more information about using the deduction theorem and the weak deduction theorem, including more details of why this can be complicated, see the deduction theorem page.
Today, the deduction theorem (including the weak variant of it) is rarely used in MPE when creating new proofs. That is because we have found much easier ways to achieve the same result. The rest of this page explains these easier ways.
In particular, there is a preference for
writing assertions in "deduction form"
instead of writing a proof that would require use of the deduction theorem.
Applying this approach is called "deduction style".
An assertion is in deduction form if the conclusion is an implication with
a wff variable as the antecedent (usually ph),
and every MPE hypothesis ($e statement)
is either a definition or is also an implication
with the same wff variable as the antecedent (usually ph).
A definition can be for a class variable
(this is a class variable followed by "=") or a wff variable
(this is a wff variable followed by "<->"); in practice
definitions (if used) are for class variables.
In practice, a proof in deduction form will also contain many steps
that are implications where the antecedent is either
that wff variable (normally ph)
or a conjunction including that wff variable
(e.g., ( ph /\ ps )).
For example,
lhop (L'Hopital's rule) is in deduction form,
because
its conclusion and most of its MPE hypotheses are implications with
ph as the antecedent.
It has one MPE hypothesis that is not an implication,
but that hypothesis is a definition of a class variable,
which is also allowed in deduction form.
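For a schematic picture of the shape itself, here is a simple deduction-form assertion, modeled on addcomd in set.mm: the hypotheses and the conclusion are all implications with ph as the antecedent.

  $e |- ( ph -> A e. CC )
  $e |- ( ph -> B e. CC )
  addcomd $p |- ( ph -> ( A + B ) = ( B + A ) )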
Later, in the section on
Definitions in deduction form,
we'll explain why definitions are allowed in deduction form.
In deduction form the antecedent (e.g., ph)
mimics the context handled in the deduction theorem.
The deduction form antecedent can also be used to represent
the context necessary to support
natural deduction systems.
For example, if the antecedent is a conjunction, the other terms can
represent the temporary assumptions that are sometimes used in
natural deduction.
Before we discuss how to do natural deduction in MPE, we will first
explain what natural deduction is.
Natural deduction (ND) systems, as such, were originally introduced in 1934 by two logicians working independently: Jaśkowski and Gentzen. ND systems are supposed to reconstruct, in a formally proper way, traditional ways of mathematical reasoning (such as conditional proof, indirect proof, and proof by cases). As reconstructions they were naturally influenced by previous work, and many specific ND systems and notations have been developed since their original work.
[Indrzejczak] pages 31-32 suggests that any natural deduction system must satisfy at least three criteria.
There are many variations in ND systems; [Indrzejczak] page 32 believes that two kinds of variations are especially important.
The most common and recommended approach for applying ND
in MPE is to use deduction form and apply the MPE proven assertions
that are equivalent to ND rules.
For example, MPE's jca
is equivalent to ND rule ∧I (and-insertion).
See MPE equivalents of typical
natural deduction (ND) rules for a list of equivalences.
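For instance, jca's hypotheses and conclusion all carry the same antecedent ph, which plays the role of the ND context (shown schematically):

  $e |- ( ph -> ps )
  $e |- ( ph -> ch )
  jca $p |- ( ph -> ( ps /\ ch ) )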
This approach to applying ND within MPE
relies on Metamath's wff metavariables in an essential way, and
is described in more detail in the presentation
"Natural Deductions in the Metamath Proof Language" by Mario Carneiro, 2014.
In this style, many steps are implications whose antecedent mimics
the context (usually written Γ) of most ND systems.
To add an assumption, simply add it to the implication antecedent
(typically using simpr), and use that
new antecedent for all later claims in the same scope.
If you wish to use an assertion in an ND hypothesis scope
that is outside the current ND hypothesis scope,
modify the assertion so that the
ND hypothesis assumption is added to its antecedent
(typically using adantr).
Most proof steps will be proved using rules that have hypotheses
and results of the form |- ( ph -> ... ).
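As a minimal sketch of the pattern (ch and the step numbering are purely illustrative), a temporary assumption ps is entered with simpr, used while working in the extended context ( ph /\ ps ), and finally discharged with ex:

  1::simpr    |- ( ( ph /\ ps ) -> ps )
  ...
  2::(...)    |- ( ( ph /\ ps ) -> ch )
  qed:2:ex    |- ( ph -> ( ps -> ch ) )

A result proved outside this scope, say ( ph -> th ), can be carried into it with adantr, which yields ( ( ph /\ ps ) -> th ).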
This is most easily understood by looking at some examples. For annotated examples where some traditional ND rules are directly applied in MPE, see ex-natded5.2, ex-natded5.3, ex-natded5.5, ex-natded5.7, ex-natded5.8, ex-natded5.13, ex-natded9.20, and ex-natded9.26.
Only using specific natural deduction rules directly can lead to very long proofs, for exactly the same reason that only using axioms directly in Hilbert-style proofs can lead to long proofs. If the goal is short and clear proofs, then it is better to reuse already-proven assertions in deduction form than to start from scratch each time. For example, observe that proof ex-natded5.8 - which is a line-for-line translation that directly uses only common ND rules - is much longer than ex-natded5.8-2 (which directly reuses other proven statements) and ex-natded5.2i (which doesn't worry about context). Similarly, ex-natded5.3 is longer than ex-natded5.3-2 and ex-natded5.3i. A few examples of already-proven assertions that are useful for proofs in deduction form are jca, syl, and adantr.
In some cases the metamath.exe "minimize_with" proof command can significantly shorten a proof if the original only uses the equivalent of basic ND inference rules. In practice, the simplest and clearest approach to creating a proof depends on what is to be proved.
Alan Sare has worked on an alternative within MPE, called virtual
deduction. Much more information is available at wvd1, and an example is at
iunconlem2vd
[retrieved 11-Apr-2018],
which was used to generate iunconlem2.
This approach essentially replaces
the outermost implication arrow with a different symbol,
"->.". Alan Sare believes this change makes it easier to
visualize proofs, while others disagree. The completeusersproof
tool is available to support this alternative approach.
Frédéric Liné has developed a different Metamath-based system to support natural deduction, called natural deduction [retrieved 21-Dec-2016]. He reports that his system was inspired by a slide show, "From Natural Deduction to Sequent Calculus (and back)". Practically speaking, this system is equivalent to Mario Carneiro's work.
The actual purpose of the "ph ->" antecedent
is to introduce a hypothesis that may not
always be satisfiable without some assumptions (and the exact nature of
those assumptions is not known inside the theorem, so the generic
"ph" is used). If we are dealing with a true abbreviation, just a short way to
write some symbols, then a definition
"D = ..." hypothesis will either be satisfied
by a similar hypothesis, or by theorem eqid,
which will force the substitution of "..." for "D" in the rest of the theorem.
Since in either case the hypothesis can be discharged with no assumptions, the
"ph ->" is not necessary.
Sometimes, we wish to discharge these hypotheses with something that isn't just an abbreviation. For example, if "J e. ( TopOn ` X )", then X is the base set of the topology J, so X = U. J (toponuni), but since X = U. J is not always true in this situation, we can't prove it without the hypothesis "J e. ( TopOn ` X )". What you have to do here, if you want to use a theorem that has the hypothesis "|- X = U. J" (to prove some statement |- P(X)), is use eqid anyway, producing a proof of P(U. J), then use ( ph -> X = U. J ) and equality theorems to get ( ph -> ( P(X) <-> P(U. J) ) ), then use the proof of |- P(U. J) to yield |- ( ph -> P(X) ). An example of this procedure is the derivation of ishaus2 (which has X = U. J given the context) from ishaus (which assumes X = U. J with no context).
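Schematically, and in the same informal style as the sketches below (P(X) stands for the statement being proved; the bracketed justifications are placeholders, while eqid, toponuni, and mpbiri are actual set.mm labels), the procedure looks like this; step 3 would come, e.g., from the context hypothesis ( ph -> J e. ( TopOn ` X ) ) via toponuni:

  1::eqid                       |- U. J = U. J
  2:1:<thm with X = U. J hyp>   |- P( U. J )
  3::<context, via toponuni>    |- ( ph -> X = U. J )
  4:3:<equality theorems>       |- ( ph -> ( P( X ) <-> P( U. J ) ) )
  qed:2,4:mpbiri                |- ( ph -> P( X ) )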
The above procedure is always possible to do, but it is not always convenient since many steps may be needed for the equality proof. In cases where we specifically intend to discharge the assumption with something other than eqid, we sometimes add ph -> to the equalities as well. A common example of this is in structure builder theorems, where we want to use a self-referential expression for the left hand side. For example, we might define a group by
  $e |- ( ph -> B = my-base-set )
  $e |- ( ph -> .+ = my-group-op(B) )
  mygroupval $p |- ( ph -> MyGroup = { <. ( Base ` ndx ) , B >. , <. ( +g ` ndx ) , .+ >. } )
Suppose that my-base-set is a complicated expression, and my-group-op(B) has a lot of references to B, so the fully expanded my-group-op(my-base-set) is really big, and we don't want to use it too many times. In the same context, we then prove:
  $e |- ( ph -> B = my-base-set )
  $e |- ( ph -> .+ = my-group-op(B) )
  mygroupbaslem $p |- ( ph -> ( Base ` MyGroup ) = my-base-set )
  mygroupaddlem $p |- ( ph -> ( +g ` MyGroup ) = my-group-op(B) )
and now we can bootstrap the whole thing:
  mygroupbas $p |- ( ph -> ( Base ` MyGroup ) = my-base-set ) $=
    1::eqid               |- ( ph -> my-base-set = my-base-set )
    2::eqid               |- ( ph -> my-group-op(my-base-set) = my-group-op(my-base-set) )
    qed:1,2:mygroupbaslem |- ( ph -> ( Base ` MyGroup ) = my-base-set )

  mygroupadd $p |- ( ph -> ( +g ` MyGroup ) = my-group-op( ( Base ` MyGroup ) ) ) $=
    1::mygroupbas         |- ( ph -> ( Base ` MyGroup ) = my-base-set )
    2::eqid               |- ( ph -> my-group-op( ( Base ` MyGroup ) ) = my-group-op( ( Base ` MyGroup ) ) )
    qed:1,2:mygroupaddlem |- ( ph -> ( +g ` MyGroup ) = my-group-op( ( Base ` MyGroup ) ) )
Note that the contained references of my-group-op are now referencing ( Base ` MyGroup ), which makes this "definition" of a component of MyGroup self-referential. But that was the goal, and we have mygroupbas to expand it as necessary. A more elaborate version of this idea is done in prdsval, with prdsbas, prdsplusg, prdsmulr, etc. pulling off components and bootstrapping using the previous theorems.
Finally, there are a few places where
X = ...
definitions are
used by convention, specifically grppropd and other *propd theorems, and
isgrpd and other is*d theorems for structures.
As a rule of thumb: Use X = ... abbreviations unless you need to use a non-identity abbreviation for X later.
This page was last updated on 11-Apr-2018.
Copyright terms: Public domain