mmnotes.txt - Notes
-------------------

These are informal notes on some of the recent proofs and other topics.

(7-Dec-2020) Partial unbundling of ax-7, ax-8, ax-9 (notes by Benoit Jubin)
---------------------------------------------------------------------------

This note discusses the recent partial unbundling of the axiom of equality
ax-7 and the predicate axioms ax-8 and ax-9 in set.mm.

The axiom of equality asserts that equality is a right-Euclidean binary
relation on variables:
  ax-7  |- ( x = y -> ( x = z -> y = z ) )
It can be weakened by adding a DV (disjoint variable) condition on x and y:
  ax7v  |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y)
and this scheme can itself be weakened by adding extra DV conditions:
  ax7v1 |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y) , DV(x,z)
  ax7v2 |- ( x = y -> ( x = z -> y = z ) ) , DV(x,y) , DV(y,z)
We prove, in ax7, that either ax7v or the conjunction of ax7v1 and ax7v2
(together with earlier axioms) suffices to recover ax-7.  The proofs are
represented in the following simplified diagram (equid is reflexivity and
equcomiv is unbundled symmetry):

                   --> ax7v1 --> equid --
                  /                      \
  ax-7 --> ax7v --                        --> equcomiv --> ax7
                  \                      /
                   --> ax7v2 ------------

The predicate axioms ax-8 and ax-9 can be similarly weakened, and the
proofs are actually simpler, now that the equality predicate has been
proved to be an equivalence relation on variables.

This is a general result.  If an n-ary predicate P is added to the
language, then one has to add the following n predicate axioms for P:
  ax-P1 |- ( x = y -> ( P(x, z_2, ..., z_n) -> P(y, z_2, ..., z_n) ) )
  ...
  ax-Pn |- ( x = y -> ( P(z_1, ..., z_{n-1}, x) -> P(z_1, ..., z_{n-1}, y) ) )
Any of these axioms can be weakened by adding the DV condition DV(x,y),
and it is also sufficient to replace it by the conjunction of the two
schemes:
  ax-Piv1 |- ( x = y -> ( P(z_1, ..., x, ..., z_n)
                       -> P(z_1, ..., y, ..., z_n) ) ) , x fresh
  ax-Piv2 |- ( x = y -> ( P(z_1, ..., x, ..., z_n)
                       -> P(z_1, ..., y, ..., z_n) ) ) , y fresh
where "fresh" means "disjoint from all other variables".  The proof is
similar to ax8 and ax9 and simply consists in introducing a fresh
variable, say t, and from
  |- ( x = t -> ( P(z_1, ..., x, ..., z_n) -> P(z_1, ..., t, ..., z_n) ) )
  |- ( t = y -> ( P(z_1, ..., t, ..., z_n) -> P(z_1, ..., y, ..., z_n) ) )
  |- ( x = y -> E. t ( x = t /\ t = y ) )
one can prove axPi (a concrete instance is written out at the end of this
note).

Note that ax-7 can also be seen as the first predicate axiom for the
binary predicate of equality.  This is why it does not appear in Tarski's
FOL system, being a special case of his scheme
  ( x = y -> ( ph -> ps ) )
where ph is an atomic formula and ps is obtained from ph by replacing an
occurrence of x with y.  The above paragraphs prove that this scheme can
be weakened by adding the DV condition DV(x,y).
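To make the general recipe concrete, here is the membership-predicate
instance (the case n = 2, i = 1, which is the case of ax-8), written out
informally rather than as it appears in set.mm.  With t fresh, the two
weakened instances and the auxiliary theorem are
  |- ( x = t -> ( x e. z -> t e. z ) )          (instance of ax-Piv2)
  |- ( t = y -> ( t e. z -> y e. z ) )          (instance of ax-Piv1)
  |- ( x = y -> E. t ( x = t /\ t = y ) )
and chaining the first two implications under the existential quantifier
of the third yields
  |- ( x = y -> ( x e. z -> y e. z ) )
with no DV condition on x and y, which is ax-8.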
===============================================================================

(21-Dec-2017) Processing of $[ ... $] file inclusions
-----------------------------------------------------

See also the following Google Group posts:
  Description and example:
    https://groups.google.com/d/msg/metamath/4B85VKSg4j4/8UrpcqR4AwAJ
  "Newbie questions":
    https://groups.google.com/d/msg/metamath/7uJBdCd9tbc/dwP2jQ3GAgAJ
  "Condensed version of set.mm?":
    https://groups.google.com/d/msg/metamath/3aSZ5D9FxZk/MfrFfGiaAAAJ
  Original proposal:
    https://groups.google.com/d/msg/metamath/eI0PE0nPOm0/8O9s1sGlAQAJ

(Updated 31-Dec-2017:
  1. 'write source.../no_delete' is now 'write source.../keep_includes'.
  2. Added 'set home_directory' command; see 'help set home_directory'.)
(Updated 1-Jan-2018: Changed 'set home_directory' to 'set root_directory'.)
(Updated 1-Nov-2019: Added Google Group links above.)

1. Enhanced "write source" command
----------------------------------

The "write source" command in metamath.exe will be enhanced with a
"/split" qualifier, which will write included files separately.  The name
of the main (starting) file will be the "write source" argument (as it is
now), and the names of included files will be taken from the original
file inclusions.

2. New markup-type directives related to file inclusions
--------------------------------------------------------

Recall the file inclusion command, "$[ file.mm $]", given in the Metamath
spec.  The spec will be clarified so that, for basic .mm file
verification, this command should be ignored when it occurs inside of a
comment (and it should exist only at the outermost scope, as well).

The metamath.exe program will perform additional actions based on special
markup comments starting "Begin $[", "End $[", and "Skip $[".  These are
not part of the Metamath spec and can be ignored by basic verifiers.

The metamath.exe program allows the .mm file to be written as a whole or
to be split up into modules (with "write source ... /split"), and this
markup controls how the modules will be created.  In particular, the
markup allows us to go back and forth seamlessly between split .mm files
and a single unsplit .mm file.

These markup comments are normally created automatically whenever a .mm
file containing includes is written by "write source" without the
"/split" qualifier.  They can also be inserted by hand to delineate how
the .mm file should be split into modules.  They are converted back to
file inclusions when "write source" is used with the "/split" qualifier.

  "$( Begin $[ file.mm $] $)" - indicates where an included file starts
  "$( End $[ file.mm $] $)" - indicates where an included file ends
  "$( Skip $[ file.mm $] $)" - indicates there was a file inclusion at
      this location in the split files, that wasn't used because file.mm
      was already included earlier.

To summarize:  Split files will have only "$[ file.mm $]" inclusions,
like before.  An unsplit file will have only these three special
comments.

(Per the Metamath spec, recall that when a file is included more than
once, only the first inclusion will happen with subsequent ones ignored.
This feature allows us to create subsections of a .mm file that are
themselves stand-alone .mm files.  We need the "Skip" directive to mark
the location of ignored inclusions.)

The "read" command will accept either a single file or split files or any
combination (e.g. when the main file includes a file that originally also
contained includes but was separately written without "/split").  Files
can contain any combination of inclusion directives $[ $] and the 3
special comments, except that each "$( Begin $[..." must have a matching
"$( End $[...".  A round-trip example is sketched below.
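For example (the file names here are made up for illustration), a split
main.mm might contain the two inclusions

  $[ part1.mm $]
  $[ part2.mm $]

where part2.mm itself begins with "$[ part1.mm $]".  After "read main.mm",
writing without "/split" should produce an unsplit file along the lines of

  $( Begin $[ part1.mm $] $)
  <contents of part1.mm>
  $( End $[ part1.mm $] $)
  $( Begin $[ part2.mm $] $)
  $( Skip $[ part1.mm $] $)
  <rest of part2.mm>
  $( End $[ part2.mm $] $)

since the second inclusion of part1.mm is ignored.  Writing with "/split"
converts these comments back to the original inclusions and recreates
part1.mm and part2.mm.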
3. Behavior of "read" command
-----------------------------

The "read" command builds an internal buffer corresponding to an unsplit
file.  If "write source" does not have the "/split" qualifier, this
buffer will become the new source file.  When "read" encounters an
inclusion command or one of the 3 special comments, the following actions
are taken:

Case 3.1:
---------
  "$[ file.mm $]"
If file.mm has not already been included, this directive will be replaced
with
  "$( Begin $[ file.mm $] $) <contents of file.mm> $( End $[ file.mm $] $)"
If file.mm doesn't exist, an error will be reported.
If file.mm has already been included, this directive will be replaced
with "$( Skip $[ file.mm $] $)".

Case 3.2:
---------
  "$( Begin $[ file.mm $] $) <contents> $( End $[ file.mm $] $)"
If file.mm has not already been included, this directive will be left
alone, i.e., will remain
  "$( Begin $[ file.mm $] $) <contents> $( End $[ file.mm $] $)"
If file.mm has already been included, this directive, including the
contents in between, will be replaced with "$( Skip $[ file.mm $] $)".
Before discarding the contents, they will be compared to the content of
file.mm previously included, and if there is a mismatch, a warning will
be reported.

Case 3.3:
---------
  "$( Skip $[ file.mm $] $)"
If file.mm has not already been included, this directive will be replaced
with
  "$( Begin $[ file.mm $] $) <contents of file.mm> $( End $[ file.mm $] $)".
If file.mm doesn't exist, an error will be reported.
If file.mm has already been included, this directive will be left alone,
i.e., will remain "$( Skip $[ file.mm $] $)".

Error handling:
---------------
Any comments that don't exactly match the 3 patterns will be silently
ignored, i.e., will act as regular comments.  For example,
"$( Skip $[ file.mm $)" and "$( skip $[ file.mm $] $)" will act as
ordinary comments.  In general, it is difficult to draw a line between
what is a comment and what is a markup with a typo, so we take the most
conservative approach of not tolerating any deviation from the patterns.
This shouldn't be a major problem because most of the time the markup
will be generated automatically.

Error messages will be produced for "$( Begin $[..." without a matching
"$( End $[..." (i.e. with the same file name) and for included files that
are missing.

4. Behavior of "write source" command
-------------------------------------

When "write source" is given without the "/split" qualifier, the internal
buffer (as described above) is written out unchanged.  When accompanied
by the "/split" qualifier, the following actions are taken.

Case 4.1:
---------
  "$[ file.mm $]"
(This directive should never exist in the internal buffer unless there is
a bug.)

Case 4.2:
---------
  "$( Begin $[ file.mm $] $) <contents> $( End $[ file.mm $] $)"
file.mm will be created containing <contents>.  The directive will be
changed to "$[ file.mm $]" in the parent file.

Case 4.3:
---------
  "$( Skip $[ file.mm $] $)"
This directive will be changed to "$[ file.mm $]" in the parent file.

5. File creation and deletion
-----------------------------

When "write source" is used with "/split", the main file and all included
files (if they exist) will be overwritten.  As with the existing "write
source", the old versions will be renamed with a "~1" suffix (and any
existing "~1" renamed to "~2" and so on through "~9", whose existing
version will be deleted).

When "write source" is used without "/split", the main file will be
overwritten, and any existing included files will be deleted.  More
precisely, by "deleted" we mean that an existing included file will be
renamed to "~1", any existing "~1" renamed to "~2", etc. until "~9",
which will be actually deleted.  The purpose of doing this is to prevent
accidental edits of included files after the main file is written without
"/split" and thus causing confusing diverging versions.

A new qualifier, "/no_versioning", will be added to "write source" to
turn off the "~n" versioning if it isn't wanted.  (Personally, versioning
has helped me recover from mistakes, and it's easy enough to "rm *~*" at
the end of a work session.)

Another new qualifier, "/keep_includes", will be added to "write source"
to turn off the file deletion when "/split" is not specified.
This can be useful in odd situations.  For example, suppose main.mm
includes abc.mm and (stand-alone) def.mm, and def.mm also includes
abc.mm.  When writing out def.mm without "/split", by default abc.mm will
be deleted, causing main.mm to fail.  (Another way to recover is to
rewrite def.mm with "/split".  Or recover from abc.mm~1.)

6. Comments inside of includes
------------------------------

A comment inside of a file inclusion, such as
"$[ file.mm $( pred calc $) $]", will be silently deleted when it is
converted to the nonsplit version e.g. "$( Skip $[ file.mm $] $)".
Instead, put the comment before or after the inclusion, such as
"$[ file.mm $] $( pred calc $)".

7. Directories
--------------

Officially, directories aren't supported.  In practice, an included file
in a subdirectory can be specified by its path relative to the current
working directory (the directory from which metamath.exe is launched).
However, it is strongly recommended to use "-" in the file name rather
than directory levels, e.g. set-mbox-nm.mm, and this will be a
requirement for set.mm at this time.

A new command was added to change the working directory assumed by the
program.  See 'help set root_directory'.

Therefore, if included files are present, you shouldn't read set.mm from
another directory with a command such as "read test/set.mm", because
included files will _not_ be assumed to be in test/.  Instead, you should
either launch metamath.exe from the test/ directory, or you should 'set
root_directory test' so that you can type "read set.mm".  (Usually the
error messages will let you know right away when your included files
aren't found where expected.)

The reason we don't just extract and use the "test/" prefix of set.mm
automatically is that if we decide to support directories relative to the
root directory, it will be legitimate to "read mbox/mbox-nm.mm", where
mbox/ is a project subdirectory under the root directory.

(End of "(21-Dec-2017) Processing of $[ ... $] file inclusions")

(14-May-2017) Dates in set.mm
-----------------------------

Dates below proofs, such as "$( [5-Nov-2016] $)", are now ignored by
metamath.exe (version 0.143, 14-May-2017).  Only the dates in
"(Contributed by...)", "(Revised by...)", and "(Proof shortened by...)"
are used for the Most Recent Proofs page and elsewhere.

If a "(Contributed by...)" markup tag is not present in a theorem's
comment _and_ the proof is complete, then "save new_proof" or "save
proof" will add "(Contributed by ?who?, dd-mmm-yyyy.)" to the theorem's
comment, where dd-mmm-yyyy is today's date.  You can either change the
"?who?" to your name in an editor, or you can use the new command "set
contributor" to specify it before "save new_proof" or "save proof".  See
"help set contributor".

If you are manually pasting proofs into set.mm, say from mmj2, then at
the end of the day you can run "save proof */compressed/fast" to add
missing contributor dates, followed by a global replacement of "?who?"
with your name.
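For example, an end-of-day session might look like this (the exact
argument format for "set contributor" is described by "help set
contributor"; the quoted name below is just a placeholder):

  ./metamath set.mm
  MM> set contributor "Your Name"
  MM> save proof */compressed/fast
  MM> write source set.mm
  MM> exit

With "set contributor" given first, the added tags use your name
directly; otherwise they will contain "?who?", to be replaced afterward
with a global substitution in a text editor.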
"verify markup" in Version 0.143 includes some additional error checking,
which will cause warnings on older versions of set.mm.  However, it no
longer checks the dates below proofs.

The dates below proofs will be deleted soon in set.mm.  If someone is
using them outside of the metamath program, let me know so I can postpone
the deletion.  The old code to check them can be re-enabled by
uncommenting "#define DATE_BELOW_PROOF" in mmdata.h.

For converting old .mm files to the new "(Contributed by...)" tag, the
program has the following behavior: the date used is the (earlier) date
below the proof if it exists, otherwise it is today's date.  Thus an old
.mm file can be converted with "read xxx.mm", "save proof
*/compressed/fast", and "write source xxx.mm".  Note that if there are
two dates below the proof, the second one is used, and the first one is
intended for a "(Revised by...)" or "(Proof shortened by...)" tag that
must be inserted by hand.  Searching for "] $) $( [" will identify cases
with two dates that must be handled with manual editing.

Tip: if you want to revert to the old way of checking (and inserting)
dates below proofs, uncomment the "#define DATE_BELOW_PROOF" in mmdata.h
before compiling.

(11-May-2016) New markup for isolating statements and protecting proofs
-----------------------------------------------------------------------

(Updated 10-Jul-2016: changed "show restricted" to "show discouraged";
added "set discouragement off"; see below.)

Two optional markup tags have been defined for use inside of statement
description comments:

  "(Proof modification is discouraged.)"
  "(New usage is discouraged.)"

The metamath program has been updated to discourage accidental proof
modification or accidental usage of statements with these tags.

These tags have been added to set.mm in the complex number construction,
axiomatic studies, and obsolete sections, as well as to specific theorems
that normally should be avoided or should not have their proof changed
for various reasons.  I also added them to some mathboxes (AS and JBN)
having theorems or notation that are unconventional.  Most users will
never encounter the effect of "discouraged" tags since they are in areas
that are normally not touched or used.

"write source.../rewrap" will prevent the new tags from being broken
across lines.  This is intended to make editing tasks easier.  For
example, if you are doing a major revision to a "discouraged" section
such as the complex number construction, you can change the tags
temporarily (like changing "is discouraged" to "xx discouraged"
throughout the section) then change them back when done.

The following commands recognize "(Proof modification is discouraged.)":
  "prove", "save new_proof"
The following commands recognize "(New usage is discouraged.)":
  "assign", "replace", "improve", "minimize_with"

In the description below, the term "restricted" means a statement's
comment has one or both of the new tags.

Originally, I was going to ask an override question when encountering a
restricted statement, but I decided against that because prompts become
unpredictable, making the user's "flow" awkward and scripts more
difficult to write.  Instead, the user can specify "/override" in the
command to accomplish this.

A warning or error message is issued when there is a potential use or
modification of a restricted statement.  An error message means the
requested action wasn't performed (because the user didn't specify
"/override"), and a warning message means the action was performed but
the user should be aware that the action is "discouraged".  To make the
messages more visible, they have a blank line before and after them, and
they always begin with ">>>" for emphasis.

The behavior of individual commands is as follows.

"prove" - Without "/override", will give an error message and prevent
entering the Proof Assistant when a proof is restricted i.e. when the
statement's comment contains "(Proof modification is discouraged.)".
With "/override", a warning is issued, but the user may enter the Proof Assistant. "save new_proof" - Without "/override", will give an error message and prevent saving. With "/override", a warning is issued, but the save is allowed. "assign", "replace" - Without /override, will give an error message and not allow an assignment with a restricted statement i.e. when the assigned statement's comment contains "(New usage is discouraged.). With /override, will give a warning message but will do the assignment. "improve", "minimize_with" - Without /override, will silently skip restricted statements during their scans. With /override, will consider all statements and will give a warning if any restricted statements are used. Here is an example of a session with idALT, which has the comment tag "(Proof modification is discouraged.)". MM> prove idALT >>> ?Error: Modification of this statement's proof is discouraged. >>> You must use PROVE ... / OVERRIDE to work on it. MM> prove idALT/override Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit. You will be working on statement (from "SHOW STATEMENT idALT"): 101 idALT $p |- ( ph -> ph ) $= ... $. Note: The proof you are starting with is already complete. >>> ?Warning: Modification of this statement's proof is discouraged. MM-PA> minimize_with id Bytes refer to compressed proof size, steps to uncompressed length. Proof of "idALT" decreased from 51 to 9 bytes using "id". MM-PA> save new_proof/compressed >>> ?Error: Attempt to overwrite a proof whose modification is discouraged. >>> Use SAVE NEW_PROOF ... / OVERRIDE if you really want to do this. MM-PA> save new_proof/compressed/override >>> ?Warning: You are overwriting a proof whose modification is discouraged. The new proof of "idALT" has been saved internally. Remember to use WRITE SOURCE to save changes permanently. MM-PA> Here is an example of a session trying to use re1tbw2 (ax-1 twin) which has "(New usage is discouraged.)". MM> prove r19.12 Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit. You will be working on statement (from "SHOW STATEMENT r19.12"): $d x y $. $d y A $. $d x B $. 7283 r19.12 $p |- ( E. x e. A A. y e. B ph -> A. y e. B E. x e. A ph ) $= ... $. Note: The proof you are starting with is already complete. MM-PA> show new_proof /from 70/to 70 70 ralrimi.2=ax-1 $a |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e. A A. y e. B ph ) ) MM-PA> delete step 70 A 12-step subproof at step 70 was deleted. Steps 70:112 are now 59:101. 59 ralrimi.2=? $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e. A A. y e. B ph ) ) MM-PA> assign last re1tbw2 >>> ?Error: Attempt to assign a statement whose usage is discouraged. >>> Use ASSIGN ... / OVERRIDE if you really want to do this. MM-PA> assign last re1tbw2/override >>> ?Warning: You are assigning a statement whose usage is discouraged. MM-PA> undo Undid: ASSIGN LAST re1tbw2 / OVERRIDE 59 ralrimi.2=? $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e. A A. y e. B ph ) ) MM-PA> improve all A proof of length 12 was found for step 59. Steps 59 and above have been renumbered. CONGRATULATIONS! The proof is complete. Use SAVE NEW_PROOF to save it. Note: The Proof Assistant does not detect $d violations. After saving the proof, you should verify it with VERIFY PROOF. Note that "improve all", which scans backwards, skipped over re1tbw2 and picked up ax-1: MM-PA> show new_proof /from 70/to 70 70 ralrimi.2=ax-1 $a |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e. A A. y e. 
MM-PA> show new_proof /from 70/to 70
 70 ralrimi.2=ax-1 $a |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e.
    A A. y e. B ph ) )
MM-PA> undo
Undid:  IMPROVE ALL
 59 ralrimi.2=? $? |- ( E. x e. A A. y e. B ph -> ( y e. B -> E. x e. A
    A. y e. B ph ) )

With "/override", it does not skip re1tbw2 but assigns it since it is the
first match encountered (before ax-1 in the backward scan):

MM-PA> improve all/override

>>> ?Warning: Overriding discouraged usage of statement "re1tbw2".

A proof of length 12 was found for step 59.
Steps 59 and above have been renumbered.
CONGRATULATIONS!  The proof is complete.  Use SAVE NEW_PROOF to save it.
Note:  The Proof Assistant does not detect $d violations.  After saving
the proof, you should verify it with VERIFY PROOF.
MM-PA>

If you want to see which statements in a specific section have
restrictions, use "search.../comment" e.g.

MM> search ax-1~bitr 'is discouraged'/comment
101 idALT $p "...uted by NM, 5-Aug-1993.) (Proof modification is discouraged.)"
659 dfbi1gb $p "...ry Bush, 10-Mar-2004.) (Proof modification is discouraged.)"

Program note:  The new markup tags are looked up via the function
getMarkupFlag() in mmdata.c.  Since string searches are slow, the result
of the first search in each statement comment is memoized (saved) so that
subsequent searches can be effectively instant.

Two commands were added primarily for database maintenance:

"show discouraged" will list all of the statements with "is discouraged"
restrictions and their uses in the database (in case of discouraged
usage) or the number of steps (in case of a proof whose modification is
discouraged).  It is verbose and primarily intended to assist a script to
compare a modified database with an earlier version.  It will not be of
interest to most users.

MM> help show discouraged
Syntax:  SHOW DISCOURAGED

This command shows the usage and proof statistics for statements with
"(Proof modification is discouraged.)" and "(New usage is discouraged.)"
markup tags in their description comments.  The output is intended to be
used by scripts that compare a modified .mm file to a previous version.

MM> show discouraged
...
SHOW DISCOURAGED:  Proof modification of "tru2OLD" is discouraged (9 steps).
SHOW DISCOURAGED:  New usage of "tru2OLD" is discouraged (0 uses).
SHOW DISCOURAGED:  New usage of "ee22" is discouraged (2 uses).
SHOW DISCOURAGED:  "ee22" is used by "ee21".
SHOW DISCOURAGED:  "ee22" is used by "ee33".
...

"set discouragement off" will turn off the blocking of commands caused by
"...is discouraged" markup tags.  It does the equivalent of always
specifying "/override" on those commands.  It is intended as a
convenience during maintenance of a "discouraged" area of the database
that the user is very familiar with, such as the construction of complex
numbers.  It is not recommended for most users.

MM> help set discouragement
Syntax:  SET DISCOURAGEMENT OFF or SET DISCOURAGEMENT ON

By default this is set to ON, which means that statements whose
description comments have the markup tags "(New usage is discouraged.)"
or "(Proof modification is discouraged.)" will be blocked from usage or
proof modification.  When this setting is OFF, those actions are no
longer blocked.  This setting is intended only for the convenience of
advanced users who are intimately familiar with the database, for use
when maintaining "discouraged" statements.  SHOW SETTINGS will show you
the current value.

MM> set discouragement off
"(...is discouraged.)" markup tags are no longer honored.

>>> ?Warning: This setting is intended for advanced users only.  Please turn
>>> it back ON if you are not intimately familiar with this database.
MM> set discouragement on
"(...is discouraged.)" markup tags are now honored.

(10-Mar-2016) metamath program version 0.125
--------------------------------------------

The following changes were made:

1. A new qualifier, '/fast', was added to 'save proof' and 'show proof'.
   See the 9-Mar-2016 entry below for an application.
2. Long formulas are no longer wrapped by 'write source.../rewrap' but
   should be wrapped by hand to fit in less than 80 columns.  The
   wrapping was removed because a human can better judge where to split
   formulas for readability.  Comments and indentation are still
   reformatted as before.
3. Added space between adjacent "}" and "{" in the HTML output.
4. A bug in the /explicit/packed proof format was fixed.  See 'help save
   proof' for a list of all formats.
5. To reference a statement by statement number in 'show statement',
   'show proof', etc., prefix the number with "#".  For example, 'show
   statement #58' will show a1i.  This was added to assist program
   debugging but may occasionally be useful for other purposes.  The
   complete list of statement lookup formats is shown in 'help search'.

(9-Mar-2016) Procedure to change a variable name in a theorem
--------------------------------------------------------------

The metamath program has been updated in Version 0.125 (10-Mar-2016) with
a new qualifier, '/fast', that merely changes the proof format without
compressing or uncompressing the proof.  This makes format conversions
very fast for making database changes.  The format of the entire database
can be changed from /compressed to /explicit/packed, and vice-versa, in
about a minute each way.

The /explicit/packed format is described here:
  https://groups.google.com/d/msg/metamath/xCUNA2ttHew/RXSNzdovBAAJ
You can also look at 'help save proof' in the metamath program.

The basic rules we will be using are:

1. When the proofs are saved in explicit format, you can change $f and $e
   order.
2. When the proofs are saved in compressed format, you can change the
   name of a variable to another if the two variables have adjacent $f's
   (see the sketch below).
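To illustrate rule 2 (the labels here are made up for the example; in
practice the actual $f section near the top of set.mm is what matters),
"adjacent $f's" means consecutive declarations such as

  cP  $f class P $.
  cpl $f class .+ $.

Roughly speaking, as I understand it, this works because a compressed
proof refers to a theorem's mandatory hypotheses (including the
variables' $f's) by their order of appearance in the database; when the
two $f's are adjacent, renaming P to .+ throughout a theorem that uses
only one of them leaves that order unchanged, so the compressed proof
remains valid.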
=======
The conversion procedure:
=======

To retrofit the new symbol variables added to set.mm to your theorems,
for example changing "P" to ".+", you can use the following procedure.

First, save all proofs in explicit format:

  ./metamath set.mm
  MM> save proof */explicit/packed/fast
  MM> write source set.mm
  MM> exit

(Hint: 'save proof *' lists all proofs it is saving.  To suppress this
output, type "q" at the scrolling question after the first page.  It
will not really quit; instead, the proof saving will complete silently.)

Next, edit set.mm to place the $f for the "P" adjacent to (either
immediately before or immediately after) the $f for the ".+".  Then
resave the proofs in compressed format:

  ./metamath set.mm
  MM> save proof */compressed/fast
  MM> write source set.mm
  MM> exit

In your text editor, substitute P for .+ in the theorem you want to
change.  Make sure you include the $p and any $d and $e statements
associated with the theorem.  If the $d and $e statements affect other
theorems in the same block, you will also have to make the P to .+
substitution in those $p's as well.

You are now done with the change.  However, you probably want to restore
the original $f order to make your database compatible with the standard
set.mm.  You can postpone doing this until you have finished making all
of your variable name changes as above.

First, save all proofs in explicit format:

  ./metamath set.mm
  MM> save proof */explicit/packed/fast
  MM> write source set.mm
  MM> exit

Next, edit set.mm to restore the original $f order.  It may be easiest
just to copy and paste the $f section from the standard set.mm.

Finally, you probably want to save all proofs in compressed format since
the file size will be smaller and easier to work with:

  ./metamath set.mm
  MM> save proof */compressed/fast
  MM> write source set.mm
  MM> exit

(28-Feb-2015) Stefan O'Rear's notes on recent proofs
----------------------------------------------------

# New definitions

df-har: The Hartogs function, which restricts to cardinal successor on
initial ordinals.  Since the latter is somewhat important in higher set
theory, I expect this to get used a bit.  One question about the math
symbol: standard notation for ( har ` x ) is \aleph(x), but I didn't
want to overlap the math symbol used for df-aleph, so I left this as
Latin text for now.

df-nzr: Nonzero rings: A number of properties of linear independence
fail for the zero ring, so I gave a name to all others.

df-wdom: Weak dominance, i.e. dominance considered using onto functions
instead of 1-1 (see the note after this list).  The starred symbol seems
to be relatively standard.  I'm quite pleased with how ~ wdomd turned
out.

df-lindf, df-linds: Definition of a linearly independent family resp.
set of vectors in a module.  Was initially trying to do this with just
one definition but two seems to work much better with corner cases.
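(The note promised above, for readers unfamiliar with the starred
symbol: as I understand df-wdom, modulo sethood conditions the relation
can be characterized as
  ( X ~<_* Y <-> ( X = (/) \/ E. f f : Y -onto-> X ) )
i.e. X is weakly dominated by Y iff X is empty or is an onto image of Y.)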
# Highlights

hsmex: The class of sets hereditarily smaller than a given set exists; a
formalization of the proof in
http://math.boisestate.edu/~holmes/holmes/hereditary.pdf .  With AC this
is simpler as it follows from the existence of arbitrarily large regular
cardinals.  Intermediate steps use Hartogs numbers and onto functions
quite heavily, so df-har and df-wdom were added to support this; it also
uses iterated unions (ituni*) and order types (otos*), but the former is
not a standard concept and the latter has several definitional issues,
so both are temporary definitions for now.

marypha1, marypha2: P. Hall's marriage theorem, a surprisingly annoying
combinatorial result which is in the dependency chain for the vector
space dimension theorem.

kelac2, dfac21i: Recover the axiom of choice from Tychonoff's theorem
(which we don't have yet).  Required a number of additional results on
box-shaped subsets of cross products, such as boxriin and boxcuts.  In
an attempt to unify the empty and nonempty cases, we are now considering
"relative intersections" of the form ( A i^i |^| B ) and
( A i^i |^|_ x e. I B ); assuming that all elements of B are subsets of
A, we get the type-theoretic behavior that an empty intersection is the
domain of discourse of a particular structure, and not the class _V.
Several theorems are added to support this usage, and the related
( fi ` ( { A } u. B ) ).

lmisfree: There have been a number of recent questions about the
correctness of various definitions of basis.  This hopefully helps to
clarify the situation by proving my earlier conjecture that what we have
is exactly what is needed to witness an isomorphism onto a free module.
Notable intermediate steps here are islbs (splits our notion of basis
into the spanning and independence parts), islindf4 ("no nontrivial
representations of zero"), and lbslcic (only the cardinality of the
index set matters).

Along the way this required separating independent sets and families
from the previous notion of bases; a new "independent sets and families"
section contains the basic properties there.

domunfican: A cancellation law for cardinal arithmetic which came up in
marypha1 but may be of independent interest.

# Possible future directions

Probably a lot of polynomial stuff soon, with maybe a vector space
dimension theorem thrown in for good measure.  A rough priority order,
and less final the farther you go:

1. Fraction ring/field development; rational function fields
2. Recursive decomposition of polynomials
3. AA = ( CCfld IntgRing QQ ) and re-re-define _ZZ = ( CCfld IntgRing ZZ )
4. Relating integral elements to finitely generated modules.
5. Hilbert basis theorem
6. Integral closures are rings; aaaddcl/aamulcl.
7. Cowen-Engeler lemma (Schechter UF2); finite choice principles from
   ultrafilters
8. M. Hall's marriage theorem
9. The next natural property of independent sets: the exchange theorem
   and the finite dimension theorem for vector spaces
10. General dimension theorem from ultrafilters
11. Gauss' lemma on polynomials; ZZ = ( _ZZ i^i QQ )
12. Polynomial rings are UFDs

(17-Feb-2015) Mario Carneiro's notes on recent proofs
-----------------------------------------------------

Notes:

ifbothda: Common argumentation style for dealing with if, may be good to
know.

disjen, disjenex: When constructing the reals, we needed extra elements
not in CC for +oo and -oo, and for that purpose we used ~P U. CC,
~P ~P U. CC.  This works great if you only want a few new elements, but
for arbitrarily many elements this approach doesn't work.  So this
theorem generalizes this sort of construction to show that you can build
an arbitrarily large class of sets disjoint from a given base class A.

domss2, domssex2, domssex: One application of disjen is that you can
turn any injection F : A -1-1-> B around into G : B -1-1-onto-> C where
A C_ C, and which is the identity on elements of A.  I'm thinking about
taking advantage of this in the field extensions for Cp, since this way
you don't need a canonical injection into the extension but can actually
build the extension around the original field so that it is literally a
subfield of the extension.  It could also be used, if desired, as a
means of building compatible extensions in the construction of the reals
(so that om = NN0).

ghmker: the kernel of a group homomorphism is a normal subgroup

crngpropd, subrgpropd, lmodpropd, lsspropd, lsppropd, assapropd:
property theorems

tgcnp, tgcn, subbascn: checking continuity on a subbase or basis for a
topology

ptval: Product topology.  Important theorems are pttop, ptuni, ptbasfi,
ptpjcn, ptcnp, ptcn.

pt1hmeo, ptunhmeo: combining these judiciously allows you to show that
indexed product topologies are homeomorphic to iterated binary product
topologies, which for example can be used to prove ptcmpfi, which is
basically Tychonoff for finite index sets.

metdsval, metds0, metdseq0, metdscn: Properties of the function d(x,A)
which gives the distance from a point to a nonempty set.

lebnum, lebnumii: The Lebesgue number lemma, a nice result about open
covers in a metric space and a key step in the proof of the covering map
lifting theorem.

df-pi1: Converted to structure builders, eliminated df-pi1b

pcopt2, pcorev2: commuted versions of pcopt, pcorev.
Took me a while to realize that left identity -> right identity and left
inverse -> right inverse does not actually follow from the other proofs,
since at this level we have only a groupoid, not a group.

pi1xfr: A path induces a group isomorphism on fundamental groups.

df-dv: Added a second argument to _D for the ambient space.  There is a
drop-in replacement from ( _D ` F ) to ( RR _D F ), and ( CC _D F ) is
the complex derivative of F.  The left argument can be any subset of CC,
but since the derivative is not a function if S has isolated points, I
restrict this to S e. { RR , CC } for the primary convenience theorems.

df-lgs: The Legendre symbol, a tour-de-force of something interesting
coming out of a ridiculous amount of case analysis.  This definition is
actually the Kronecker symbol, which extends the Jacobi symbol which
extends the Legendre symbol to all integers.  The main theorem to be
done in this area is of course the law of quadratic reciprocity, but
currently I'm stuck proving Euler's criterion which is waiting on
polynomials over Z/pZ.  So far the basic theorems prove that
( A /L N ) e. {-1,0,1}, ( A /L N ) =/= 0 iff A,N are coprime, and it is
distributive under multiplication in both arguments.

kur14: The Kuratowski closure-complement theorem, which I mentioned in
another email.

df-pcon, df-scon: Path connected spaces and simply connected spaces.  I
hope to use SCon to prove some kind of Cauchy integral theorem, but
we'll see.

txpcon, ptpcon: products of path-connected spaces are path-connected
(ptpcon is actually an AC-equivalent)

pconpi1: The fundamental groups of a path-connected space are isomorphic

sconpi1: A space is simply connected iff its fundamental group is
trivial

cvxpcon, cvxscon: a convex subset of CC is simply connected

blcvx, blscon: a disk in CC is convex and simply connected

df-cvm: Definition of a covering map.

cvmlift: The Path Lifting Theorem for covering maps

df-rpm, df-ufd: define a prime element of a ring and a UFD

psr1val: Basic theorems for univariate polynomials

(9-Jan-2015) mpbi*an* hypothesis order change
---------------------------------------------

At the suggestion of a couple of people, I changed the hypothesis order
in mpbi*an* (7 theorems) so that the major hypothesis now occurs last
instead of first, in order to make them less annoying to use.  The
theorems changed were:

  mpbiran mpbiran2 mpbir2an mpbi2and mpbir2and mpbir3an mpbir3and

This change affects over 1000 proofs.  The old versions are still there
suffixed with "OLD" and will remain for 1 year.

To update your mathbox etc.:

1. Make sure your mathbox is compatible with the set.mm just prior to
   this change, temporarily available here:
   http://us2.metamath.org:88/metamath/set.mm.2015-01-08.bz2
2. In a text editor, suffix all references to mpbi*an* in your proofs
   with OLD (e.g. mpbiran to mpbiranOLD).  This will make your proofs
   compatible with the current set.mm.
3. Update the current set.mm with your mathbox.
4. For each proof using mpbi*an*OLD, do the following in the metamath
   program:

  ./metamath set.mm
  ...
  MM> prove abc
  MM-PA> minimize mpbi*an*/except *OLD/allow_growth
  MM-PA> save new_proof/compressed
  MM-PA> exit
  ...
  MM> write source set.mm
  MM> exit
(11-Jun-2014) Mario Carneiro's revisions
----------------------------------------

Mario Carneiro finished a major revision of set.mm.  Here are his notes:

syl3anbrc, mpbir3and: simple logic stuff

fvunsn: eliminate D e. _V hypothesis

caoprcld, caoprassg, caoprcomg: deduction form

winafpi: dedth demonstration

avglt1, avglt2, avgle1, avgle2: average ordering theorem collection

peano2fzr: recurring lemma

fzfid: very common use due to deduction-form sum theorems

seqcl2 thru seq1p: deduction form

sercaopr2: the 'correct' general form of sercaopr

seqz: absorbing element in a sequence (note that there are now theorems
seq1 and seqz, since the old tokens are gone)

cseq1 thru ser1add, seq1shftid, seqzm1 thru ser0p1i: deleted - all the
theorems in this section have equivalents in the new seq section,
although it is often a 4-5 to 1 mapping, since the deduction framework
makes it easier to have ease of use and full generality at the same time

df-exp: revised to include negative integer exponents.  The theorems for
nonnegative exponents are exactly the same as before, but for negative
exponents ( A ^ -u N ) we need the assumption A =/= 0 so that we don't
divide by zero.  Relevant new theorems are:

  expneg, expneg2, expnegz, expn1: definition of ( A ^ -u N ) is
    1 / ( A ^ N )
  expcl2lem: closure under positive and negative exponents
  rpexpcl: the only existing closure theorem that was changed as a result
  reexpclz, expclz: closure for reals and complexes to integer
    exponents, when the argument is nonzero
  1exp, expne0i, expgt0, expm1, expsub, expordi thru expword2i:
    generalized to negative exponents
  mulexpz, expaddz: the main "hard" theorems in this section,
    generalizing the exponent addition laws to integers.  The main
    reason to keep both the new versions and the old (mulexp, expadd) is
    because the new theorems generalize the exponent at the cost of the
    extra hypothesis A =/= 0.

discr, discr1: deduction form, generalized to all A e. RR, shortened
proof

rexanuz: the new upper integer form of cvganz, and used as the basis for
most other manipulations on upper integer sets to replace cau* and cvg*
theorems

rexfiuz: finite set generalization of rexanuz

rexuz3: turn an upper integer quantifier E. j e. ZZ A. k e. (ZZ>= ` j)
into a restricted upper integer quantifier E. j e. NN A. k e. (ZZ>= ` j)

rexanuz2: restricted upper integer quantifier form of rexanuz

r19.29uz: looks sort of like r19.29, but for upper integer quantifiers

r19.2uz: sort of like r19.2z

cau3lem: useful on its own for being sufficiently general for both real
cauchy sequences and cauchy sequences in metric spaces

cau3: convert a cauchy sequence from two-quantifier form into
three-quantifier form (this last one is useful because it is compatible
with rexanuz2)

cau4: use cau3 to change the base of the cauchy sequence definition

caubnd2: a cauchy sequence is eventually bounded.  This is sufficient
for most proofs, but I went ahead and proved the original form caubnd as
well

caurei, cauimi, ser1absdifi: deleted since they weren't being used and
there wasn't a good reason to reformulate them in view of the much more
general theorems to come

bcval5: write out the numerator of a binomial coefficient as a sequence
with arbitrary start and end

bcn2: N choose 2

fz1iso: any finite ordered set (in particular, a finite set of integers)
is isomorphic to a one-based finite sequence

seqcoll: lemma for soundness of df-sum, stated for general sequences

clim thru climcn2: deduction form

addcn2, subcn2, mulcn2: the old approach was a bit backward, using a
direct proof of continuity to prove that addition etc. is sequentially
continuous, then using sequential continuity to prove continuity through
bopcn, whose proof requires CC.
To avoid this, now we prove continuity directly (writing out the
definition since the other continuity theorems aren't ready yet) and use
this to prove that limits are preserved in climadd etc.

climadd thru serf0: deduction form

df-sum: a new much more general definition of summation, allowing the
index set to be either a finite set or a lower bounded subset of the
integers

sum2id: assume the argument to a sum is a set

sumfc: change bound variable

summo: a 'big' theorem, proving that the new definition is well-defined

zsum, isum, fsum: the spiritual equivalents to the old dffsum, dfisum,
showing the definition on upper integer subsets, upper integer sets, and
nonempty finite sets, respectively.

sum0: sum of the empty set is zero

sumz: sum of zero on any summable index set is zero

fsumf1o: finite sum is unchanged under a bijection

sumss, fsumss, sumss2: add zeroes to a finite sum to enlarge the index
set

fsumcvg, fsumcvg2, fsumcvg3: a finite sum is convergent (useful for
treating a finite sum using infinite sum theorems)

fsumsers: sum over a subset of a finite sequence

fsumser: sum over a finite sequence (this is the most direct equivalent
to the old dffsum)

fsumcl2lem: a set that is closed under addition is closed under nonempty
finite sums

fsumcllem: a set containing zero and closed under addition is closed
under finite sums

fsumcl, fsumrecl, fsumrpcl: closure under reals, complexes, positive
reals

sumsn, sumsns: sum over a singleton

fsumm1: break off the last term (this is more general than fsump1,
because this includes the case where the smaller sum is empty)

isumclim, isumclim2: relation between infinite series and convergent
sequences

sumsplit: generalized for subsets of the integers (but not that useful
in practice)

fsum2d, fsumcom2: generalization of fsumcom to sum over non-rectangular
regions

fsumxp: sum over a cross product (this theorem has no equivalent under
the old system)

abscvgcvg: absolutely convergent implies convergent

binom: shortened the proof

divcnv: generalized reccnv for convenience

arisum: now adds up 1...N instead of 0...N

expcnv: shortened the proof

geoser: now sums 0...N-1 since it makes the formula nicer

cvgrat: shortened the proof

fsum0diag: shortened the proof (corollary of fsumcom2)

mertens: generalization of the old proof of efadd

elcncf2: commuted arguments to elcncf

cncfco: composition of continuous functions

ivth, ivth2, ivthle2: shortened the proof (the 'le' version allows the
value to be equal to one of the endpoints)

df-tan, df-pi: revised to use shorter dummy-free expressions

efcllem, efge2le3, efadd, eftlub, eirr, efcn, reeff1o: shortened the
proof

demoivre: swapped with *ALT version and extend to negative exponents

acdc3lem thru acdcALT: deleted (use axdc* theorems)

ruc: shortened the proof (yes, this is the second time I've shortened
ruc.  This time I followed the same approach as the original ruc, rather
than the alternative proof via rpnnen.)

dvdseq: simplifies some divisibility proofs

bezout: imported from my mathbox

dvdsgcd, mulgcd: new proof from bezout

algrf thru eucalg: deduction form

df-pc: imported prime count function from my mathbox

1arith: new proof using the prime count function.
This proof has a considerably different statement from the original
proof, so it is perhaps debatable whether this is still the "fundamental
theorem of arithmetic", but this is the easiest to prove given the tools
already available, and it is also easier to use in future proofs
(although the more direct statements pc11 and pcprod are probably more
directly applicable).

dscmet, dscopn: the discrete metric generates the discrete topology

cnmptid thru cnmptcom: This new set of theorems, with names cnmpt*, is
designed for use in quickly building up continuous functions expressed
in the mapping notation.  The naming convention takes the first number
to be the number of arguments to the mapping function, and the second
number is the number of arguments to the function that is applied at the
top level.  If the function that is applied is an atomic operation, a *f
is suffixed.  The "base case" continuous functions are given by cnmptc,
cnmptid (for constants and the identity), and cnmpt1st, cnmpt2nd for
two-argument mapping functions.  sincn or ipcn provide a good
demonstration.

df-lm: the definition has been changed to represent convergence with
respect to a topology, rather than a metric space.  The old definition
can be recovered as ( ~~>m ` D ) = ( ~~t ` ( MetOpen ` D ) ), although
the functions are now required to be partial functions on CC rather than
just subsets of ( X X. CC ).

df-cau: the definition has been abbreviated, and the functions are again
required to be partial functions on CC.

df-cmet: the definition has been abbreviated

lmbr, lmbr2, lmbrf: these have the same names as the old versions, but
now apply to topological convergence; the metric convergence equivalents
are now called lmmbr, lmmbr2, lmmbrf.

lmbr thru iscauf: deduction form

lmmo: was lmuni, now applies to Hausdorff spaces instead of metric
spaces

lmcls, lmcld: was previously part of metelcls, but this direction
doesn't need choice or metric spaces, so it is now separate

lmcnp, lmcn, lmcn2: continuous functions preserve limits

metdcn: a metric is a continuous function in its topology

addcn, subcn, mulcn: modified to use topological product

fsumcn, fsum2cn: adjusted for compatibility with cnmpt*

expcn, divccn, sqcn: powers and division by a constant are continuous
functions (there is still no proof that division is continuous in the
second argument, but we haven't needed it yet).
isgrp2d: deduction form of isgrp2i

ghgrp, ghablo: deduction form of ghgrpi

ghsubg, ghsubablo: deduction form of ghsubgi

vacn, smcn, ipcn: generalized one-argument continuity proofs to joint
continuity (and shortened the proof)

minvec: shortened the proof

sincn, coscn: shortened the proof

pilem*: shortened the proof

efhalfpi, efipi, sinq12ge0, cosq14ge0: more trig theorems

sineq0: combined the old sineq0 with sineq0re and sinkpi

coseq1, efeq1: similar to sineq0

cosord: generalize cosh111 to [0, pi]

recosf1o, resinf1o: useful for defining arcsin, arccos

efif1o, eff1o: shortened the proof

df-log: abbreviated definition, changed principal domain to (-pi, pi]
instead of [-pi, pi) - the result is that now log(-1) = i pi rather than
-i pi, in keeping with standard principal value definition

relogrn, logrncn: closure for ran log

logneg, logm1: log of a negative number

explog, reexplog, relogexp: generalized to negative integer powers

df-hlim, df-hcau: abbreviated definition

hvsubass, hvsub32, his35: vector identities

hlimadd: limit of a sequence of vector sums

occon3: generalize chsscon2i and prove without CC

shuni: generalize chocunii to any subspaces with trivial intersection

pjthmo: the uniqueness part of pjth, which needs no choice

occl: shortened the proof

pjth: shortened the proof (from 45 lemmas to 2!) using minvec

pjpreeq, pjpjpre: a weak version of pjeq that allows some usage of the
projection operator without assuming the projection theorem

chscl: the majority of the proof of osum is now here; this proof does
not need CC even though osum does

hhcno, hhcnf: relate ConOp and ConFn to the topology Cn predicate

imaelshi, rnelshi: the image of a subspace under a linear function is a
subspace

hmopidmchi, hmopidmpji: the class abstraction
{ x e. ~H | ( T ` x ) = x } is the same as the range of T

- My mathbox -

bclbnd: deleted because it is obsolete

efexple: generalized to N e. ZZ

- Scott Fenton's mathbox -

sinccvg: (named for the sinc function: sinc x = sin x / x) shortened the
proof

trirecip: shortened the proof

df-bpoly: abbreviated definition and eliminated the if statement, since
now you can sum the empty set

bpolydif: shortened the proof

- FL's mathbox -

cmpbvb, fopab2ga, fopab2a, cmpran, riecb, riemtn, fopabco3, dffn5b:
deleted as duplicates

df-pro, df-prj: mapping definition

ispr1, prmapcp2, valpr, isprj1, isprj2: changed P e. Q to P e. V

prmapcp3: corollary of prmapcp2

hbcp: imported as hbixp1

cbicp: shortened the proof

iserzmulc1b thru seq0p1g: deleted since seqz is gone (there are already
equivalents for seq)

df-prod (prod3_ token): inlined into the new df-prod (previously
df-prod2), since it's not that helpful to have two definitions that are
so similar (I don't really see the point of this definition to begin
with - it's very similar in scope to what seq does.  It's not possible
to give it a finite sets definition like the new sum_ , since it is
defined on arbitrary magmas, so it reduces to basically the same as seq,
except that it is defined on the empty set as well.  I thought about
moving this definition to seq as well, say by having
( seq M ( P , F ) ` ( M - 1 ) ) = ( Id ` P ), but this makes seqfn a bit
asymmetrical and it can't really be used to generalize things like
seqsplit because you'd want the original version anyway, and the sethood
requirement is usually a distraction anyway.)
prod0: renamed from valproemset (another indication that the extension
to the empty set is not that useful is that this theorem is never used)

prodeq2, prodeq3: exchanged these labels for consistency with ordering
in the token and with prodeq2d, prodeq3d

prodeqfv thru fprod2: miscellaneous updates for the modified definition

clfsebs, fincmpzer, fprodadd, isppm, seqzp2, fprodneg, fprodsub:
shortened the proof

df-expsg, df-mmat: these definitions are never used, but I updated them
to use seq and prod_ in place of seq1 and prod3_

cmphmp, idhme, cnvhmph, hmphsyma, hmphre, hmeogrp, homcard, eqindhome:
shortened the proof

eltpt: imported

ttcn, ttcn2: use cnmpt* instead

exopcopn, topgrpsubcn, trhom, ltrhom, cntrset: shortened the proof

uncon: this is a special case of iuncon - any collection of connected
subsets that share a point is connected

- Jeff Madsen's mathbox -

acdcg, acdc3g, acdc5g: I deleted these as part of the acdc* cleanup, but
the existing axdc* theorems assume the base set is a set in the
hypotheses - if this becomes an issue later, these theorems should be
updated, rather than making *g theorems.  As it is, there is no need for
them yet.

sdc: shortened the proof

seq1eq2: seq1 is going away, use seqfveq

fsumlt, fsumltisum, fsumleisum: generalized to finite sets and imported

csbrn, trirn, mettrifi: shortened the proof

mettrifi2: this is basically the same as the new mettrifi

geomcau, caushft, caures: deduction form, shortened the proof

metdcn: imported and generalized to joint continuity

cnmptre, iirevcn, iihalf1cn, iihalf2cn, iimulcn: continuity base cases
for path homotopy proofs

elii1, elii2: commonly used lemma in path homotopy proofs

cncfco: imported

cnimass: same as elsubsp2

cnres: same as elsubsp

cnmpt2pc: generalized piececn to two-arg functions and adjusted for
compatibility with cnmpt*

ishomeo2: this is the same as the new ishomeo

hmeocn: corollary of hmeobc

hmeocnv: the same as cnvhmpha

ctlm thru lmtlm: deleted/imported this whole section because this is
essentially the same as the new topological limit relation, and all
theorems here are already represented in main set.mm now

txcnoprab: the same as cnmpt2t

txsubsp: imported

cnresoprab thru cnoprab2c: this set of theorems is very similar to the
new cnmpt* section (and indeed inspired that section), so they are
deleted as duplicates

txmet, txcc: imported

addcntx thru mulcntx: the same as the new addcn thru mulcn

bfp: shortened the proof

df-rrn, rrnval: use mapping notation

rrnmet, rrncms: shortened the proof

df-phtpy, df-phtpc: use mapping notation

phtpyfval thru phtpyco: deduction form, shortened the proof

isphtpc2: merge with isphtpc

reparpht: shortened the proof

df-pco: use mapping notation, also extend the domain to functions that
may not line up at the endpoints, for simplicity of definition (it
doesn't affect pi1gp, because that is restricted further to functions
with the same start and end point anyway)

pcofval thru pco1: deduction form

pcocn, pcohtpy, pcopt, pcoass, pcorev: shortened the proof

pi1fval thru pi1val: deduction form

pi1gp: shortened the proof

(4-Jan-2014) Wide text editor windows
-------------------------------------

Some people like to work with text editor windows wider than the 79
characters per line convention for set.mm.  The following commands:

  MM> set width 9999
  MM> save proof */compressed
  MM> set width 120
  MM> write source set.mm/rewrap

will put each proof on a single line and will wrap all $a and $p
descriptions at column 120.
If your text editor truncates the display of lines beyond the end of
screen, having each proof on a single line will reduce scrolling.  (Note
that 'save proof */compressed' takes 5-10 minutes depending on CPU
speed.)

Repeat the above with 'set width 79' (the default width) to restore the
conventional widths.

Note that comments outside of $a and $p descriptions, such as section
headers, are not affected by 'write source.../rewrap'.  The math
formulas in $a and $p statements are also unchanged, since breaking and
indentation may have been chosen to emphasize the structure of the
formula (exception: formulas exceeding the 'set width' are wrapped, but
not unwrapped with a larger 'set width').

(2-Dec-2013) Class substitution
-------------------------------

Has the time come to remove the "A e. _V" restriction on uses of df-sbc?
See the comments under df-sbc for the issues.

So far we have been uncommitted about the behavior at proper classes so
as not to conflict with Quine.  To do this, we have prohibited the
direct use of df-sbc and instead only permitted use of dfsbcq.  However,
this has become inconvenient, requiring annoying sethood justifications
for theorems using df-sbc and df-csb that make proofs longer.

There is no theoretical reason not to define it any way we want for
proper classes, but there are two possible ways that we must choose
from.  We could allow the direct use of df-sbc (always false for proper
classes), which means for example that sbth would still require the
"A e. V" antecedent.  Or we could redefine it as in sbc6g (always true
for proper classes), which means some other kinds of theorems would
require the antecedent.  I'm not sure which is more advantageous, but
I'm inclined to choose df-sbc which seems more natural.  Any opinions
are welcome.

(1-Dec-2013) Definite description binder
----------------------------------------

I changed the symbol for the definite description binder in df-iota and
df-riota to an inverted iota, which was used by Russell and seems more
standard than regular iota.  It is analogous to the inverted A and E for
"for all" and "exists".

I left the tokens "iota" and "iota_" alone for now, although they should
be changed since it's no longer a true iota.  Ideally it should be
changed to "i." in analogy with "A." and "E.", but "i." is already used
for the imaginary unit.  "ii" (inverted iota) is one possibility.  Or
maybe "io." since we use the first two letters, like "ph", for Greek.
Suggestions are welcome.

(8-Oct-2013) Proof repair techniques
------------------------------------

If you discover you have been going down a wrong path while creating a
proof, in some cases there are techniques to help salvage parts that are
already proved so they don't have to be typed in again.  Let me know if
there is a useful technique you use, and I'll post it here.

(1) Jeff Hankins described the following technique he used:

Here's a cool trick I found useful today.  The theorem dummylink can
"save" people from incorrect theorem assignments.  Here's what I mean.
I had used the theorem rcla42ev, but Metamath said that there was a
disjoint variable violation, so I would have to delete that step.
However, I had a decent-size proof under the incorrect step and I did
not want to lose all that hard work.  Here's what I did: I took the last
step under rcla42ev and assigned it to a dummylink.  I improved the
statement under the dummylink to save that part of the proof, then I
deleted the rcla42ev and cleaned up the disjoint variable stuff.
Once I got to the part which I had the proof for already, I was able to improve it because the proof was in the dummylink. After that, I deleted the use of dummylink and continued on as normal. I still had to do all the annoying technical rcla4e substitution and bound variable steps, but at least I didn't have to do all the exponential and "if N is a natural number, N-1 is a nonnegative integer" stuff all over again. Indeed, dummylink is probably underused as a tool to assist proof building. Its description in set.mm contains one suggested method for how it can be used. When the proof is complete, 'minimize_with' will automatically reformat the proof to strip out the dummylink uses. (2) I often use a quick-and-dirty script called extractproof.cmd that creates a script to reproduce a proof. extractproof.cmd contains the following: set width 9999 open log 1.tmp show new_proof close log set width 79 tools match 1.tmp $ y clean 1.tmp ber substitute 1.tmp ' ' '%^' 1 '' add 1.tmp ! '' delete 1.tmp ^ = delete 1.tmp " " "" !substitute 1.tmp obs "" a "" substitute 1.tmp '%' '%assign last ' 1 '' add 1.tmp "" " /no_unify" reverse 1.tmp substitute 1.tmp % \n a '' quit Here is an example of its use. First, we extract the proof: MM> prove a1i Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit. You will be working on statement (from "SHOW STATEMENT a1i"): 53 a1i.1 $e |- ph $. 54 a1i $p |- ( ps -> ph ) $= ... $. Note: The proof you are starting with is already complete. MM-PA> submit extractproof.cmd/silent The generated script 1.tmp will contain these lines: !9 assign last ax-mp /no_unify !8 assign last ax-1 /no_unify !5 assign last a1i.1 /no_unify The step number comment corresponds to the 'show new_proof' step. The /no_unify prevents interactive unification while the script is running. Now let's test the generated script: MM-PA> delete all The entire proof was deleted. 1 a1i=? $? |- ( ps -> ph ) MM-PA> submit 1.tmp/silent MM-PA> unify all/interactive No new unifications were made. MM-PA> improve all A proof of length 1 was found for step 5. A proof of length 1 was found for step 4. A proof of length 3 was found for step 2. A proof of length 1 was found for step 1. Steps 1 and above have been renumbered. CONGRATULATIONS! The proof is complete. For proof repair, I often extract pieces of the generated script to automatically reprove sections that are correct. With experience, I've learned a few tricks to deal with several problems. For example, if the proof had an unknown step when the script was generated, the script will contain 'assign last ?', which must be manually deleted from the generated script, and all subsequent 'assign last' must be changed to 'assign -1'. (5-Oct-2013) Improvements in extensible structure utility theorems ------------------------------------------------------------------ I revamped the utility theorems used for extensible structures. The large collection of general structure theorems in the "Extensible structures" section of set.mm have been reduced to just four: strfvn (normally not used), ndxarg, ndxid, and strfv. I added comments to these to assist understanding their purpose. Before, it was awkward to work with structures with more than 3 components, since the number of utility theorems was O(N^2) where N is the number of structure components. In particular, the old O(N^2) strNvM theorems have become the single theorem strfv. To achieve this, strfv has a hypothesis requiring that the _specific_ structure S be a function. 
Note that, as before, any particular member of say Grp (df-grp) needn't be a
function as long as it has the required values under our df-fv definition.
(This "opening up" of Grp to possible non-functions dramatically simplifies
many extensible structure theorems and causes no theoretical problem.) So,
strfv can't be used with an arbitrary member of Grp. But practically
speaking, we will not use _specific_ structures that aren't functions, and
this limitation (which conforms to the literature in any case) does not
cause a problem.

Extending a structure with a new component (such as groups to rings) is now
O(N), requiring one new theorem per structure component (rngbase, rngplusg,
rngmulr). In particular, the new theorem fnunsn can be used to add a
component to a previous structure. Notice its use in building a ring
structure from a group structure in rngfn. To prove that all structure
indices are different, I now successively show each member is not in the
previous set of members rather than having O(N^2) inequalities.

The largest application of this so far is in phllem2 in my mathbox, used to
prove phlfn with 8 components starting from lvecfn with 6 components, which
in turn comes from rngfn with 3 components. This shows that an 8-component
structure is now practical to work with.

(8-Sep-2013) New df-seq (sequence generator) and df-sqr (square root)
---------------------------------------------------------------------

Mario Carneiro has updated the sequence generator with the definition,

  df-seq $a |- seq M ( P , F ) = ( rec ( ( x e. _V , y e. _V |->
      <. ( x + 1 ) , ( y P ( F ` ( x + 1 ) ) ) >. ) ,
      <. M , ( F ` M ) >. ) " om ) $.

This rec() is a function on all ordinals 0o, 1o, 2o,... consisting of the
ordered pairs

  { <. 0o , <. M , ( F ` M ) >. >. ,
    <. 1o , <. ( M + 1 ) , ( F ` ( M + 1 ) ) >. >. ,
    <. 2o , <. ( M + 2 ) , ( F ` ( M + 2 ) ) >. >. , ... }

When restricted to the finite ordinals (omega), its range is exactly the
sequence we want:

  { <. M , ( F ` M ) >. ,
    <. ( M + 1 ) , ( F ` ( M + 1 ) ) >. ,
    <. ( M + 2 ) , ( F ` ( M + 2 ) ) >. , ... }

So, we just extract the range and throw away the domain. This is much
simpler than the equivalent triple of previous definitions for the same
thing:

  df-seq1 $a |- seq1 = { <. <. f , g >. , h >. | h = { <. x , y >. |
      ( x e. NN /\ y = ( 2nd ` ( ( rec ( { <. z , w >. | w =
      <. ( ( 1st ` z ) + 1 ) , ( ( 2nd ` z ) f ( g ` ( ( 1st ` z ) + 1 ) ) )
      >. } , <. 1 , ( g ` 1 ) >. ) o. `' ( rec ( { <. z , w >. | w =
      ( z + 1 ) } , 1 ) |` om ) ) ` x ) ) ) } } $.

  df-shft $a |- shift = { <. <. f , x >. , g >. | g = { <. y , z >. |
      ( y e. CC /\ z = ( f ` ( y - x ) ) ) } } $.

  df-seqz $a |- seq = { <. <. x , g >. , h >. | h = ( ( ( ( 2nd ` x ) seq1
      ( g shift ( 1 - ( 1st ` x ) ) ) ) shift ( ( 1st ` x ) - 1 ) ) |`
      { k e. ZZ | ( 1st ` x ) <_ k } ) } $.

The new seq has arguments, unlike the old which was a constant class symbol.
This allows proper classes for its arguments (in particular P and F). The
"M" argument is visually separated from the other two since it acts more
like a parameter; it could correspond to a subscript in a textbook e.g.
"seq_M(P,F)" in LaTeX.

Usually I prefer to have new definitions in the form of new class constant
symbols (thus requiring no new equality, hb*, etc. theorems), but we made
this an exception since Mario wants to use proper classes for P and F, and
also the old seq was rather awkward looking:
  old:  ( <. M , P >. seq F )
  new:  seq M ( P , F )

The relationship to the old seq0, seq1, and seq (now seqz) is given by the
following theorems:

  seq0fval $p |- ( S seq0 F ) = seq 0 ( S , F )
  seq1fval $p |- ( S seq1 F ) = seq 1 ( S , F )
  seqzfval $p |- ( M e. V -> ( <. M , S >. seqz F ) = seq M ( S , F ) )

Eventually, these 3 can be phased out.

Mario also extended the domain of the square root function to include all
of CC. The result is still uniquely defined, and corresponds to the
principal value described at
http://en.wikipedia.org/wiki/Square_root#Principal_square_root_of_a_complex_number

  df-sqr $a |- sqr = ( x e. CC |-> ( iota_ y e. CC ( ( y ^ 2 ) = x
      /\ 0 <_ ( Re ` y ) /\ ( _i x. y ) e/ RR+ ) ) ) $.

Many existing theorems were affected by these changes; see the list at the
top of set.mm for 8-Sep-2013.

(3-Jun-2013) Adding or deleting antecedents in a theorem
--------------------------------------------------------

Sometimes I might not initially know that a certain antecedent is required
for a proof, but discover it only when I'm deep into the proof. Other times,
I may have redundant antecedents that are not needed by the proof. And,
sometimes I just want to rearrange the antecedents for better appearance or
easier use later on. In all these cases, it is annoying and time-consuming
to have to re-enter the proof from scratch to account for a modified
conjunction of antecedents.

To make the task of editing antecedents easier, I use a submit script called
"unlink-ant.cmd" which is listed below at the end of this note.

This method has a limitation: it assumes all antecedent linkages are done
via simp*, which will occur when we chain to referenced theorems via syl*anc
as described in the mmnotes.txt entry below of 19-Mar-2012. In other
situations, additional manual step deletion may be required, or it might be
possible to enhance the script.

As an example, we will use divdivdiv, which was described in the 19-Mar-2012
entry below. You can follow these steps to test the script.

  MM> read set.mm
  MM> prove divdivdiv
  ...

If a proof is incomplete, make sure it is in the state left by 'unify
all/interactive' then 'improve all'. From inside the Proof Assistant, run
the script to delete all steps linking antecedents:

  MM-PA> submit unlink-ant.cmd/silent

At this point, the proof will have the antecedent linkages stripped.
Replace the original proof in set.mm with this stripped-down proof, either
in an editor or with 'save new_proof' then 'write source'.

MM-PA> show new_proof/normal
Proof of "divdivdiv":
---------Clip out the proof below this line to put it in the source file:
? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? divcl
syl111anc ? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? mulcom syl11anc ? ? ? ?
? ? ? ? ? ? ? ? ? ? divmuldiv syl22anc eqtrd opreq2d ? ? ? ? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? divmuldiv syl22anc ? ? ? ? ? ? ? ? ? ?
? ? ? mulcom syl11anc opreq1d ? ? ? ? ? ? ? ? ? ? ? ? mulcl syl11anc ? ? ?
? ? ? ? ? ? ? ? ? mulne0 syl22anc ? divid syl11anc 3eqtrd opreq1d ? ? ? ? ?
? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? ? ? ? ? ? ? ? ? divcl syl111anc
? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? ? mulass syl111anc ? ? ? ? ? ? ? ?
? ? ? ? ? divcl syl111anc ? mulid2 syl 3eqtr3d eqtr3d ? ? ? ? ? ? ? ? ? ?
? ? ? ? ? ? divcl syl111anc ? ? ? ? ? ? ? ? ? ? ? ? ? mulcl syl11anc ? ? ?
? ? ? ? ? mulcl syl11anc ? ? ? ? ? ? ? mulne0 ad2ant2lr ? ? divcl syl111anc
? ? ? ? ? ? ? ? ? ? divcl syl111anc ? ? ? ? ? divne0 adantl ? ? ?
divmul syl112anc mpbird $.
---------The proof of "divdivdiv" to clip out ends above this line.
MM-PA>

You can now edit set.mm to rearrange, add, or delete antecedents. When
done, go back into MM-PA:

  MM-PA> unify all/interactive
  MM-PA> improve all
  ...
  CONGRATULATIONS! The proof is complete.

Note that the file 3.tmp produced by the script will contain a list of all
antecedents that are needed. This is useful for determining whether the
theorem has unused ones. (The list may be slightly redundant, as the ones
with /\ below show.)

  MM-PA> more 3.tmp
  ( B e. CC /\ B =/= 0 )
  ( C e. CC /\ C =/= 0 )
  ( D e. CC /\ D =/= 0 )
  A e. CC
  B =/= 0
  B e. CC
  C =/= 0
  C e. CC
  D =/= 0
  D e. CC

The file unlink-ant.cmd is listed below.

! unlink-ant.cmd - delete all steps linking antecedents to proof
! Run this at the MM-PA prompt with 'submit unlink-ant.cmd/silent' on a
! complete or incomplete proof. Make sure the starting proof is in
! the state left by 'unify all/interactive' then 'improve all'. Output:
! 1. The list of required antecedents is contained in 3.tmp.
! 2. The proof shown by 'show new_proof/normal' will have antecedent
!    linkages removed, so that the antecedent conjunction can be
!    edited (antecedents added, deleted, or reorganized). Later,
!    'unify all/interactive' then 'improve all' will reconnect them.
! 3. The original set.mm is saved in 2.tmp in case something goes wrong.
! Temporary files used: 1.tmp, 2.tmp, 3.tmp, 4.tmp.
! We assume all antecedent linkages are done via simp*, which are normally
! chained to a referenced theorem via syl*anc (see mmnotes.txt entry
! of 19-Mar-2012). In other situations, additional manual step deletion
! may be required.
save new_proof/compressed
write source 2.tmp
set width 9999
open log 1.tmp
show new_proof
close log
set width 79
tools
copy 1.tmp 3.tmp
! prevent simpld, simprd deletion
substitute 1.tmp '=simpld' '=ximpld' a ''
substitute 1.tmp '=simprd' '=ximprd' a ''
! extract the steps to be deleted into 1.tmp
match 1.tmp '=simp' ''
clean 1.tmp ber
delete 1.tmp ' ' ''
add 1.tmp 'delete step ' ''
reverse 1.tmp
! get list of all required antecedents into 3.tmp
copy 3.tmp 4.tmp
match 3.tmp '=simp' ''
match 4.tmp '=? ' ''
copy 3.tmp,4.tmp 3.tmp
delete 3.tmp '' ' ->'
unduplicate 3.tmp
clean 3.tmp ber
add 3.tmp '' '$'
substitute 3.tmp ' )$' '' 1 ''
quit
submit 1.tmp
delete floating_hypotheses
!(end of unlink-ant.cmd)

-------------------------------------------------------------------------------

(22-May-2013) New metamath program features
-------------------------------------------

1. A /FORBID qualifier was added to MINIMIZE_WITH. Stronger than /EXCEPT,
it will also exclude any statements that _depend_ on the statements in the
list (based on the algorithm for SHOW TRACE_BACK). For example,

  MM-PA> MINIMIZE_WITH * /FORBID ax-inf*,ax-ac

will prevent statements depending on ax-inf, ax-inf2, and ax-ac from being
used.

2. A /MATCH qualifier was added to SHOW TRACE_BACK. For example,

  MM> SHOW TRACE_BACK cp /AXIOMS/MATCH ax-ac,ax-inf*

will list only "ax-inf2" instead of a long list of axioms and definitions
to sort through.

These changes are in Version 0.07.91 20-May-2013.

(18-May-2013) Separate axioms for complex numbers
-------------------------------------------------

The real and complex numbers are now derived from a set of axioms separate
from their construction. This isolates them better from the construction.
It also provides more meaningful information in the
'SHOW TRACE_BACK /ESSENTIAL /AXIOMS' command.
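For example, one can now check which axioms a given arithmetic theorem
ultimately depends on (a sketch; the label 2p2e4 is just an illustration,
and the output is elided):

  MM> SHOW TRACE_BACK 2p2e4 /ESSENTIAL /AXIOMS

With the separate axiomatization, the resulting list should bottom out at
axioms like ax-addcl rather than at the set-theoretical construction.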
The construction theorem that derives the axiom is called the same name except that the prefix "ax-" is changed to "ax". For example, the axiom for closure of addition is ax-addcl, and the theorem that derives it is axaddcl. (27-Feb-2013) *OLD cleanup by Scott Fenton ------------------------------------------ Scott Fenton did a large cleanup of set.mm by eliminating many *OLD references. The following proofs were changed: oancom ficardom alephon omsublim domtriomlem axdc3lem2 axcclem cfom lemul1i lemul2i lemul1a ltmul12a mulgt1 ltmulgt11 gt0div ge0div ltdiv2 lediv2 lediv12a ledivp1 ledivp1i flhalf nnwo infmssuzcl expord2 expmwordi exple1 sqlecan sqeqori crreczi facwordi faclbnd faclbnd6 facavg fsumabs2mul 0.999... cvgratlem1ALT cvgratlem1 erelem3 efaddlem11 efaddlem15 efaddlem16 efaddlem20 efaddlem22 eftlex ef1tllem ef01tllem1 eflti efcnlem2 sin01bndlem2 cos01bndlem2 cos2bnd sin02gt0 sin4lt0 alephsuc3 bcthlem1 minveclem27 cospi cos2pi sinq12gt0t hvsubcan hvsubcan2 bcs2 norm1exi chocunii projlem18 pjthlem10 pjthlem12 omlsilem pjoc1i shscli shsvs shsvsi shsleji shsidmi spanuni h1de2bi h1de2ctlem spansni spansnmul spansnss spanunsni hoscl hodcl osumlem2 sumspansn spansncvi pjaddii pjmulii pjss2i pjssmii pjocini hoaddcomi hodsi hoaddassi hocadddiri hocsubdiri nmopub2tALT nmfnleub2 hoddii lnophsi hmops nmcopexlem3 nmcopexlem5 nmcfnexlem3 nmcfnexlem5 cnlnadjlem2 cnlnadjlem7 nmopadjlem adjadd nmoptrii nmopcoadji leopadd leopmuli leopnmid hmopidmchlem pjsdii pjddii pjscji pjtoi strlem1 sumdmdii cdjreui cdj1i cdj3lem1 nndivsub epos intnat atcvrne atcvrj2b cvrat4 2llnm3 2llnm4 cdlema2 pmapjat1 2polcon4b paddun lhpocnle lhpmat idltrn ltrnmw trl0 ltrnnidn cdleme2 (25-Feb-2013) Sethood antecedents --------------------------------- I changed many antecedents to use e.g. "A e. V" instead of "A e. B" to indicate that A must be a set. For example, uniexg was changed from "( A e. B -> U. A e. _V )" to "( A e. V -> U. A e. _V )". I think the variable V is more suggestive of the _V that it will often be replaced with, better indicating the purpose of the antecedent. I changed only theorems whose $f order would not change, so there is no impact on any proofs. Eventually, we could change the others such as funopfvg, but it would mean changing all the proofs referencing them, so it probably won't be done soon. But I think "A e. V" is a good convention to follow for future theorems. (21-Feb-2013) Changes to syl* ----------------------------- I changed the order of the hypotheses of 76 syl* theorems for a better logical "flow", per a suggestion from Mario Carneiro. This was a big change, affecting about 6400 proofs (about 1/3 of set.mm). If you have made changes to your mathbox that aren't in set.mm, I can update your mathbox for you if you send it along with the set.mm it works with. You can also update it yourself as follows. Step 1. Copy your mathbox into a file called mathbox.mm. Step 2. 
Copy and paste the following lines into the MM> prompt (or put them in a SUBMIT script if you wish): tools copy mathbox.mm tmp.mm add tmp.mm '' ' ' substitute tmp.mm ' sylanOLD ' ' sylanOLDOLD ' all '' substitute tmp.mm ' sylan ' ' sylanOLD ' all '' substitute tmp.mm ' sylanb ' ' sylanbOLD ' all '' substitute tmp.mm ' sylanbr ' ' sylanbrOLD ' all '' substitute tmp.mm ' sylan2OLD ' ' sylan2OLDOLD ' all '' substitute tmp.mm ' sylan2 ' ' sylan2OLD ' all '' substitute tmp.mm ' sylan2b ' ' sylan2bOLD ' all '' substitute tmp.mm ' sylan2br ' ' sylan2brOLD ' all '' substitute tmp.mm ' syl2an ' ' syl2anOLD ' all '' substitute tmp.mm ' syl2anb ' ' syl2anbOLD ' all '' substitute tmp.mm ' syl2anbr ' ' syl2anbrOLD ' all '' substitute tmp.mm ' syland ' ' sylandOLD ' all '' substitute tmp.mm ' sylan2d ' ' sylan2dOLD ' all '' substitute tmp.mm ' syl2and ' ' syl2andOLD ' all '' substitute tmp.mm ' sylanl1 ' ' sylanl1OLD ' all '' substitute tmp.mm ' sylanl2 ' ' sylanl2OLD ' all '' substitute tmp.mm ' sylanr1 ' ' sylanr1OLD ' all '' substitute tmp.mm ' sylanr2 ' ' sylanr2OLD ' all '' substitute tmp.mm ' sylani ' ' sylaniOLD ' all '' substitute tmp.mm ' sylan2i ' ' sylan2iOLD ' all '' substitute tmp.mm ' syl2ani ' ' syl2aniOLD ' all '' substitute tmp.mm ' sylancl ' ' sylanclOLD ' all '' substitute tmp.mm ' sylancr ' ' sylancrOLD ' all '' substitute tmp.mm ' sylanbrc ' ' sylanbrcOLD ' all '' substitute tmp.mm ' sylancb ' ' sylancbOLD ' all '' substitute tmp.mm ' sylancbr ' ' sylancbrOLD ' all '' substitute tmp.mm ' syl3an1OLD ' ' syl3an1OLDOLD ' all '' substitute tmp.mm ' syl3an1 ' ' syl3an1OLD ' all '' substitute tmp.mm ' syl3an2 ' ' syl3an2OLD ' all '' substitute tmp.mm ' syl3an3 ' ' syl3an3OLD ' all '' substitute tmp.mm ' syl3an1b ' ' syl3an1bOLD ' all '' substitute tmp.mm ' syl3an2b ' ' syl3an2bOLD ' all '' substitute tmp.mm ' syl3an3b ' ' syl3an3bOLD ' all '' substitute tmp.mm ' syl3an1br ' ' syl3an1brOLD ' all '' substitute tmp.mm ' syl3an2br ' ' syl3an2brOLD ' all '' substitute tmp.mm ' syl3an3br ' ' syl3an3brOLD ' all '' substitute tmp.mm ' syl3an ' ' syl3anOLD ' all '' substitute tmp.mm ' syl3anb ' ' syl3anbOLD ' all '' substitute tmp.mm ' syl3anbr ' ' syl3anbrOLD ' all '' substitute tmp.mm ' syld3an3 ' ' syld3an3OLD ' all '' substitute tmp.mm ' syld3an1 ' ' syld3an1OLD ' all '' substitute tmp.mm ' syld3an2 ' ' syld3an2OLD ' all '' substitute tmp.mm ' syl3anl1 ' ' syl3anl1OLD ' all '' substitute tmp.mm ' syl3anl2 ' ' syl3anl2OLD ' all '' substitute tmp.mm ' syl3anl3 ' ' syl3anl3OLD ' all '' substitute tmp.mm ' syl3anl ' ' syl3anlOLD ' all '' substitute tmp.mm ' syl3anr1 ' ' syl3anr1OLD ' all '' substitute tmp.mm ' syl3anr2 ' ' syl3anr2OLD ' all '' substitute tmp.mm ' syl3anr3 ' ' syl3anr3OLD ' all '' substitute tmp.mm ' syl5OLD ' ' syl5OLDOLD ' all '' substitute tmp.mm ' syl5com ' ' syl5comOLD ' all '' substitute tmp.mm ' syl5 ' ' syl5OLD ' all '' substitute tmp.mm ' syl5d ' ' syl5dOLD ' all '' substitute tmp.mm ' syl5ib ' ' syl5ibOLD ' all '' substitute tmp.mm ' syl5ibr ' ' syl5ibrOLD ' all '' substitute tmp.mm ' syl5bi ' ' syl5biOLD ' all '' substitute tmp.mm ' syl5cbi ' ' syl5cbiOLD ' all '' substitute tmp.mm ' syl5bir ' ' syl5birOLD ' all '' substitute tmp.mm ' syl5cbir ' ' syl5cbirOLD ' all '' substitute tmp.mm ' syl5bb ' ' syl5bbOLD ' all '' substitute tmp.mm ' syl5rbb ' ' syl5rbbOLD ' all '' substitute tmp.mm ' syl5bbr ' ' syl5bbrOLD ' all '' substitute tmp.mm ' syl5rbbr ' ' syl5rbbrOLD ' all '' substitute tmp.mm ' syl5eq ' ' syl5eqOLD ' all '' substitute tmp.mm ' syl5req ' ' syl5reqOLD ' all 
'' substitute tmp.mm ' syl5eqr ' ' syl5eqrOLD ' all '' substitute tmp.mm ' syl5reqr ' ' syl5reqrOLD ' all '' substitute tmp.mm ' syl5eqel ' ' syl5eqelOLD ' all '' substitute tmp.mm ' syl5eqelr ' ' syl5eqelrOLD ' all '' substitute tmp.mm ' syl5eleq ' ' syl5eleqOLD ' all '' substitute tmp.mm ' syl5eleqr ' ' syl5eleqrOLD ' all '' substitute tmp.mm ' syl5eqner ' ' syl5eqnerOLD ' all '' substitute tmp.mm ' syl5ss ' ' syl5ssOLD ' all '' substitute tmp.mm ' syl5ssr ' ' syl5ssrOLD ' all '' substitute tmp.mm ' syl5eqbr ' ' syl5eqbrOLD ' all '' substitute tmp.mm ' syl5eqbrr ' ' syl5eqbrrOLD ' all '' substitute tmp.mm ' syl5breq ' ' syl5breqOLD ' all '' substitute tmp.mm ' syl5breqr ' ' syl5breqrOLD ' all '' substitute tmp.mm ' syl7OLD ' ' syl7OLDOLD ' all '' substitute tmp.mm ' syl7 ' ' syl7OLD ' all '' substitute tmp.mm ' syl7ib ' ' syl7ibOLD ' all '' substitute tmp.mm ' syl6ss ' ' syl6sseq ' all '' substitute tmp.mm ' syl6ssr ' ' syl6sseqr ' all '' clean tmp.mm e copy tmp.mm mathbox.mm exit Step 3. Place mathbox.mm into the most recent set.mm. 'Verify proof *' should show no errors related to these changes. You may want to delete the tmp.mm* files to clean up your directory. Step 4 (optional): Replace the syl*OLD references in your new set.mm with the new versions. You can find these with 'show usage syl*OLD'. For example, suppose your theorem 'xyz' uses sylanOLD. In the Proof Assistant, type in the following commands (or run them from a script): MM> prove xyz MM-PA> minimize_with syl*an*,syl5*,syl7*/except *OLD/allow_growth MM-PA> save new_proof/compressed MM-PA> exit If you don't replace the syl*OLDs, then I will do that when you next submit your mathbox using my scripts; it is an easy change for me. I will keep the syl*OLDs in set.mm for about a year. In the new syl5* names, "ib" and "bi" are swapped to reflect the swapping of implication and biconditional in the new hypothesis order. See the list at the top of set.mm for a few other renamings. (4-Nov-2012) Structures have been renamed ----------------------------------------- I made the changes of the 30-Oct-2012 proposal below. All changes can be seen at the top of set.mm in the 4-Nov-12 entries. I also changed conflicting labels so that the "NEW" suffix could be removed from the new structure theorems. There are about 50 labels with "NEW" removed, and another 50 or so older labels were renamed to prevent conflict. The old label renames generally follow this scheme, where the added "o" stands for the "Op" added to "GrpOp" etc.: *grp* becomes *grpo* *abl* becomes *ablo* *ring* becomes *ringo* *divrng* becomes *divrngo* Note that only conflicting ones were renamed. Thus, for example, isgrp2i for GrpOp was _not_ renamed to "isgrpo2i" because it didn't conflict with any "NEW". I didn't rename them in order to keep the size of the change list smaller. If people prefer that I rename all the GrpOp labels from *grp* to *grpo* for better consistency, that is no big deal for me. Let me know. (30-Oct-2012) Proposed renaming of structures ------------------------------- For a few structures in the literature, the pair that includes a base set is called a "space", such as topological space or metric space. In this case, the main name (without "space") is one of the pair's elements: the topology on a (base) set, the metric of a metric space. This is reflected in set.mm's current naming: Top vs. TopSp, Met vs. MetSp. For most other structures, the name usually refers to the entire structure. 
In modern literature, a group is almost always an ordered pair of a base set
and an operation. Prior to set theory, for example in Cayley (1854), a group
was a base set "accompanied" by an operation in a way not formally defined.
Ordered pairs had not been formally defined at this point in time. Although
a group member was a member of the base set, I doubt that the base set in
isolation, independent of any operation, would have been called a group.

For brevity, modern authors sometimes also use "group" to mean the "base set
of the group", using context to disambiguate. For example, a group member is
a member of the base set, since that is the only meaning that makes sense.
And most of them explain this informal dual usage. But the formal meaning is
still understood to be the ordered pair, which is how they (in modern
literature) usually define "group". On the other hand, I don't recall ever
seeing the isolated operation itself referred to as a group.

I propose the following renaming to suggest, at least to some extent, the
literature usage.

  OLD       NEW
  Open      MetOpen     open sets of a metric space
  Grp       GrpOp       an operation for a group
  GrpNEW    Grp         a group
  SubGrp    SubGrpOp    the operation for a subgroup of a group
  Ring      RingOps     the pair of operations for a ring
  DivRing   DivRingOps  the pair of operations for a division ring
  Poset     PosetRel    an ordering relation for a poset
  PosetNEW  Poset       a poset
  Lat       LatRel      an ordering relation for a lattice
  LatNEW    Lat         a lattice
  Dir       DirRel      an ordering relation for a directed set
  Toset     TosetRel    an ordering relation for a totally ordered set
  TopSet    TopOpen     the topology extractor for a TopSp

For structures that exist only in mathboxes, I won't do any renaming, but I
may rename them if they are moved to the main set.mm.

I may also rename CVec, NrmCVec, CPreHil, CBan, CHil to CVecOps, etc.,
although I need to figure out what to do with them. In a way they are too
specialized (for CC rather than general fields). I might move them to my
mathbox, since I don't think anyone is using them.

(17-Sep-2012) Differences between REPLACE and ASSIGN
--------------------------------------

ASSIGN can only accept step numbers that haven't been assigned yet (i.e.
those in SHOW NEW_PROOF/UNKNOWN). REPLACE can potentially accept any step
number.

The assignment of $e and $f hypotheses of the theorem being proved must be
done with ASSIGN. REPLACE does not (currently) allow them.

REPLACE fails if it does not find a complete subproof for the step being
replaced. ASSIGN fails only if the conclusion of the assigned statement
cannot be unified with the unknown step.

REPLACE takes longer to run than ASSIGN. In fact, the algorithm used by
REPLACE is the same as IMPROVE /3 /DEPTH 1 /SUBPROOF but using only the
specified statement rather than scanning the entire database.

Unlike IMPROVE, REPLACE can be used with steps having "shared" working
variables ($nn). A "shared" working variable means that it is used outside
of any existing subproof of the replaced step. In this case, REPLACE
attempts a guess at an assignment for the working variable, and usually
seems to be right. However, it is possible that this "aggressive"
assignment can be wrong, and REPLACE will issue the warning,

  Assignments to shared working variables ($nn) are guesses. If incorrect,
  to undo DELETE STEP , INITIALIZE, UNIFY, then assign them manually with
  LET and try REPLACE again.

I have also found that I can usually recover with just DELETE FLOATING,
INITIALIZE, UNIFY. This is what I try first so that I can salvage what
REPLACE found.
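As a command sequence, that quick recovery attempt looks roughly like this
(a sketch; the step number 65 and label abc are hypothetical):

  MM-PA> replace 65 abc
  (warning about guessed shared working variables)
  MM-PA> delete floating_hypotheses
  MM-PA> initialize all
  MM-PA> unify all/interactive

If that doesn't recover cleanly, fall back to the full undo described in
the warning: DELETE STEP, INITIALIZE, UNIFY, then LET the working variables
manually and try REPLACE again.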
(15-Sep-2012) Enhancements to label and step number arguments
-----------------------------------------------

Version 0.07.81 (14-Sep-2012) of the metamath program has two minor
enhancements. I added the first one primarily to make the REPLACE and
ASSIGN syntax uniform. I have found it useful to try e.g. "REPLACE LAST
abc" first, then if it fails use "ASSIGN LAST abc". When REPLACE is
successful, it eliminates the tedium of manually assigning the hypotheses.
In the future, we could have REPLACE automatically call ASSIGN when it
fails, but I want to get some experience with it first.

Enhancement #1: Statement numbers can now be specified with +nn, which
means the nn'th unassigned step from the first in SHOW NEW_PROOF/UNKNOWN.
This complements the previously existing -nn, which means the nn'th step
from the last in SHOW NEW_PROOF/UNKNOWN. This change affects ASSIGN,
REPLACE, IMPROVE, and LET STEP and makes their step specification argument
uniform. For example, consider:

  MM-PA> show new_proof/unknown
  6 mpd.1=? $? |- ( ph -> ( ph -> ph ) )
  7 mpd.2=? $? |- ( ph -> ( ( ph -> ph ) -> ph ) )

ASSIGN LAST or ASSIGN -0 refers to step 7.
ASSIGN -1 refers to step 6.
ASSIGN FIRST or ASSIGN +0 refers to step 6.
ASSIGN +1 refers to step 7.
ASSIGN 6 refers to step 6.
ASSIGN +6 is illegal, because there aren't at least 7 unknown steps.
ASSIGN 1 is illegal, because step 1 isn't unknown.

Enhancement #2: The unique label argument needed for PROVE, ASSIGN, and
REPLACE now allows wildcards, provided that there is a unique match. This
can sometimes save typing. For example, "into*3" matches only the theorem
"intopcoaconlem3". So instead of typing the long theorem name, you can use
"into*3" as a shortcut:

  MM> prove into*3
  Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to
  exit.
  You will be working on statement (from "SHOW STATEMENT intopcoaconlem3"):

If you use too few characters for a unique match, the program will tell
you:

  MM> prove int*3
  ?This command requires a unique label, but there are 3 (allowed) matches
  for "int*3". The first 2 are "intmin3" and "inttop3". Use SHOW LABELS
  "int*3" to see all matches.

Of course, you can check in advance to see if it's unique by typing SHOW
LABELS into*3.

It is also now easier to assign $e hypotheses. For example,

  MM-PA> ASSIGN LAST *.a

will assign the $e hypothesis ending with ".a", because there are no $a or
$p statements in set.mm ending with ".a". (Any $e hypotheses not belonging
to the statement being proved are ignored by the wildcard scan.)

(12-Sep-2012) New IMPROVE qualifiers and improved REPLACE command
---------------------------------------------------

New IMPROVE qualifiers
----------------------

Version 0.07.80 (4-Sep-2012) of the metamath program has the additional
qualifiers /2, /3, and /SUBPROOFS for the IMPROVE command. When using this
version, and especially the new qualifiers, please save your work since the
runtime can be unpredictable (possibly hours). There is no way to cancel
except to abort the program. And, because of the new code, there is the
possibility of a bug I didn't find in testing. (If you find a bug, please
let me know, of course.)

From HELP IMPROVE:

/ 1 - Use the traditional search algorithm used in earlier versions of this
program. It is the default. It tries to match cut-free statements only
(those not having variables in their hypotheses that are not in the
conclusion). It is the fastest method when it can find a proof.

/ 2 - Try to match statements with cuts.
It also tries to match steps containing working ($nn) variables when they don't share working variables with the rest of the proof. It runs slower than / 1. / 3 - Attempt to find (cut-free) proofs of $e hypotheses that result from a trial match, unlike / 2, which only attempts (cut-free) proofs of $f hypotheses. It runs much slower than / 1, and you may prefer to use it with specific statements. For example, if step 456 is unknown, you may want to use IMPROVE 456 / 3 rather than IMPROVE ALL / 3. Note that / 3 respects the / DEPTH qualifier, although at the expense of additional run time. / SUBPROOFS - Look at each subproof that isn't completely known, and try to see if it can be proved independently. This qualifier is meaningful only for IMPROVE ALL / 2 or IMPROVE ALL / 3. It may take a very long time to run, especially with / 3. Note that / 2 includes the search of / 1, and / 3 includes / 2. Specifying / 1 / 2 / 3 has the same effect as specifying just / 3, so there is no need to specify more than one. Finally, since / 1 is the default, you never need to use it; it is included for completeness (or in case the default is changed in the future). Here is an example you can duplicate if you want: MM> prove eqtr3i Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit. You will be working on statement (from "SHOW STATEMENT eqtr3i"): 5507 eqtr3i.1 $e |- A = B $. 5508 eqtr3i.2 $e |- A = C $. 5509 eqtr3i $p |- B = C $= ... $. Note: The proof you are starting with is already complete. MM-PA> delete all The entire proof was deleted. 1 eqtr3i=? $? |- B = C MM-PA> improve all No new subproofs were found. MM-PA> improve all /2 Pass 1: Trying to match cut-free statements... Pass 2: Trying to match all statements... No new subproofs were found. MM-PA> improve all /3 Pass 1: Trying to match cut-free statements... Pass 2: Trying to match all statements, with cut-free hypothesis matches... No new subproofs were found. MM-PA> improve all /3 /depth 1 Pass 1: Trying to match cut-free statements... Pass 2: Trying to match all statements, with cut-free hypothesis matches... A proof of length 9 was found for step 1. Steps 1 and above have been renumbered. CONGRATULATIONS! The proof is complete. Use SAVE NEW_PROOF to save it. Note: The Proof Assistant does not detect $d violations. After saving the proof, you should verify it with VERIFY PROOF. MM-PA> show new_proof 6 eqcomi.1=eqtr3i.1 $e |- A = B 7 eqtri.1=eqcomi $p |- B = A 8 eqtri.2=eqtr3i.2 $e |- A = C 9 eqtr3i=eqtri $p |- B = C MM-PA> Explanation: IMPROVE ALL /1 (default i.e. old algorithm) didn't even consider eqtri because it is not cut free i.e. it has variable "A" in the hypotheses that isn't in the conclusion. IMPROVE ALL /2 did not prove it because there was no match to hypothesis eqtri.1 "|- B = A" in the hypotheses (or other parts of the proof, if there were any) of eqtr3i. IMPROVE ALL /3 alone didn't prove it because there was no statement with 0 $e hypotheses that matched eqtri.1 "|- B = A". It did not consider eqcomi, which has a $e hypothesis. IMPROVE ALL /3 /DEPTH 1 considered eqcomi, which has 1 $e hypothesis, and the hypothesis eqcomi.1 matched eqtr3i.1, proving the theorem. Each of these qualifiers takes longer to run than the previous. In addition, it is better to try depth 0 (default) first, then /depth 1, then /depth 2,... because (in addition to much greater runtime) each increasing depth may result in a less efficient proof. 
For example, 'improve /3 /depth 2' finds a proof, but it will have a redundant use of idi (try it). The /SUBPROOF qualifier ----------------------- The /SUBPROOF qualifier is occasionally useful if you have a proof that is a tangled mess with many unknown steps, and you want to see if something simpler will prove parts of it. Originally I had /SUBPROOF as the default for /2 and /3, but later I made it a separate qualifier because it can take a huge amount of runtime, especially with /3. The /SUBPROOF qualifier found a proof for the following (somewhat contrived) example that you can duplicate if you want. MM> prove dalem22 Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit. You will be working on statement (from "SHOW STATEMENT dalem22"): 70869 dalem.ph $e |- ( ph <-> ( ( ( K e. HL /\ C e. A ) /\ ( P e. A /\ Q e. A /\ R e. A ) /\ ( S e. A /\ T e. A /\ U e. A ) ) /\ ( Y e. O /\ Z e. O ) /\ ( ( -. C L ( P J Q ) /\ -. C L ( Q J R ) /\ -. C L ( R J P ) ) /\ ( -. C L ( S J T ) /\ -. C L ( T J U ) /\ -. C L ( U J S ) ) /\ ( C L ( P J S ) /\ C L ( Q J T ) /\ C L ( R J U ) ) ) ) ) $. 70870 dalem.l $e |- L = ( le ` K ) $. 70871 dalem.j $e |- J = ( join ` K ) $. 70872 dalem.a $e |- A = ( Atoms ` K ) $. 70873 dalem.ps $e |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c L Y /\ ( d =/= c /\ -. d L Y /\ C L ( c J d ) ) ) ) $. 70905 dalem22.o $e |- O = ( LPlanes ` K ) $. 70906 dalem22.y $e |- Y = ( ( P J Q ) J R ) $. 70907 dalem22.z $e |- Z = ( ( S J T ) J U ) $. 70908 dalem22 $p |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) J ( P J S ) ) e. O ) $= ... $. Note: The proof you are starting with is already complete. MM-PA> delete step 70 A 34-step subproof at step 70 was deleted. Steps 70:276 are now 37:243. 37 mpbid.min=? $? |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) ( meet ` K ) ( P J S ) ) e. A ) MM-PA> assign last dalem21 To undo the assignment, DELETE STEP 65 and if needed INITIALIZE, UNIFY. 56 dalem.ph=? $? |- ( ph <-> ( ( ( $12 e. HL /\ $13 e. A ) /\ ( P e. A /\ $14 e. A /\ $15 e. A ) /\ ( S e. A /\ $16 e. A /\ $17 e. A ) ) /\ ( Y e. $18 /\ Z e. $18 ) /\ ( ( -. $13 $19 ( P J $14 ) /\ -. $13 $19 ( $14 J $15 ) /\ -. $13 $19 ( $15 J P ) ) /\ ( -. $13 $19 ( S J $16 ) /\ -. $13 $19 ( $16 J $17 ) /\ -. $13 $19 ( $17 J S ) ) /\ ( $13 $19 ( P J S ) /\ $13 $19 ( $14 J $16 ) /\ $13 $19 ( $15 J $17 ) ) ) ) ) 57 dalem.l=? $? |- $19 = ( le ` $12 ) 58 dalem.j=? $? |- J = ( join ` $12 ) 59 dalem.a=? $? |- A = ( Atoms ` $12 ) 60 dalem.ps=? $? |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c $19 Y /\ ( d =/= c /\ -. d $19 Y /\ $13 $19 ( c J d ) ) ) ) 61 dalem21.m=? $? |- ( meet ` K ) = ( meet ` $12 ) 62 dalem21.o=? $? |- $18 = ( LPlanes ` $12 ) 63 dalem21.y=? $? |- Y = ( ( P J $14 ) J $15 ) 64 dalem21.z=? $? |- Z = ( ( S J $16 ) J $17 ) MM-PA> improve all /2 /subproof Pass 1: Trying to match cut-free statements... A proof of length 1 was found for step 55. A proof of length 1 was found for step 54. A proof of length 1 was found for step 53. A proof of length 1 was found for step 52. A proof of length 3 was found for step 50. A proof of length 1 was found for step 47. A proof of length 1 was found for step 44. A proof of length 1 was found for step 41. A proof of length 1 was found for step 39. A proof of length 1 was found for step 38. A proof of length 1 was found for step 37. Pass 2: Trying to match all statements... Pass 3: Trying to replace incomplete subproofs... A proof of length 34 was found for step 67. Steps 37 and above have been renumbered. CONGRATULATIONS! The proof is complete. 
Use SAVE NEW_PROOF to save it.
Note: The Proof Assistant does not detect $d violations. After
saving the proof, you should verify it with VERIFY PROOF.
MM-PA>

In this case, just before "IMPROVE ALL /2 /SUBPROOF", step 37 (which became
step 65 after the last assignment) had the known content "$p |- ( ( ph /\
Y = Z /\ ps ) -> ( ( c J d ) ( meet ` K ) ( P J S ) ) e. A )". However, the
subproof ending at dalem21 contained a hopeless mess of $nn work variables.

The /subproof algorithm did the following. It scanned the proof and saw
that step 65 had an incomplete subproof. So it then tried to prove step 65
(using the /2 algorithm) completely independently of the existing
incomplete subproof and found a new proof. It then deleted the existing
subproof and replaced it with the one it found. It didn't matter what the
existing subproof had: it could have been total "garbage" with nothing at
all to do with the final proof that was found.

I don't yet have a good feel for when /subproof will help or not. However,
sometimes it can take a very long time to run, so you should "save
new_proof" then "write source" before you try it, in case you have to
abort it.

Improved REPLACE command
------------------------

Recall the first part of the previous example:

MM> prove dalem22
Entering the Proof Assistant. HELP PROOF_ASSISTANT for help, EXIT to exit.
You will be working on statement (from "SHOW STATEMENT dalem22"):
70869 dalem.ph $e |- ( ph <-> ( ( ( K e. HL /\ C e. A ) /\ ( P e. A /\
    Q e. A /\ R e. A ) /\ ( S e. A /\ T e. A /\ U e. A ) ) /\ ( Y e. O /\
    Z e. O ) /\ ( ( -. C L ( P J Q ) /\ -. C L ( Q J R ) /\
    -. C L ( R J P ) ) /\ ( -. C L ( S J T ) /\ -. C L ( T J U ) /\
    -. C L ( U J S ) ) /\ ( C L ( P J S ) /\ C L ( Q J T ) /\
    C L ( R J U ) ) ) ) ) $.
70870 dalem.l $e |- L = ( le ` K ) $.
70871 dalem.j $e |- J = ( join ` K ) $.
70872 dalem.a $e |- A = ( Atoms ` K ) $.
70873 dalem.ps $e |- ( ps <-> ( ( c e. A /\ d e. A ) /\ -. c L Y /\
    ( d =/= c /\ -. d L Y /\ C L ( c J d ) ) ) ) $.
70905 dalem22.o $e |- O = ( LPlanes ` K ) $.
70906 dalem22.y $e |- Y = ( ( P J Q ) J R ) $.
70907 dalem22.z $e |- Z = ( ( S J T ) J U ) $.
70908 dalem22 $p |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d ) J ( P J S ) )
    e. O ) $= ... $.
Note: The proof you are starting with is already complete.
MM-PA> delete step 70
A 34-step subproof at step 70 was deleted. Steps 70:276 are now 37:243.
37 mpbid.min=? $? |- ( ( ph /\ Y = Z /\ ps ) -> ( ( c J d )
    ( meet ` K ) ( P J S ) ) e. A )

Suppose we know in advance that dalem21 is the correct statement to assign
to step 37. If we type "ASSIGN LAST dalem21", we get the 9 hypotheses, as
shown in the previous section, that have to be assigned by hand; even
worse, there are a lot of $nn work variables in those hypotheses that we
have to figure out.

The REPLACE command has been enhanced so that it can "replace" any step at
all, even unknown steps that have not been assigned yet, provided that it
can find a match to all of the hypotheses of the REPLACE label argument.
So above, instead of "ASSIGN LAST dalem21", we can type "REPLACE LAST
dalem21". We will see:

MM-PA> replace last dalem21
CONGRATULATIONS! The proof is complete. Use SAVE NEW_PROOF to save it.
Note: The Proof Assistant does not detect $d violations. After
saving the proof, you should verify it with VERIFY PROOF.

Thus we have eliminated the tedium of having to assign by hand the 9
hypotheses resulting from ASSIGN.
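So the whole repair reduces to a short session (recapping the steps above;
as always, run VERIFY PROOF afterward):

  MM-PA> delete step 70
  MM-PA> replace last dalem21
  MM-PA> save new_proof/compressed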
If REPLACE cannot find matches for all the hypotheses, it will say so, and
in that case you will have to go back to ASSIGN to complete the proof. But
in a lot of cases, it may be worth a try.

Note that you can also REPLACE a step that has been assigned, even with the
same label it has been assigned to. For example, if we typed "ASSIGN LAST
dalem21", then step 37 would become step 65, and we can then try "REPLACE
65 dalem21".

(24-Mar-2012) MINIMIZE_WITH usage suggestions
-------------------------------

Recently, the metamath program was changed so that the scan order in the
MINIMIZE_WITH command is now bottom to top instead of top to bottom. I
chose this because empirically, slightly shorter proofs often result
because more "advanced" theorems get used first. For large proofs, it can
be useful to run the original order using the new /REVERSE qualifier, to
see which order results in the smallest proof.

The script that I almost always use is the following:

  MINIMIZE_WITH * /BRIEF /EXCEPT 3*tr*,3syl,*OLD,ee*
  MINIMIZE_WITH 3*tr* /BRIEF /EXCEPT *OLD
  SHOW NEW_PROOF /COMPRESSED

Running the 3*tr*'s last sometimes results in a more optimal use of 3bitr*,
3eqtr*, etc. The ee* exclusion forces standard theorems to be used in place
of identical ones in Alan Sare's utility set. You may need to use
/NO_DISTINCT with some theorems to avoid distinct variable conflicts. (At
some future point hopefully this won't be required.)

I've noticed that minimizing with 3syl often increases rather than
decreases the size of the compressed proof. If I suspect 3syl could be
useful, I save the proof, then MINIMIZE_WITH 3syl; if the compressed proof
becomes shorter, I use it, otherwise I discard it.

On larger proofs, I save the source before minimization into a separate
file (SAVE NEW_PROOF then WRITE SOURCE xxx), then run the following script
on the saved file:

  MINIMIZE_WITH * /BRIEF /EXCEPT 3*tr*,3syl,*OLD,ee* /REVERSE
  MINIMIZE_WITH 3*tr* /BRIEF /EXCEPT *OLD /REVERSE
  SHOW NEW_PROOF /COMPRESSED

I then pick whichever compressed proof is the shortest.

Finally, recall that mathboxes are excluded unless you use the
/INCLUDE_MATHBOXES (/I) qualifier. Right now, your own mathbox is also
excluded, unfortunately; I hope to fix this in a future release.

(19-Mar-2012) A new approach to antecedent manipulation
-----------------------------------------

In the 18-Aug-2011 entry below, I described the problem of having to
"waste" proof steps to manipulate antecedents in order to match the
antecedents of referenced theorems. My initial proposal, which was to adopt
"canonical" parenthesizations, had a number of problems described in the
31-Aug-2011 entry below, and I gave up on that approach.

Here I will describe a new approach that I have been using in my mathbox
for a while and that seems to work reasonably well. I think it is much
easier to use than the sequence of antecedent commuting (such as ancom1s),
re-parenthesizations (such as anassrs), and adding antecedents (such as
ad2ant2r) that has "traditionally" been used in set.mm and which can be
tedious and frustrating to do optimally. Moreover, it often results in a
shorter compressed proof. The uncompressed proof tends to be longer, but
usually this isn't a concern since we ordinarily store compressed proofs.

Note that the new method is not a requirement for new proofs but is simply
a suggested method available to the user. At the end of this entry I
compare an example proof (divdivdiv) using the old and new methods.
New utility theorems
--------------------

For use by the new method, I added to set.mm 117 simp* theorems
(simplification to a single conjunct, simpll1 through simp333) and 28
syl*anc theorems (syllogism with contraction, syl11anc through syl333anc).
The simp* theorems will handle all possible simplifications of double and
triple conjunctions nested up to 2 levels deep. The syl*anc theorems will
handle all possible target antecedents with double and triple conjunctions
nested up to 1 level deep. Even though this is a lot of theorems, I think
that over time they will result in a set.mm size reduction via the shorter
proofs that result.

They are consistently named. For example, in the simp* example

  simp31l $p |- ( ( ta /\ et /\ ( ( ph /\ ps ) /\ ch /\ th ) ) -> ph ) $=

the "3" means the 3rd member of the outermost triple conjunction, "1" means
the 1st member of the next-nested triple conjunction, and "l" means the
left member of the next-nested double conjunction. In general, "1", "2",
"3" refer to triple conjunctions and "l", "r" to double conjunctions.

The following example shows the syl*anc naming convention:

  syl231anc.1 $e |- ( ph -> ps ) $.
  syl231anc.2 $e |- ( ph -> ch ) $.
  syl231anc.3 $e |- ( ph -> th ) $.
  syl231anc.4 $e |- ( ph -> ta ) $.
  syl231anc.5 $e |- ( ph -> et ) $.
  syl231anc.6 $e |- ( ph -> ze ) $.
  syl231anc.7 $e |- ( ( ( ps /\ ch ) /\ ( th /\ ta /\ et ) /\ ze ) -> si ) $.
  syl231anc $p |- ( ph -> si ) $= ... $.

the "231" (a string of length 3) means there are 3 outer conjuncts composed
of a double ("2"), a triple ("3"), and a "unary" ("1") conjunction
respectively.

In a proof, the simp* and syl*anc theorems would be used in the following
way:

  ...
  sylxxxanc.1=simpxxx     |- (main antecedent) -> (piece of ref'd antecdnt)
  sylxxxanc.2=simpxxx     |- (main antecedent) -> (piece of ref'd antecdnt)
  sylxxxanc.3=simpxxx     |- (main antecedent) -> (piece of ref'd antecdnt)
  sylxxxanc.4=(ref'd thm) |- (ref'd antecedent) -> (ref'd theorem)
  (target)=sylxxxanc      |- (main antecedent) -> (ref'd theorem)
  ...

The new procedure
-----------------

You can use the following procedure to create this kind of proof, without
having to remember the names of the simp* and syl*anc theorems. (See
divdivdiv below for an example.)

1. Use syl to connect the main antecedent to the referenced theorem.

2. Assign the referenced theorem to syl.2.

3. Apply 3jca and jca as many times as necessary to break up the expression
now assigned to syl.1 into individual conjuncts on the right-hand side.
(Hint: if in doubt whether the outermost conjunction is double or triple,
try 3jca first.)

4. Use IMPROVE ALL to assign the simp* theorems to the jca and 3jca
hypotheses automatically.

When the proof is done, use 'MINIMIZE_WITH syl*anc / BRIEF'. The syl*anc
theorems will automatically replace the syl and shorten the proof.

Sometimes the jca and 3jca breakups won't be proved automatically with
MINIMIZE_WITH, such as when the closure of a compound operation is needed.
In that case, repeat the above 4 steps recursively: apply syl, assign the
closure theorem to syl.2, and break up syl.1 with 3jca and jca.

Guidelines for new theorems
---------------------------

In order for new theorems to be compatible with uses of syl*anc in theorems
referencing them, the important thing (for up to 9 antecedent conjuncts) is
that

1. In the theorem's statement, the conjunct nesting level in the antecedent
should be no greater than one. For example, ( ( ph /\ ps ) /\ ( ch /\ th ) )
is acceptable, whereas ( ( ( ph /\ ps ) /\ ch ) /\ th ) is not.
(For more than 9 conjuncts, which is rare, I don't have a guideline.) The
simp* theorems provide for all possibilities up to 2 nesting levels. Thus
additional antecedent conjuncts can be present as needed inside of the
proof (as would be the case, for example, before applying pm2.61dan). The
main guideline for antecedents inside of proofs is that

2. In a proof step, the conjunct nesting level in the antecedent should be
no greater than two.

The maximum number of antecedent conjuncts inside of a proof to which simp*
can apply is 27, achieved with triple conjunctions nested 2 levels deep.

Obsolete theorems
-----------------

I plan to make the following changes within a week or so.

The following 4 existing theorems will be replaced by new ones with a
different hypothesis order. (The new ones are in set.mm already.)

  Obsolete   Will be replaced by   |   Obsolete    Will be replaced by
  --------   -------------------   |   --------    -------------------
                                   |
  sylanc     syl11anc              |   syl3anc     syl111anc
  syl2anc    syl22anc              |   syl3an2c    syl13anc

The following 49 existing theorems will be renamed for naming consistency.

  Old name      New name will be   |   Old name    New name will be
  --------      ----------------   |   --------    ----------------
                                   |
  sylan31c      syl21anc           |   3simp1i     simp1i
  sylan32c      syl12anc           |   3simp2i     simp2i
  pm3.26im      simplim            |   3simp3i     simp3i
  pm3.27im      simprim            |   3simp1d     simp1d
  pm3.26        simpl              |   3simp2d     simp2d
  pm3.26i       simpli             |   3simp3d     simp3d
  pm3.26d       simplld            |   3simp1bi    simp1bi
  pm3.26bi      simplbi            |   3simp2bi    simp2bi
  pm3.27        simpr              |   3simp3bi    simp3bi
  pm3.27i       simpri             |   3simp1l     simp1l
  pm3.27d       simprd             |   3simp1r     simp1r
  pm3.27bi      simprbi            |   3simp2l     simp2l
  pm3.26bda     simprbda           |   3simp2r     simp2r
  pm3.27bda     simplbda           |   3simp3l     simp3l
  pm3.26bi2     simplbi2           |   3simp3r     simp3r
  pm3.26bi2VD   simplbi2VD         |   3simp11     simp11
  3simp1        simp1              |   3simp12     simp12
  3simp2        simp2              |   3simp13     simp13
  3simp3        simp3              |   3simp21     simp21
  3simpl1       simpl1             |   3simp22     simp22
  3simpl2       simpl2             |   3simp23     simp23
  3simpl3       simpl3             |   3simp31     simp31
  3simpr1       simpr1             |   3simp32     simp32
  3simpr2       simpr2             |   3simp33     simp33
  3simpr3       simpr3             |

Example - proof of theorem divdivdiv
------------------------------------

Compare:
  http://us2.metamath.org:88/mpegif/divdivdiv.html - new
  http://us2.metamath.org:88/mpegif/divdivdivOLD.html - old

As an example, I re-proved the existing divdivdiv (which I remember as
being somewhat tedious to achieve a short proof for using the traditional
method). You can compare them as divdivdiv vs. divdivdivOLD. The
uncompressed proof of divdivdiv is about 25% larger than that of
divdivdivOLD, but the compressed proof is slightly smaller and there are
fewer steps on the web page.

As an example in this proof, one of the steps needed was:

555 eqtrd.2=? $? |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( ( A / B ) x. ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )

The consequent matches divmuldiv. To connect the antecedent, I performed
the following steps:

  assign last syl
  assign last divmuldiv

obtaining:

558 syl.1=? $? |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( ( A e. CC /\ D e. CC ) /\ ( ( B e. CC /\ B =/= 0 ) /\
    ( C e. CC /\ C =/= 0 ) ) ) )

Next I broke up the consequent with jca's:

  assign last jca
  improve all
  assign last jca
  improve all
  ...

until no conjunctions remained in the consequent. Finally, I did
"MINIMIZE_WITH * /BRIEF" resulting in:

648 sylXanc.1=simpll $p |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) -> A e.
CC )
673 sylXanc.2=simprrl $p |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) -> D e. CC )
699 sylXanc.3=simplr $p |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( B e. CC /\ B =/= 0 ) )
725 sylXanc.4=simprl $p |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( C e. CC /\ C =/= 0 ) )
730 syl22anc.5=divmuldiv $p |- ( ( ( A e. CC /\ D e. CC ) /\
    ( ( B e. CC /\ B =/= 0 ) /\ ( C e. CC /\ C =/= 0 ) ) ) ->
    ( ( A / B ) x. ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )
731 eqtrd.2=syl22anc $p |- ( ( ( A e. CC /\ ( B e. CC /\ B =/= 0 ) ) /\
    ( ( C e. CC /\ C =/= 0 ) /\ ( D e. CC /\ D =/= 0 ) ) ) ->
    ( ( A / B ) x. ( D / C ) ) = ( ( A x. D ) / ( B x. C ) ) )

Note that syl, jca, and 3jca were the only theorems I had to know the names
of. I didn't need to remember the names of simprrl, syl22anc, etc. since
they will happen automatically.

(End of 19-Mar-2012 A new approach to antecedent manipulation)

-------------------------------------------------------------------------
-------------------------------------------------------------------------

(15-Sep-2011) Partial functions and restricted iota
-------------------------------------

This is a proposal to add definition df-riota below. As usual, any comments
are welcome.

The current iota definition is

  df-iota $a |- ( iota x ph ) = U. { y | { x | ph } = { y } }

Consider a poset (partially ordered set) with a base set B and a relation
R. The supremum (least upper bound) of a subset S (of the base set B) is
the unique member of B (if there is one) such that

  A. y e. S y R x /\ A. z e. B ( A. y e. S y R z -> x R z )

The LUB is a partial function on the subsets of B: for some subsets it may
"exist" (in the textbook sense of this verb), and for others it may not.

There are several ways to deal with partial functions. One useful way is to
define "the value exists" as meaning "the value is a member of B" i.e.
"E. x e. B ...", since the domain of discourse is the base set B. This is
analogous to the set-theoretical meaning of "exists" meaning "is a member
of the domain of discourse _V."

Now, it turns out that the set ~P U. B is not a member of B, which can be
proved without invoking the Axiom of Regularity (see theorem pwuninel). We
can define an "undefined value" function:

  ( Undef ` B ) = ~P U. B   so that   ( Undef ` B ) e/ B

This device can be used to work with partial functions generally, the case
of posets being one example.

The problem with using the standard iota for defining the LUB is that the
iota returns the empty set (/) when it is not meaningful, and (/) could be
a member of B. In order to get a guaranteed non-member of B when the LUB
doesn't "exist", we can't (easily) use the standard iota but need the
following awkward and non-intuitive definition of LUB:

  ( lub ` S ) = { t | E. u ( u = { x e. B | ( A. y e. S y R x /\
      A. z e. B ( A. y e. S y R z -> x R z ) ) } /\
      t = if ( E. v u = { v } , U. u , ( Undef ` B ) ) ) }

(The purpose of the "{ t | E. u ... }" is to avoid having to repeat
"A. y e. S y R x /\ A. z e. B ( A. y e. S y R z -> x R z )" twice, which
would be required if we used df-iota.)

We can introduce a "restricted iota" as follows:

  df-riota $a |- ( iota x e. A ph ) = if ( E! x e. A ph ,
      ( iota x ( x e. A /\ ph ) ) , ( Undef ` A ) )

Using df-riota, the LUB becomes

  ( lub ` S ) = ( iota x e. B ( A. y e. S y R x /\ A. z e. B ( A. y e.
S y R z -> x R z ) ) ) Thus df-riota above, even though somewhat awkward, seems to provide the most useful tool. Some properties we would have are: iotariota $p |- ( iota x ph ) = ( iota x e. _V ph ) riotaiota $p |- ( E! x e. A ph -> ( iota x e. A ph ) = ( iota x ( x e. A /\ ph ) ) ) Most importantly, the "closure" of restricted iota is equivalent to its "existence" in the textbook sense: riotaclb.1 $e |- A e. _V $. riotaclb $p |- ( E! x e. A ph <-> ( iota x e. A ph ) e. A ) thus making it very useful for working with partial functions. (End of 15-Sep-2011 restricted iota proposal) --------------------------------------------- (6-Sep-2011) cnaddablNEW vs. cnaddabl2NEW (continuation of 5-Sep-2011) --------------------------------------------------------- Yesterday, I wrote: "For some things like proving cnaddablxNEW (the new version of cnaddabl), it seems we have to reference the actual finite-sequence structure." Today I added a function to construct an explicit member of a structure class given its components, called StrBldr (df-strbldr), and I proved cnaddablxNEW with a "scaffold-independent" notation, mainly to demonstrate a way to do it. An advantage of using StrBldr is that we can later change the definition of df-struct (e.g. to use a different ordered n-tuple definition, such as a sequence on ordinals) without changing theorems. A disadvantage of using StrBldr is longer expressions (at least for this example). It also hides the actual "thing" that the complex addition group "is" in an abstract (and non-standard) way. cnaddablxNEW is an example of the "scaffold-dependent" method: cnaddablx.1NEW $e |- G = { <. 1 , CC >. , <. 2 , + >. } $. cnaddablxNEW $p |- G e. AbelNEW $= ... $. Note that the explicit structure { <. 1 , CC >. , <. 2 , + >. }, using only elementary set-theory symbols, is shown. In other words, the "scaffold" is a 2-member finite sequence on NN, which we show explicitly. This would change if we adopted a different n-tuple definition. cnaddablNEW is an example of the "scaffold-independent" method: cnaddabl.1NEW $e |- G = StrBldr ( 2 , g , ( ( base ` g ) = CC /\ ( +g ` g ) = + ) ) $. cnaddablNEW $p |- G e. AbelNEW $= ... $. In cnaddablNEW, there is no reference to structure holding the group components (base and operation). If later we changed the definition of Struct to use a different n-tuple definition, the theorem cnaddabl2NEW would not change. I don't know which might be better for long-term use. In any case, these theorems (in my mathbox) are experimental, and I don't plan to put them into the main part of set.mm in the near future. As usual, any comments are welcome. (5-Sep-2011) Structures for groups, rings, etc. ---------------------------------- In general, the way we extend structures in the present set.mm is awkward. For example, an inner product space is not a vector space in the strict formal sense, but we must undergo a mapping to apply vector space theorems to inner product spaces. For small structures like groups, the "trick" of specifying the group completely by its operation makes the statement of some theorems shorter. We extract the base set of the group from the range of this operation. But as structures get more complex (more specialized), we have no uniform way to extend them. With a view towards more flexibility in the future, I propose a new format for structures using what I call "extensible structure builders". The structures will be finite sequences, i.e. functions on ( 1 ... N ) where N e. NN. 
For example, a group consists of any finite sequence with at least 2 members, a base and the group operation. A ring is any (Abelian) group specialized with a multiplication operation as the 3rd member. Thus any ring is also a group, and any theorem about groups automatically applies to rings. This kind of extensible specialization becomes important with things like the transition from lattices to ortholattices, vector spaces to inner product spaces, etc. In other words, "every Boolean algebra is a lattice" becomes a literal, rigorous statement, not just an informal way of expressing a homomorphism.

I propose to use finite sequences instead of ordered n-tuples because the theory of functions is well-developed, whereas ordered n-tuples become awkward with more than 3 or 4 members, involving e.g. ( 1st ` ( 2nd ` ( 1st...))) to extract a member. I made them sequences on NN rather than om (omega) because we have a richer set of theorems for NN. But in principle om could also be used, and it would be closer to the ZFC axioms, and the Axiom of Infinity would not be needed to prove, say, simple theorems of group theory.

In my mathbox, I have shown a proposed definition of the extensible structure builder Struct along with some theorems about groups and rings "translated" to use it. For the most part, the translations were straightforward and mechanical, as can be seen by comparing theorems with "NEW" after them with their originals already in set.mm.

In df-struct, we define Struct(N,f,ph), where N is an integer and ph is a wff containing f as a free variable. In Struct(N,f,ph), f is a bound variable.

  df-struct $a |- Struct ( N , f , ph ) = { f | ( E. m e. NN ( N <_ m /\ f Fn ( 1 ... m ) ) /\ ph ) } $.

In other words, a structure with say 2 members (such as a group) is any sequence with _at least_ 2 members whose first 2 members satisfy the requirements specified by wff ph. This means we can extend (specialize) a group to become a ring without destroying its property of being a group.

When possible, it is desirable not to reference members of a structure directly via the sequence values (which is highly dependent on the definition of Struct(N,f,ph)). Instead, we can define "extractors" of the components. This way, if in the future we decide on a different "scaffold" (such as a sequence on finite ordinals instead of NN) most theorems will remain unchanged. For example, for groups and rings, we define

  df-base $a |- base = ( f e. _V |-> ( f ` 1 ) ) $.
  df-plusg $a |- +g = ( f e. _V |-> ( f ` 2 ) ) $.
  df-mulr $a |- .r = ( f e. _V |-> ( f ` 3 ) ) $.

Then we can reference "(base ` G)" rather than "(G ` 1)".

I put df-struct in my mathbox. To illustrate it, I added the corresponding new definitions df-grpNEW, df-ablNEW, and df-ringNEW (which define classes GrpNEW, AbelNEW, and RingNEW) along with several dozen theorems taken from the main set.mm, with NEW appended to their names. We have the nice inclusions RingNEW C_ AbelNEW C_ GrpNEW, making it easy to reuse theorems for more general structures with less general structures. These inclusions can be continued through division rings, vector spaces, and inner product spaces.

You may want to compare the "scaffold-independent" isgrpiNEW to the old isgrpi. There is also a "scaffold-dependent" version isgrpixNEW (the "x" after "isgrpi" means "explicit" i.e. scaffold-dependent). For some things like proving cnaddablxNEW (the new version of cnaddabl), it seems we have to reference the actual finite-sequence structure.
I have some ideas for avoiding that, but so far they seem to make things more complex rather than simpler, and I didn't put them in. (Later - I did put them in; see df-strbldr and related theorems and note of 6-Sep-2011.) Also note the analogous zaddablNEW showing the integers are a group: we do not have to restrict the "+" operation to ZZ. Comments are welcome.

(31-Aug-2011) Canonical conjunctions - followup
---------------------------------

It looks like there is no strong consensus on any of the proposed methods, so for the time being we'll continue to use whatever ad hoc parenthesization seems to fit the need at hand. Perhaps this is best anyway.

(18-Aug-2011) Canonical conjunctions
----------------------

Background
----------

A significant portion of many proofs consists of manipulating antecedents to rearrange the parenthesization of conjuncts, change the order of conjuncts, etc. One of the problems has to do with the many ways that a conjunction can be parenthesized. In the case of df-an (binary conjuncts), the number of parenthesizations for 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 conjuncts are 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, and 58786 respectively (Catalan numbers). For mixed df-an and df-3an it is of course even larger. It is impractical to have utility theorems for all possible transformations from one parenthesization to another; e.g. we would need 42 * 41 = 1722 utility theorems, analogous to anasss, to handle all possible transformations of 6-conjunct antecedents with df-an parenthesizations. So we compromise and use a sequence of manipulations with a much smaller set of transformations in order to achieve the desired transformation.

To reduce the number of such antecedent manipulation steps as well as the number of special-purpose utility theorems, I propose that for all new theorems, we adopt a canonical parenthesization for all conjunctions (whether in antecedents or not). Of course there would be certain early exceptions such as anass that are intrinsically about the parenthesization. Over time, existing theorems and their proofs can also be retrofitted to shorten them. I believe this can significantly reduce the size of some proofs as well as make them easier to follow, since the reader doesn't have to mentally skip over increasingly nested steps doing nothing but boring antecedent transformations.

Summary of proposals
--------------------

I will describe four possible methods. (A fifth method, from a Metamath Google Groups posting by FL on 9-Aug-2011, is a hybrid of Method #1 and Method #2 and is not presented here.)

Method #1: Same nesting level
  Advantages: Arguably most "natural"; fewest changes to retrofit set.mm
  Disadvantages: Neither minimum parentheses nor minimum nesting

Method #2: Recursive grouping
  Advantages: Minimal parentheses; simple algorithm
  Disadvantages: No minimum nesting

Method #3: Symmetry
  Advantages: Minimal parentheses; minimal nesting under constraint of symmetry
  Disadvantages: Apparently not "natural" (rarely used in current set.mm); no simple algorithm to derive n+1st case from nth case; nesting is not theoretical minimum.

Method #4: Attempt at minimal nesting
  Advantages: Appears to have minimum parentheses and minimum nesting.
  Disadvantages: Not intuitive; somewhat complex algorithm.

Method #1: Same nesting level
------------------------------

I collected all antecedents using 2an and/or 3an, with the results summarized in Table 1 below. The empirically most frequent patterns are arguably the "most natural".
For 2 through 6 conjuncts, these are:

  2210 (ph/\ps)
  1005 (ph/\ps/\ch)
   239 ((ph/\ps)/\(ch/\th))
    58 ((ph/\ps/\ch)/\(th/\ta))
    24 ((ph/\ps/\ch)/\(th/\ta/\et))

(For 7 through 12 conjuncts there isn't enough data for a clear pattern to emerge.) The pattern here is that all conjuncts are at the same nesting level, at the expense of both minimum nesting and minimum parentheses. The rules would be:

1. Using 3an as much as possible, put all conjuncts at the same nesting level.
2. At a given level, 2an's go to the right of 3an's.

For 2 through 12 conjuncts, these rules give:

  (ph/\ps)
  (ph/\ps/\ch)
  ((ph/\ps)/\(ch/\th))
  ((ph/\ps/\ch)/\(th/\ta))
  ((ph/\ps/\ch)/\(th/\ta/\et))
  ((ph/\ps/\ch)/\(th/\ta)/\(et/\ze))
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))
  (((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si)/\(rh/\la)))
  (((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si/\rh)/\(la/\ka)))
  (((ph/\ps/\ch)/\(th/\ta/\et))/\((ze/\si/\rh)/\(la/\ka/\mu)))

Table 1: Frequency of parenthesization patterns in set.mm

  #cases parenthesization
  ------ ----------------
    2210 (ph/\ps)
    1005 (ph/\ps/\ch)
      62 (ph/\(ps/\ch))
      74 ((ph/\ps)/\ch)
       8 ((ph/\ps)/\ch/\th)
      13 (ph/\(ps/\ch)/\th)
      50 (ph/\ps/\(ch/\th))
     101 ((ph/\ps/\ch)/\th)
     128 (ph/\(ps/\ch/\th))
       6 (((ph/\ps)/\ch)/\th)
       6 (ph/\((ps/\ch)/\th))
       7 (ph/\(ps/\(ch/\th)))
      10 ((ph/\(ps/\ch))/\th)
     239 ((ph/\ps)/\(ch/\th))
       1 (ph/\ps/\(ch/\th)/\ta)
       4 (ph/\ps/\(ch/\th/\ta))
       5 ((ph/\ps/\ch)/\th/\ta)
       1 (((ph/\ps)/\ch/\th)/\ta)
       1 ((ph/\ps)/\ch/\(th/\ta))
       2 ((ph/\ps/\(ch/\th))/\ta)
       2 (ph/\((ps/\ch/\th)/\ta))
       4 ((ph/\(ps/\ch/\th))/\ta)
       6 ((ph/\ps)/\(ch/\th)/\ta)
      19 (ph/\(ps/\ch)/\(th/\ta))
      21 ((ph/\ps)/\(ch/\th/\ta))
      58 ((ph/\ps/\ch)/\(th/\ta))
       1 (((ph/\ps)/\ch)/\(th/\ta))
       1 ((ph/\(ps/\(ch/\th)))/\ta)
       1 ((ph/\ps)/\((ch/\th)/\ta))
       1 (ph/\(((ps/\ch)/\th)/\ta))
       1 (ph/\((ps/\(ch/\th))/\ta))
       2 ((ph/\(ps/\ch))/\(th/\ta))
       4 (ph/\((ps/\ch)/\(th/\ta)))
       5 (((ph/\ps)/\(ch/\th))/\ta)
       1 (ph/\(ps/\ch/\th)/\(ta/\et))
       2 (((ph/\ps/\ch)/\th/\ta)/\et)
      24 ((ph/\ps/\ch)/\(th/\ta/\et))
       1 (((ph/\ps/\ch)/\th)/\(ta/\et))
      13 ((ph/\ps)/\(ch/\th)/\(ta/\et))
       1 (((ph/\ps/\ch)/\th/\ta)/\et/\ze)
       1 ((((ph/\ps)/\ch)/\(th/\ta))/\et)
       1 (((ph/\ps)/\ch)/\(th/\(ta/\et)))
       1 ((ph/\(ps/\ch))/\((th/\ta)/\et))
       1 (ph/\(ps/\((ch/\th)/\(ta/\et))))
       2 ((ph/\(ps/\ch))/\(th/\(ta/\et)))
       2 ((ph/\ps)/\((ch/\(th/\ta))/\et))
       3 (((ph/\ps)/\(ch/\th))/\(ta/\et))
       3 (((ph/\ps)/\ch)/\((th/\ta)/\et))
       7 ((ph/\ps)/\((ch/\th)/\(ta/\et)))
       1 (ph/\((ps/\ch/\th)/\(ta/\et)/\ze))
       3 (((ph/\ps)/\(ch/\th)/\(ta/\et))/\ze)
       4 ((ph/\ps)/\(ch/\th/\ta)/\(et/\ze/\si))
       2 (((ph/\ps)/\(ch/\th))/\((ta/\et)/\ze))
       2 ((ph/\(ps/\ch))/\((th/\ta)/\(et/\ze)))
       2 (ph/\((ps/\ch)/\((th/\ta)/\(et/\ze))))
       1 (ph/\((ps/\ch/\th)/\(ta/\et)/\(ze/\si)))
       1 (ph/\(ps/\ch/\th)/\((ta/\et)/\(ze/\si)))
       1 ((ph/\ps/\ch)/\th/\(ta/\et/\(ze/\si/\rh)))
       1 (((ph/\ps)/\(ch/\th)/\(ta/\et))/\(ze/\si))
       2 ((((ph/\ps)/\(ch/\th))/\(ta/\et))/\(ze/\si))
       4 ((ph/\ps)/\((ch/\((th/\ta)/\(et/\ze)))/\si))
       8 (((ph/\ps)/\(ch/\th))/\((ta/\et)/\(ze/\si)))
       1 ((((ph/\ps)/\(ch/\th))/\((ta/\et)/\(ze/\si)))/\((rh/\la)/\ka))
       1 ((((ph/\ps)/\ch)/\((th/\ta)/\et))/\(((ze/\si)/\rh)/\((la/\ka)/\mu)))

Method #2: Recursive grouping
------------------------------

This parenthesization was first proposed by FL. It is described by an algorithm given by Andrew Salmon.

1. From left to right, group by three as many subexpressions as possible. Repeat until no more grouping occurs.
2. If there are two subexpressions, group them.
3. Done.
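(A small Python transcription of these rules, mine rather than part of the proposal. It assumes one particular reading of rule 1, namely that each pass greedily wraps runs of three from the left; with that assumption it reproduces the two worked examples that follow.)

  def pass_rule1(items):
      # One left-to-right pass of rule 1: wrap each run of three
      # consecutive subexpressions into a single new subexpression.
      out, i = [], 0
      while i < len(items):
          if len(items) - i >= 3:
              out.append(tuple(items[i:i + 3]))
              i += 3
          else:
              out.extend(items[i:])
              break
      return out

  def method2(conjuncts):
      items = list(conjuncts)
      while True:
          nxt = pass_rule1(items)       # rule 1...
          if nxt == items:              # ...repeated until no more grouping occurs
              break
          items = nxt
      if len(items) == 2:               # rule 2: group a leftover pair
          items = [tuple(items)]
      return items[0]

  def show(e):
      # Render a grouping in the /\ notation used in these notes.
      return "(" + "/\\".join(map(show, e)) + ")" if isinstance(e, tuple) else e

  print(show(method2("1 2 3 4 5 6 7 8 9 A B".split())))
  # prints (((1/\2/\3)/\(4/\5/\6)/\(7/\8/\9))/\A/\B)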
Example:

  1 2 3 4 5 6 7 8 9 A B
  (1 2 3) (4 5 6) (7 8 9) A B          Rule 1
  ( (1 2 3) (4 5 6) (7 8 9) ) A B      Rule 1
  ( ( (1 2 3) (4 5 6) (7 8 9) ) A B)   Rule 1

Example:

  1 2 3 4
  (1 2 3) 4     Rule 1
  ((1 2 3) 4)   Rule 2

For 2 through 12 conjuncts, we would have:

  (ph/\ps)
  (ph/\ps/\ch)
  ((ph/\ps/\ch)/\th)
  ((ph/\ps/\ch)/\th/\ta)
  ((ph/\ps/\ch)/\(th/\ta/\et))
  ((ph/\ps/\ch)/\(th/\ta/\et)/\ze)
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))
  (((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\la)
  (((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\la/\ka)
  (((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))/\(la/\ka/\mu))

Method #3: Symmetry
--------------------

This method has 3 rules:

1. Minimum nesting.
2. Minimum parentheses.
3. Symmetrical parenthesization.

For n = 2 through 12 conjuncts, it appears that the following parenthesizations are the only ones that satisfy these rules:

                                                               nesting sum
  (ph/\ps)                                                          2
  (ph/\ps/\ch)                                                      3
  (ph/\(ps/\ch)/\th)                                                6
  (ph/\(ps/\ch/\th)/\ta)                                            8
  ((ph/\ps/\ch)/\(th/\ta/\et))                                     12
  ((ph/\ps/\ch)/\th/\(ta/\et/\ze))                                 13
  ((ph/\ps/\ch)/\(th/\ta)/\(et/\ze/\si))                           16
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))                       18
  ((ph/\ps/\ch)/\(th/\(ta/\et)/\ze)/\(si/\rh/\la))                 22
  ((ph/\ps/\ch)/\(th/\(ta/\et/\ze)/\si)/\(rh/\la/\ka))             25
  ((ph/\ps/\ch)/\((th/\ta/\et)/\(ze/\si/\rh))/\(la/\ka/\mu))       30

An algorithm for method #3:
---------------------------

Here is an algorithm. I don't know if there is a better one. The groupings for 0 through 5 conjuncts must be memorized, although they have justifications that aren't too hard. For >= 6 conjuncts, there is a recursive algorithm starting from 0 through 5. (The 0 and 1 cases aren't really conjuncts but help define the algorithm. Alternately, we could start with 2 conjuncts and start the recursion with n >= 8, but then cases 6 and 7 need to be memorized.)

0 through 5 conjuncts:             Justification:
  n = 0  [null]                    only 1 symmetric possibility
  n = 1  ph                        only 1 symmetric possibility
  n = 2  (ph/\ps)                  only 1 symmetric possibility
  n = 3  (ph/\ps/\ch)              only 1 symmetric possibility
  n = 4  (ph/\(ps/\ch)/\th)        the other symmetric possibility ((ph/\ps)/\(ch/\th)) is longer
  n = 5  (ph/\(ps/\ch/\th)/\ta)    the other symmetric possibility ((ph/\ps)/\ch/\(th/\ta)) is longer

For n >= 6 conjuncts, conjoin (./\./\.) and (./\./\.) around n-6. Another way to look at it: start with the case (n mod 6) from above, then successively wrap in (./\./\.)...(./\./\.) until n conjuncts are achieved.

  ((ph/\ps/\ch) /\ (th/\ta/\et))
  ((ph/\ps/\ch)/\ th /\(ta/\et/\ze))
  ((ph/\ps/\ch)/\ (th/\ta) /\(et/\ze/\si))
  ((ph/\ps/\ch)/\ (th/\ta/\et) /\(ze/\si/\rh))
  ((ph/\ps/\ch)/\ (th/\(ta/\et)/\ze) /\(si/\rh/\la))
  ((ph/\ps/\ch)/\ (th/\(ta/\et/\ze)/\si) /\(rh/\la/\ka))
  ((ph/\ps/\ch)/\ ((th/\ta/\et) /\ (ze/\si/\rh)) /\(la/\ka/\mu))

Notes for method #3:
--------------------

1. I conjecture that the algorithm above always results in minimum nesting given the symmetry requirement, but I don't have a proof.

2. Unfortunately, in some cases a grouping with minimal nesting does not have symmetry. For example, ((ph/\ps/\ch)/\(th/\ta/\et)) has nesting sum 12, whereas the non-symmetrical ((ph/\ps)/\ch/\(th/\ta/\et)) has nesting sum 11.

3. Minimum nesting + symmetry by themselves don't imply minimum parentheses. For example, the following groupings for 6 conjuncts each have minimum nesting sum of 12, but only the first has minimum parentheses:

  ((ph/\ps/\ch)/\(th/\ta/\et))
  ((ph/\ps)/\(ch/\th)/\(ta/\et))

4. Minimum parentheses + symmetry by themselves do not necessarily imply minimum nesting.
For example, for 11 conjuncts, the nesting sum from the above algorithm is 25:

  ((ph/\ps/\ch)/\(th/\(ta/\et/\ze)/\si)/\(rh/\la/\ka))    25

But there exist 3 other symmetrical groupings with same parentheses but more nesting:

  (((ph/\ps/\ch)/\th/\ta)/\et/\(ze/\si/\(rh/\la/\ka)))    27
  ((ph/\(ps/\ch/\th)/\ta)/\et/\(ze/\(si/\rh/\la)/\ka))    27
  ((ph/\ps/\(ch/\th/\ta))/\et/\((ze/\si/\rh)/\la/\ka))    27

5. I don't know if (say for some larger n) there exist other symmetric patterns with both minimum nesting and minimum parentheses. If so, then the algorithm would become the definition of the grouping, not just the 3 rules.

(End of method #3 discussion)

Method #4
---------

                                                               nesting sum
  (ph/\ps)                                                          2
  (ph/\ps/\ch)                                                      3
  ((ph/\ps)/\ch/\th)                                                6
  ((ph/\ps/\ch)/\th/\ta)                                            8
  ((ph/\ps/\ch)/\(th/\ta)/\et)                                     11
  ((ph/\ps/\ch)/\(th/\ta/\et)/\ze)                                 13
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si))                           16
  ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh))                       18
  (((ph/\ps)/\ch/\th)/\(ta/\et/\ze)/\(si/\rh/\la))                 22
  (((ph/\ps/\ch)/\th/\ta)/\(et/\ze/\si)/\(rh/\la/\ka))             25
  (((ph/\ps/\ch)/\(th/\ta)/\et)/\(ze/\si/\rh)/\(la/\ka/\mu))       30

Algorithm for method #4
------------------------

From grouping n to grouping n+1:

1. Let m = max nesting depth of grouping n.
2. If there is a wff at depth m-1 that is not a triple conjunction, let m = m-1.
3. Find first wff (call it w) at depth m that is not a triple conjunction.
4. If w is a wff variable, change it to a conjunction of two wff variables. If w is a conjunction of two wff variables, change it to a conjunction of three wff variables.

Examples of algorithm for method #4
-----------------------------------

Example: start with ((ph/\ps/\ch)/\th/\ta).
1. m = max nesting depth = 2.
2. Non-triple-conjunct th is at depth m-1 = 1, so m = m-1 = 1.
3. w = th
4. Change w to (th/\ta). Result is ((ph/\ps/\ch)/\(th/\ta)/\et).

Example: start with ((ph/\ps/\ch)/\(th/\ta)/\et).
1. m = max nesting depth = 2.
2. Non-triple-conjunct (th/\ta) is at depth m-1 = 1, so m = m-1 = 1.
3. w = (th/\ta)
4. Change w to (th/\ta/\et). Result is ((ph/\ps/\ch)/\(th/\ta/\et)/\ze).

Example: start with ((ph/\ps/\ch)/\(th/\ta/\et)/\(ze/\si/\rh)).
1. m = max nesting depth = 2.
2. There is no non-triple conjunct at depth m-1 = 1, so m stays at 2.
3. w = ph
4. Change w to (ph/\ps). Result is (((ph/\ps)/\ch/\th)/\(ta/\et/\ze)/\(si/\rh/\la)).

Example: start with (ph/\ps).
1. m = max nesting depth = 1.
2. Non-triple-conjunct (ph/\ps) is at depth m-1 = 0, so m = m-1 = 0.
3. w = (ph/\ps)
4. Change w to (ph/\ps/\ch). Result is (ph/\ps/\ch).

Example: start with (ph/\ps/\ch).
1. m = max nesting depth = 1.
2. There is no non-triple conjunct at depth m-1 = 0, so m stays at 1.
3. w = ph
4. Change w to (ph/\ps). Result is ((ph/\ps)/\ch/\th).

(End of method #4 discussion)

Examples
--------

(The examples use method #3 above and are analogous for the others.)

As an example, to swap the 1st and 3rd conjuncts in a 4-conjunct antecedent, we could have, in place of several ancomxxs forms, just:

  4com13.1 $e |- ( ( ph /\ ( ps /\ ch ) /\ th ) -> ta ) $.
  4com13 $p |- ( ( ch /\ ( ps /\ ph ) /\ th ) -> ta ) $=

To add an antecedent in the 3rd position to a 4-conjunct antecedent, in place of the numerous adantlrl, adantlrr, etc. we would have just:

  4adant3.1 $e |- ( ( ph /\ ps /\ ch ) -> th ) $.
  4adant3 $p |- ( ( ph /\ ( ps /\ ta ) /\ ch ) -> th ) $=

There would be only one "import" statement for each conjunct count, rather than e.g. imp41 through imp45 (plus others for the 3an cases) for 4 conjuncts:

  imp4.1 $e |- ( ph -> ( ps -> ( ch -> ( th -> ta ) ) ) ) $.
  imp4 $p |- ( ( ph /\ ( ps /\ ch ) /\ th ) -> ta ) $=

Some things may become problematic. For example, suppose we have ( ( ta /\ et ) -> ps ) that we would normally inject into an antecedent with a sylan-like theorem. We would need multiple cases for various conjunct counts in both hypotheses, to ensure that the antecedent of the result stays in canonical form. I don't know how big a problem this will be.

  syl32an1.1 $e |- ( ( ph /\ ps /\ ch ) -> th ) $.
  syl32an1.2 $e |- ( ( ta /\ et ) -> ph ) $.
  syl32an1 $p |- ( ( ta /\ ( et /\ ps ) /\ ch ) -> th ) $=

Additional comments
-------------------

It is interesting that (with methods 2 and 3 above) df-3an usually halves the number of parentheses in the canonical forms:

  conjuncts:                 2  3  4  5  6  7  8  9 10 11 12
  ----------                 ---------------------------------
  parens w/ df-an only:      2  4  6  8 10 12 14 16 18 20 22
  parens w/ df-an + df-3an:  2  2  4  4  6  6  8  8 10 10 12

This is an argument in favor of keeping df-3an rather than having the simpler syntax with df-an only. If a theorem and its proof use only conjuncts in canonical form, it might be relatively straightforward to retrofit a possible future df-4an (or even revert to just df-an) just by changing the canonical forms and the utility theorems handling them.

Jonathan Ben-Naim has df-4an (called df-bnj17) in his mathbox. I would want to ponder whether its benefits outweigh its drawbacks before moving it into the main set.mm. My initial objection was the large number of additional utility theorems that would be needed for transformations back and forth to df-an and df-3an. That might become less important if we start using canonical parenthesizations. If we add df-4an, I think the table above would be:

  conjuncts:                           2  3  4  5  6  7  8  9 10 11 12
  ----------                           ---------------------------------
  parens w/ df-an only:                2  4  6  8 10 12 14 16 18 20 22
  parens w/ df-an + df-3an:            2  2  4  4  6  6  8  8 10 10 12
  parens w/ df-an + df-3an + df-4an:   2  2  2  4  4  4  6  6  6  8  8

so the parenthesis savings, while nice, have less impact than with adding just df-3an. We could also go to the extreme and add df-5an, ..., df-12an so we always have just 2 parentheses, but (among other things) the run time of the current "improve all" algorithm in MM-PA would grow exponentially; already the default search limit times out when there are too many nested df-3ans. It is possible to improve that algorithm but it would take some work.

(End of 18-Aug-2011 canonical conjunctions proposal)

-------------------------------------------------------------------------

(22-Jun-2009) Gerard Lang's proof that ax-groth implies ax-pow was apparently unrecognized.* The page http://en.wikipedia.org/wiki/Tarski-Grothendieck_set_theory (which calls ax-groth "Tarski's axiom") mentions only that "Tarski's axiom also implies the axioms of Infinity and Choice." Perhaps someone should update that page. :) (In the interest of objectivity I do not personally edit Wikipedia references to Metamath.)

* Later (24-Jun-2009): Gerard pointed out that this was mentioned by Bob Solovay here: http://www.cs.nyu.edu/pipermail/fom/2008-March/012783.html I updated the Wikipedia entry to cite that.

(11-Dec-2008) I resynchronized Jeff Hoffman's nicod.mm to the recent label changes and made it an official part of set.mm. In connection with this, the NAND connective (Sheffer stroke) was added to set.mm. Jeff's original nicod.mm can be found at http://groups.google.com/group/metamath

(18-Nov-2008) Some stragglers I missed in yesterday's change were updated, so if you downloaded yesterday's set.mm, you should refresh it.
  Old        New
  cdavalt    cdaval
  cdaval     cdavali
  unbnnt     unbnn3
  frsuct     frsuc
  fr0t       fr0g
  rdg0t      rdg0g
  ssonunit   ssonuni
  ssonuni    ssonunii
  eqtr3t     eqtr3
  eqtr2t     eqtr2
  eqtrt      eqtr
  3eqtr4r    3eqtr4ri
  3eqtr3r    3eqtr3ri
  3eqtr2r    3eqtr2ri
  3bitr3r    3bitr3ri
  3bitr4r    3bitr4ri
  3bitr2r    3bitr2ri
  biimpr     biimpri
  biimp      biimpi
  impbi      impbii

(17-Nov-2008) Many labels in set.mm were changed to conform to the following convention: an inference version of a theorem is now always suffixed with "i", whereas the closed theorem version has no suffix. For example, "subcli" and "subcl" are the new names for the old "subcl" and "subclt" respectively. Also, the inference versions of the various transitive laws now have an "i" suffix, such as "eqtr3i" (for the old "eqtr3"), "bitr4i" (for the old "bitr4"), etc. This will make them consistent with the "i"/"d" convention for inferences and deductions ("eqtr3i"/"eqtr3d", etc.) There were 1548 labels involved in this change. They are documented at the top of the set.mm file. For people used to the old labels, it will take some practice to get used to this change, but I think it will be better in the long term since the database now conforms to a single standard. If anyone is using set.mm for something not in their sandboxes, you can contact me for a script that updates the labels with these changes.

(14-Nov-2008) The following frequently-used theorems were renamed for better consistency and to avoid confusion with negative numbers (thanks to Stefan Allan for the suggestion):

  Old      New
  nega     notnot2
  negai    notnotri
  negb     notnot1
  negbi    notnoti
  negbii   notbii
  negbid   notbid
  pm4.13   notnot
  pm4.11   notbi

(8-Sep-2008) Although at first glance expimpd and expdimp seem rather specialized, they actually amazingly shorten over 80 proofs, so with them the net size of set.mm is reduced.

(1-Sep-2008) I am phasing in "A e. Fin" in place of the current "E. x e. om { A } ~~ x" to express "A is finite". The latter idiom is now used frequently enough so that the net size of set.mm will hopefully be reduced as a result.

(31-Aug-2008) I did a number of revisions to the Unicode font characters so that all symbols now display in the Opera browser as well as Firefox.

(22-May-2008) Yesterday's derivation of axiom ax-4 from the others required new versions of axioms ax-5 and ax-6. The old versions were renamed ax-5o and ax-6o. Theorems ax5o and ax6o derive axioms ax-5o/ax-6o from the new ax-5/ax-6; theorems ax5 and ax6 show the reverse derivations.

The organization of the axioms in set.mm has been changed. The new complete set of non-redundant axioms is now introduced in a single place in set.mm in a new section called "Predicate calculus axiomatization". (Before, they were scattered throughout, introduced as they were needed.) We immediately derive ax-4 and the old ax-5 and old ax-6 (now called ax-5o and ax-6o) as theorems ax4, ax5o, and ax6o.

The next section in set.mm, "Predicate calculus without distinct variables", has the original gentle derivations from ax-4, ax-5o, and ax-6o, and eventually the equality theorems not needing ax-17. The idea here is that as long as an inexperienced reader accepts ax-4 a priori, there is no need to go through the advanced, $d-using proof of ax4. This also provides us with more meaningful "proved from axioms" lists for the section without distinct variables, without mention of the ax-17 etc. used to prove ax4.
We finally introduce the "normal" use of ax-17 in the section "Predicate calculus with distinct variables" with essentially the same organization as before.

The reason for proving ax4 at the beginning, and not after say the old place where ax-17 used to be, is to conform to the following convention, mentioned in the comment of ax4:

  Note: All predicate calculus axioms introduced from this point forward are redundant. Immediately before their introduction, we prove them from earlier axioms to demonstrate their redundancy. Specifically, redundant axioms ~ ax-4 , ~ ax-5o , ~ ax-6o , ~ ax-9o , ~ ax-10o , ~ ax-11o , ~ ax-15 , and ~ ax-16 are proved by theorems ~ ax4 , ~ ax5o , ~ ax6o , ~ ax9o , ~ ax10o , ~ ax11o , ~ ax15 , and ~ ax16 .

so that the proof of theorem ax4 can't have an accidental circular reference to axiom ax-4 (which would be possible if we put the ax4 proof later in the development).

The Metamath Proof Explorer Home Page has been updated with the new set of non-redundant (as far as we know) predicate calculus axioms that eliminates axiom ax-4. With ax-4 omitted from the official list of non-redundant axioms, we no longer have the former "pure" predicate calculus subsystem, that used to be ax-4/ax-5o/ax-6o, as part of the non-redundant list. Therefore it no longer makes sense to subdivide the axioms into separate groups on the MPE Home Page, and I combined them into one big table. I moved the description of the "pure" predicate calculus subsystem to the last entry of the subsystem table http://us2.metamath.org:8888/mpeuni/mmset.html#subsys

-----

On another matter, the user sandboxes have been moved to the end of set.mm as suggested by O'Cat. Unfortunately, this means the software thinks they are in the "Hilbert Space Explorer" section during the web page generation. This will be a minor cosmetic inconvenience until I address this.

(21-May-2008) With slightly modified ax-5 and ax-6, ax-4 becomes redundant as shown by theorem ax4. The ax-5 and ax-6 modifications have the same total length as the old ones, renamed to ax-5o and ax-6o.

(17-May-2008) Axiom ax-10 was shortened. The previous version was renamed ax-10o. Theorem ax10o shows that the previous version can be derived from the new ax-10. The Metamath Proof Explorer Home Page has been updated to use the shortened axiom.

(14-May-2008) I am hoping that the supremum df-spw for weak orderings will end up being easier to use in general than df-sup, because it doesn't need a separate hypothesis to show that the supremum existence condition is met. Instead, the supremum exists iff the supw value belongs to the relation's field. If this turns out to be useful, I may rethink the definition of df-sup as well.

(12-May-2008) The following frequently-used labels have been changed to be slightly less cryptic and more consistent:

             old    new
  12-May-08  a4w1   a4eiv
  12-May-08  a4w    a4imev
  12-May-08  a4c1   a4imed
  12-May-08  a4c    a4ime
  12-May-08  a4b1   a4v
  12-May-08  a4b    a4imv
  12-May-08  a4at   a4imt
  12-May-08  a4a    a4im

For the new labels, "a4" means related to ax-4, "im" means the implicit substitution hypothesis needs to be satisfied in only one direction, "i" means inference, "e" means existential quantifier version, "v" means distinct variables eliminate a bound-variable hypothesis, "d" means deduction, and "t" means closed theorem.

(6-May-2008) The definitions of +oo and -oo (df-pnf and df-mnf) were changed so that the Axiom of Regularity is not required for their justification. Instead, we use Cantor's theorem, as shown in pnfnre, mnfnre, and pnfnemnf.
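(A side remark of mine on why Cantor's theorem is enough here, paraphrasing the pwuninel-style argument in the notation of these notes: for any set B, every member of B is a subset of U. B. So

  ~P U. B e. B   would give   ~P U. B C_ U. B ,

which is impossible, since ~P U. B C_ U. B would make ~P U. B dominated by U. B, contradicting Cantor's theorem S ~< ~P S. Hence ~P U. B is a guaranteed non-member of B, with no appeal to Regularity.)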
A standard version of the Axiom of Infinity, ax-inf2, has been added to set.mm. It is derived from our version as theorem axinf2, using ax-inf and ax-reg. I broke out ax-inf2 as a separate axiom so that we can more easily identify "normal" uses of Regularity. Before, this was hard to do because any reference to omex would automatically include Regularity as one of the axioms used.

(21-Apr-2008) Paul Chapman has replaced the real log with the more general complex log. The earlier real log theorems by Steve Rodriguez have been revised to use the new definition. Steve's original theorems can temporarily be found under the same name suffixed with "OLD", using the token "logOLD" rather than "log".

(10-Mar-2008) The complex number axioms use a different naming convention than their corresponding theorems, e.g. we have axaddrcl rather than readdclt, sometimes causing confusion for people entering proofs. Therefore, I added aliases for their names using 1-step proofs, as follows:

  Axiom      Alias
  axaddcl    addclt
  axaddrcl   readdclt
  axmulcl    mulclt
  axmulrcl   remulclt
  axaddcom   addcomt
  axmulcom   mulcomt
  axaddass   addasst
  axmulass   mulasst
  axdistr    adddit
  ax0id      addid1t
  ax1id      mulid1t
  axlttrn    lttrt
  axmulgt0   mulgt0t

(6-Mar-2008) pm3.26, pm3.27, and pm3.28 were erroneously given with logical OR expanded into negation and implication. pm3.26OLD, pm3.27OLD, and pm3.28OLD, which will eventually be deleted, are the erroneous versions of these. This error also found its way into pmproofs.txt http://us2.metamath.org:8888/mmsolitaire/pmproofs.txt which has also been corrected. Going through my backups, I found that this error dates back to pre-Metamath in the early 90's when I converted my manually typed list of 193 Principia Mathematica theorems into the condensed detachment notation of pmproofs.txt. Fortunately, this has no effect on the pmproofs.txt proof itself. I checked against the original typed list, and only these 3 theorems had the mistake.

(11-Feb-2008) Theorems whose description begins with "Lemma for" have their math symbols suppressed in the Statement (Theorem) List in order to reduce the bulk of the list for faster web page loading. Sometimes, though, it is useful to have the lemma displayed. As an informal standard, I will change "Lemma for" to "- Lemma for" when we want the lemma displayed. The first one is fsumcllem, requested by Paul Chapman, since it will be used for multiple purposes and may make sense to someday call a "theorem". If there are lemmas you would like to have displayed, let me know.

(3-Feb-2008) topbas provides a simpler definition of a basis when we know its topology in advance. It is interesting that the very complex expansion of "( B e. Bases /\ ( topGen ` B ) = J )" simplifies to "A. x e. J E. y ( y (_ B /\ x = U. y )" when J is known. Proving it was trickier than I thought it would be, although the final proof is relatively short. I updated the Description of istopg to explain why the variable name "J" is used for topologies.

(16-Jan-2008) ax-12 is the longest predicate calculus axiom, and an open problem is whether it can be shortened or even proved from the others. After 15 years of on-and-off work on this problem with no success, today's a12study finally gives us a first hint, showing that it is possible to represent ax-12 with two shorter formulas.
While the shortening of the starting formulas is modest, and of course their combined length is much longer than ax-12, the result is still significant: before, it wasn't clear whether ax-12 had some intrinsic property preventing it from being "broken up" into smaller pieces. It is curious that the hypotheses of a12study have similar forms. I don't know how they might be related. Note that by detaching ax-9 from the second one, they can also be written:

  a12study.1 $e |- ( -. A. z z = y -> ( A. z ( z = x -> z = y ) -> x = y ) ) $.
  a12study.2 $e |- ( -. A. z -. z = y -> ( A. z ( z = x -> -. z = y ) -> -. x = y ) ) $.

(12-Jan-2008) cnnvg is designed to match hypotheses of the form "$e |- G = ( +v ` U )" such as in nvass. When nvass is applied to the vector space of complex numbers, cnnvba and cnnvg will change X to CC and G to + with no other manipulations, immediately producing the standard associative law for addition of complex numbers (after "U e. CVec" is detached with cnnv). This method will allow us to make efficient use of complex number theorems, such as when working with linear functionals that map to complex numbers. cnnvdemo shows how this is done. While U is substituted with "<. <. + , x. >. , abs >." in cnnvdemo, we keep the U separate in cnnv, cnnvg, and cnnvba to allow simplifying the display of proof steps (and reducing the web page size) in lemmas for long proofs, to avoid having to repeat "<. <. + , x. >. , abs >." over and over. Analogous cnnv* theorems will be added for other vector space functions.

(21-Dec-2007) cofunex2g has a somewhat longer proof than might be expected because A and B are not required to be relations but may be any classes whatsoever. In particular, B may be any proper class.

The recent hlxxx theorems are meant to complete the list of "(future)" theorems referenced in the comment of ax-hilex. These theorems will allow us to eliminate the Hilbert Space Explorer axioms in special cases (i.e. for concrete Hilbert spaces like CC), in order to use the Hilbert Space Explorer theorems as part of a ZFC-only theory.

(17-Nov-2007) df-pm (with value theorem pmvalg) introduces the notion of partial functions. Although partial functions are ubiquitous in the theory of operators in functional analysis, there seems to be no symbol in the literature for them. The closest I've seen is an occasional "F : dom F --> B" in place of the total function "F : A --> B", with dom F subset A implicit. But to do operator theory in set.mm, not having a formal notation for partial functions would make the theory of operators clumsy to work with.

There are two ways to do this. One way would be to define an analog of df-f:

  df-fp $a |- ( F : A -|-> B <-> ( Fun F /\ F (_ ( A X. B ) ) ) $.

or equivalently (by funssxp)

  df-fp $a |- ( F : A -|-> B <-> ( F : dom F --> B /\ dom F (_ A ) ) $.

Here, the standard mapping arrow with a vertical bar in the middle is used by the Z language to denote a partial function, and it is the only published symbol for it I've seen, although the Z language isn't really "textbook mathematics." I like this symbol because of its similarity to the familiar "F : A --> B" of df-f, and I was very tempted to use it. The drawback is that it defines a new syntactical structure, not just a new symbol, so we would need a whole mini-development of equality theorems, bound variable hypothesis builders, etc. as we do with df-f. Such a new structure is unavoidable when the arguments could be proper classes, as in the case of many uses of df-f.
But in the case of the intended uses of partial functions, the domain and range will always be sets (at least I've never seen a requirement for them in set theory where proper classes sometimes arise). This means that we can define a constant symbol for an operation similar to df-map, making all of the theorems relating to operations immediately available. With that in mind, I chose "^pm" as a generalization of "^m" of df-map. I am not thrilled with it because it doesn't seem intuitive or suggestive of its meaning, but I didn't have any better ideas. I am open to suggestions for a better symbol to use in place of "^pm", and in the meantime I'll continue to use "^pm" for lack of a better alternative. (15-Nov-2007) Baire's Category Theorem bcth was unexpectedly hard to prove. A big problem is that initially I didn't know that acdc5 (Dependent Choice) would be required to prove the existence of g. The textbook proof simply says we conclude the existence of g "by induction," which certainly stretches the meaning of that word. (2-Nov-2007) 0.999... is now proved, so the volunteer request of 30-Sep-2007 below is no longer applicable, although I appreciate the attempts of individuals such as rpenner on the physorg.com forum. The proof was more involved than I thought it would be, requiring new theorems serzmulc1, isummulc1a, and geoisum1. For the proof of 0.999... itself, quantifiers were avoided except for the implicitly quantified summation variable k. Hopefully this will make it possible for more non-mathematicians to follow the proof. (22-Oct-2007) Note that pm3.26bd, pm3.27bd were renamed pm3.26bi, pm3.27bi. (12-Oct-2007) Some of the kmlem* proofs were shortened by restating the lemmas and using yesterday's eldifsn. (30-Sep-2007) 0.999...=1 has been debated for many years on Usenet and elsewhere on the Internet. Example: http://forum.physorg.com/index.php?showtopic=13177 from March 2007 with 267(!) pages of discussion still on-going as of today. Includes a poll where 41% of people disbelieve 0.999...=1. There is even a brief reference to Metamath somewhere in the mess. Example: http://groups.google.com/group/sci.math/browse_frm/thread/3186915e0766f1ca from May 2007, whose last post was September 26. Does someone wish to volunteer to prove $( The repeating decimal 0.999... equals 1. $) 0.999... $p |- sum_k e. NN ( 9 / ( 10 ^ k ) ) = 1 $= ? $. to put an end to it once and for all? (Wishful thinking of course.) At least you'll make a name for yourself. :) Theorem geoisum may be useful for the proof. (28-Sep-2007) The symbol for floor was changed from "floor" to "|_" (L-shaped left bracket) at the suggestion of Paul Chapman. (21-Sep-2007) iccf was moved out of FL's sandbox to make it "official". It was also renamed from the earlier "icof". Compared to the old bl2iooOLD, the bl2ioo proof is shorter because it incorporates Paul Chapman's recent absdifltt. (17-Sep-2007) Perhaps a reader will volunteer to create Metamath proofs for one or more of the following. I hope I have stated them correctly. They should be fun puzzles, and in the unlikely event that two people submit the same one, the shortest proof will win. :) The tricks provided by these theorems may simplify the use of theorem cnco and relatives, because they have no dummy variables to deal with, unlike class builder representations. If no one responds, I'll prove them myself eventually when I have time. 
For fpar, note that each operand of i^i is not a function by itself - the intersection cuts them down so that the final set of ordered pairs is single-valued. This should make it interesting to prove. :) I think the other two are relatively straightforward, involving mainly expansions of the definitions. It may be possible to use a special case of fpar for the proof of opr2f, using fconstg, but I'm not sure it would help. In order of increasing difficulty, I would guess fsplit, opr2f, and fpar. If anyone finds a simpler expression for the left-hand side of the equality, let me know. So, paste the below at the end of your set.mm and fire up mmj2...

(Note added 2/4/09: fpar has been added.)

  ${
    $d x y z A $.  $( etc. $)
    $( Merge two functions in parallel. Use as the second argument of a
       composition with a (2-place) operation to build compound operations
       such as ` z = ( ( sqr ` x ) + ( abs ` y ) ) ` . $)
    fpar $p |- ( ( F Fn A /\ G Fn B ) -> ( ( `' 1st o. ( F o. 1st ) ) i^i ( `' 2nd o. ( G o. 2nd ) ) ) = { <. <. x , y >. , z >. | ( ( x e. A /\ y e. B ) /\ z = <. ( F ` x ) , ( G ` y ) >. ) } ) $= ? $.
  $}

(Note added 2/4/09: I will be completing fsplit soon.)

  ${
    $d x y $.
    $( A function that can be used to feed a common value to both operands
       of an operation. Use as the second argument of a composition with
       the function of ~ fpar in order to build compound functions such as
       ` y = ( ( sqr ` x ) + ( abs ` x ) ) ` . $)
    fsplit $p |- `' ( 1st |` I ) = { <. x , y >. | y = <. x , x >. } $= ? $.
  $}

(Note added 12/16/08: opr2f is no longer needed; this will become curry2.)

  ${
    $d x y A $.  $( etc. $)
    $( Turn an operation with a constant second operand into a function of
       the first operand only, such as ` y = ( x + 5 ) ` . $)
    opr2f $p |- ( ( F Fn ( A X. B ) /\ C e. B ) -> ( F o. `' ( 1st |` ( V X. { C } ) ) ) = { <. x , y >. | ( x e. A /\ y = ( x F C ) ) } ) $= ? $.
  $}

(13-Sep-2007) The astute reader will notice that df-ims was changed to a more compact version (compare df-imsOLD). imsval3 replaces imsvalOLD for use in reproving the related *OLD theorems, although imsval3 may be phased out with shorter direct proofs from the new imsval. A clever technique was used in Paul Chapman's reret (of 8-Sep-2007) to eliminate a hypothesis by using the if() function directly, without invoking dedth.

(8-Sep-2007) hlcom is part of an eventual derivation of the Hilbert Space Explorer axioms using ZFC only. A small change in the Hilbert Space Explorer axiomatization will then allow us to convert all theorems to pure ZFC theorems, with no changes to the theorems themselves, whenever we are dealing with a fixed Hilbert space (such as complex numbers). This axiomatization change is described in the comment of ax-hilex http://us2.metamath.org:8888/mpegif/ax-hilex.html . I probably will not actually make this change in axiomatization but will only describe it. It is very simple to do for anyone interested. I still think it is useful to have the axioms separated out - it makes the Hilbert Space Explorer Home Page easier to describe and it also allows us to see what axioms are used to prove specific theorems. The Hilbert Space Explorer Home Page http://us2.metamath.org:8888/mpegif/mmhil.html was updated to mention this alternate approach (the first 3 paragraphs of "The Axioms" section).

(7-Sep-2007) The new cnmet (with Met) that will replace cnms (with MetSp) also replaces the distance function "{ <. <. x , y >. , z >. | ( ( x e. CC /\ y e. CC ) /\ z = ( abs ` ( x - y ) ) ) }" with "( abs o. - )", which I think is nicer.
A more compact version of cnmet could read simply "( abs o. - ) e. Met", but the D is separated out to integrate more smoothly with other theorems. It also makes the proof a little easier to read. By the way, the "Base" extractor (df-ba) for normed metric spaces is capitalized because, once it is fixed for a particular vector space U, it is not a function, unlike e.g. the "norm" extractor (df-nm). This is usually our convention when there is no literature standard. Another example is the set of closed subsets "Clsd" (df-clsd) vs. the closure "cls" (df-cls).

(4-Sep-2007) The following major changes have been made to set.mm.

1. The token Met (metric space) has been changed to MetSp. A new token called Met is defined as the class of all metrics (df-met), and a metric space (df-ms) is defined as the pair of a base set and metric. To extract the base set X from a metric D, we will usually use "dom dom D". Note that this is consistent with what we now do for topologies (df-top and df-topsp), with "U. J" for the base set of topology J. It is also consistent with groups, which are defined using only the group operation. The advantage of the new convention is that proofs will often be shorter, and theorems will be shorter to state, e.g.

OLD:
  msf.1 $e |- X = ( 1st ` M ) $.
  msf.2 $e |- D = ( 2nd ` M ) $.
  mscl $p |- ( ( M e. MetSp /\ A e. X /\ B e. X ) -> ( A D B ) e. RR ) $=

NEW:
  metf.1 $e |- X = dom dom D $.
  metcl $p |- ( ( D e. Met /\ A e. X /\ B e. X ) -> ( A D B ) e. RR ) $=

2. Eventually, the theorems involving the old MetSp will be phased out and replaced with equivalent theorems involving the new Met. Note that in topology, the TopSp definition has had little real value since everything can be done more easily with Top, and the same should be true with metric spaces.

3. The definitions making use of the old MetSp have been replaced with ones using Met. The old definitions have been renamed *OLD, e.g. df-bl vs. df-blOLD. You can see the changed ones with 'show statement df-*OLD'.

4. All theorems making use of a df-*OLD will eventually have their labels suffixed with OLD, in the next few days. Some of this has already happened. They will eventually be replaced with non-OLD versions.

5. Based on a suggestion of Frederic Line (see the 16-Apr-2007 comment in http://planetx.cc.vt.edu/AsteroidMeta/set.mm_discussion_replacement ), the cryptic "( 1st ` ( 2nd ` U ) )" etc. will go away in normed vector spaces (including pre-Hilbert spaces, Banach spaces, and Hilbert spaces). Instead, we will phase in the use of the named components df-va, df-sm, df-nm and df-ba to make the theorems more readable as well as shorter to state. In addition, the theorems will become independent of the details of the ordered pairs in the vector space definition. E.g. nvge0 will be changed from

  ${
    nvge0OLD.1 $e |- W = ( 1st ` U ) $.
    nvge0OLD.2 $e |- G = ( 1st ` W ) $.
    nvge0OLD.3 $e |- N = ( 2nd ` U ) $.
    nvge0OLD.4 $e |- X = ran G $.
    $( The norm of a normed complex vector space is nonnegative. $)
    nvge0OLD $p |- ( ( U e. NrmCVec /\ A e. X ) -> 0 <_ ( N ` A ) ) $=...
  $}

to the new

  ${
    nvge0.1 $e |- X = ( Base ` U ) $.
    nvge0.2 $e |- N = ( norm ` U ) $.
    $( The norm of a normed complex vector space is nonnegative. $)
    nvge0 $p |- ( ( U e. NrmCVec /\ A e. X ) -> 0 <_ ( N ` A ) ) $=...
  $}

Again, the original versions will be renamed to *OLD. Some of them already have, and this renaming should be completed in a few days. (In the future, I may extend this use of named components to metric spaces, etc.
For now I am limiting it to normed vector spaces, which in a way is a "final" application of topologies, metric spaces, groups, and non-normed vector spaces.)

Over the next few days, the labels in the current set.mm will be unstable, with frequent changes, starting at df-ms, and individual label changes there will not be documented in the "Recent label changes" at the top of set.mm. The labels _before_ df-ms are stable, and any changes will be documented in "Recent label changes" as usual. If you are working with set.mm, it will be safe (and preferred) to use the latest version provided you are using things above df-ms. The last version of set.mm before these changes is available at us.metamath.org/downloads/metamath.zip, for a week or so.

(3-Sep-2007) Tomorrow there will be a major change in the notational conventions for metric and vector spaces. Today's version of set.mm is the last version prior to this change. If you are working from the current set.mm, you may want to archive today's version for reference, to compare against the new version if needed.

(22-Aug-2007) Interestingly, hbxfr shortens 40 proofs and "pays" for itself several times over in terms of set.mm size reduction.

(2-Aug-2007) I wouldn't have guessed a priori that proving addition is continuous (plcn) would be so tedious. Part of the problem might be that we have defined continuity in the very general context of topologies, but in the long run this should pay off. I didn't use the epsilon-delta method, but instead obtained a slightly shorter proof (I think) by using the already available climadd together with cnmet4. This is exactly the method used by Gleason, although his one sentence to that effect expands to a very long proof.

(10-Jun-2007) The symbol "Cls" was changed to "Clsd". See the discussion at http://planetx.cc.vt.edu/AsteroidMeta/closed_and_closure

(24-May-2007) axnegex and axrecex are now no longer used by any proof, and were renamed to axnegexOLD and axrecexOLD for eventual deletion. The axiom list at http://us2.metamath.org:8888/mpegif/mmcomplex.html#axioms was updated.

A note on theorem names like msqgt0: a theorem name such as "msqgt0" with "msq" (m=multiplication) means "A x. A", while a name such as "sqgt0" with just "sq" means "A ^ 2". Since we are working directly with the axioms, we use A x. A rather than A ^ 2 because exponentiation is developed much later.

(23-May-2007) Eric Schmidt has solved the long-standing open problem (first posted to Usenet on Apr. 25, 1997) of whether any of the ax*ex axioms for complex numbers are redundant. Here are his proofs:

For axnegex:

One thing to notice is that both 0re and 1re depend on axnegex for their proofs, potentially a problem if we will need to invoke these statements. However, the proof of 0re only incidentally uses axnegex, mainly because it relies on 1re. Instead, we note that the existence of any complex number implies by axcnre the existence of a real number, from which 0 in R follows from the (by now) usual inverse argument. [So (R, +) is a group.]

To prove axnegex, given a complex number a + bi, we would like to find the additive inverse as (-a) + (-b)i. However, proving that this is an additive inverse requires us to know that 0i = 0, which itself depends on axnegex. We can get by with a weaker statement, namely that xi is real for some real x. For there exist x, y in R such that 0 = y + xi, or xi = -y. Having such an x, we know there exists c in R such that b + c = x. Then a + bi + ci is real, and hence has an additive inverse d.
Then ci + d is an additive inverse of a + bi, which proves axnegex. We can then prove 1 in R using the current Metamath proof, in case we will need it.

For axrecex:

For axrecex, (a + bi) * (a - bi)/(a^2 + b^2) = 1 ought now to be provable without any hoops to jump through. The two main points are proving (a + bi) * (a - bi) = a^2 + b^2 and that a^2 + b^2 != 0 if a + bi != 0 (from which, using the now provable 0i = 0, we readily obtain a != 0 or b != 0).

I formalized his axnegex proof, which was posted yesterday as negext. The axrecex proof will need a reorganization of set.mm so that some of the ordering theorems come before the reciprocal/division theorems, so it may take a couple of days to formalize. These kinds of proofs tend to be somewhat long, because we can't make use of future theorems that depend on the axioms we are trying to prove. Eventually axnegex and axrecex will be eliminated from the official set of complex number axioms at http://us2.metamath.org:8888/mpegif/mmcomplex.html, reducing the number of axioms from 27 to 25.

(19-May-2007) As you can see from its "referenced by" list, 3expia ends up shortening 40 proofs, which was a surprise to me, and shrinks the size of set.mm accordingly.

(18-May-2007) Paul Chapman's relatively sophisticated bcxmas was done entirely with mmj2. He writes, "using mmj2, I don't have to remember the names of theorems. What I do with steps like this is try something and see if mmj2 finds a theorem which fits. When I don't, I usually add another step (or very occasionally try a different sequence). For more complex steps I tend to search set.mm for text fragments I expect to find in theorems which I think might fit the problem, eg 'A + B ) e.'."

(17-May-2007) ssimaexg and subtop were taken from FL's "sandbox" and made official, with slightly shorter proofs. The originals were renamed ssimaexbOLD, topsublem1OLD, topsublem2OLD, and topsubOLD, and will be deleted eventually.

(30-Apr-2007) The definitions of +v, etc. of 26-Apr-2007 have been retired and replaced with new ones. See df-va and the statements following it.

(26-Apr-2007) It seems the new symbols +v, etc., described in the 23-Apr-2007 note below, are not a good idea after all. It quadruples the proof size of ncvgcl (compared to ncvgclOLD), and in general is going to lead to longer proofs, especially for theorems brought over from more general theories (like ncvgcl is, from vcgcl). I have several other ideas I'm considering but need to think them over carefully. In the meantime, I'll probably continue to develop new theorems with the "W = ( 1st ` U )" etc. hypotheses, for retrofitting later.

(23-Apr-2007) The symbols +v, .s, 0v, -v, norm, and .i were taken from the Hilbert Space Explorer for use by new definitions df-va, df-sm, df-0v, df-vs, df-nm, and df-ip. This will allow us to use the less cryptic "( +v ` U )" for vector addition in a normed complex vector space U (and Banach and Hilbert spaces), instead of the old "( 1st ` ( 1st ` U ) )". This was brought up by fl and discussed in the 16-Apr-2007 entries at http://planetx.cc.vt.edu/AsteroidMeta/set.mm_discussion_replacement . The new definitions will also provide more "generic" theorems in case we decide to change the ordered pair structure of df-ncv, etc. The new definitions df-va and df-ba serve the purpose of fl's proposed df-ahf and df-hilf in http://planetx.cc.vt.edu/AsteroidMeta/fl's_sandbox . The symbols in the Hilbert Space Explorer have been replaced with +h, .h, 0h, -h, .ih, and normh.
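(To see concretely what the named components buy us, here is a toy Python sketch of my own, with invented names rather than set.mm code. If a normed vector space is packaged as nested ordered pairs, say U = <. <. vadd , smul >. , norm >., then users of the old notation must hard-code the pair layout, while named extractors isolate it in one place.)

  fst = lambda p: p[0]              # plays the role of 1st
  snd = lambda p: p[1]              # plays the role of 2nd

  U = (("vadd-op", "smul-op"), "norm-fn")   # <. <. vadd , smul >. , norm >.

  vadd_old = fst(fst(U))            # old style: ( 1st ` ( 1st ` U ) )

  va = lambda u: fst(fst(u))        # new style: ( +v ` U ) hides the layout
  nm = lambda u: snd(u)             # ( norm ` U )

  assert va(U) == vadd_old and nm(U) == "norm-fn"

(If the ordered pair structure later changes, only the definitions of the extractors change; everything written in terms of them is untouched, which is the "generic" benefit mentioned above.)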
(18-Apr-2007) The old Hilbert Space Explorer axioms ax-hvaddcl and ax-hvmulcl will be replaced by ax-hfvadd and ax-hfvmul so that the operations can be used with our group, vector space, and metric space theorems. (12-Apr-2007) Eric Schmidt discovered that the old ax1re, 1 e. RR, can be weakened to ax1cn, 1 e. CC. I updated the mmcomplex.html page accordingly. (27-Mar-2007) Maybe this is REALLY REALLY the end of shortening grothprim. At least we broke through the 200 symbol barrier. axgroth3 was used to shorten the previous grothprim. Unfortunately, that one (grothprim-8OLD) is now obsolete, so I'll probably delete axgroth3. (23-Mar-2007) grothprim was shortened a little more by exploiting the Axiom of Choice (via fodom and fodomb). As for shortening grothprim further, this may REALLY be the end of what I am capable of doing. (21-Mar-2007) Paul Chapman revised the proof of 0nn0 (compare 0nn0OLD) to use olci, which he feels is more natural than the old one's use of olc, "which seems to make a complicated wff out of a simple one." (20-Mar-2007) Unlike df-f, dff2 avoids direct or indirect references to df-id, df-rel, df-dm, df-rn, df-co, df-cnv, df-fun, and df-fn (all of which are used when df-f is expanded to primitives) but is still almost as short as df-f. I was surprised at how long and difficult the proof was, given the vast number of theorems about functions that we already have. Perhaps a shorter proof is possible that I'm not seeing. (17-Mar-2007) df-hl was changed to an equivalent one that is slightly easier to use. Compare the old one, df-hlOLD. (15-Mar-2007) dfid2 is the only theorem that makes use of the fact that x and y don't have to be distinct in df-opab. I doubt that dfid2 will be used for anything, but I thought it was interesting to demonstrate this. (12-Mar-2007) This may be it for grothprim for a while. I have stared at this thing for a long time and can't see any way to shorten it further. If anyone has any ideas let me know. (7-Mar-2007) impbid1 and impbid2 occupy 570 bytes in set.mm but reduce other proofs by 1557 bytes, with a 987 byte net size reduction of set.mm. (5-Mar-2007) In spite of its apparent simplicity, abexex is quite powerful and makes essential use of the Axiom of Replacement (and is probably equivalent to it, not sure). Chaining abexex can let us prove the existence of such things as { x | E. y E. z E. w...} that arise from non-trivial class builders (e.g. other than just the subsets of cross products) corresponding to ordered pair abstraction classes, etc. and which can be quite difficult to prove directly. (4-Mar-2007) I found shorter proofs for elnei, neips, ssnei2, innei, and neissex. The previous proofs are in elneiOLD, neipsOLD, ssnei2OLD, inneiOLD, and neissexOLD (which will be deleted in a few days). (1-Mar-2007) The contributions by Frederic Line are new versions provided by him, using the new definition df-nei (see the notes of 15-Feb-2007 below). Compare the *OLD ones starting at df-neiOLD. Most have been renamed, as well, and description for each *OLD version gives the corresponding new name. (20-Feb-2007) I have incorporated new sections at the end of the set theory part of set.mm (before the Hilbert space part), called "sandboxes," that will hold user contributions that are too specialized for the "official" set.mm or that I haven't yet reviewed for official inclusion. Here are the notes in set.mm about these sections. And, to prevent any future misunderstandings, some dire warnings. 
:)

"Sandboxes" are user-contributed sections that are not officially part of set.mm. They are included in the set.mm file in order to ensure that they are kept synchronized with label, definition, and theorem changes in set.mm. Eventually they may be broken out as separate modules, particularly in conjunction with future Ghilbert translations.

Notes:
1. I (N. Megill) have not necessarily reviewed definitions for soundness or agreement with the literature.
2. Over time I may decide to make certain definitions and theorems "official," in which case they will be moved to the appropriate section of set.mm and author acknowledgments added to their descriptions.
3. I may rename statement labels and constants at any time.
4. I may revise definitions, theorems, proofs, and statement descriptions at any time.
5. I may add or delete theorems and/or definitions at any time.
6. I may decide to delete part or all of a sandbox at any time, if I feel it will not ultimately be useful or for any other reason. If you want to preserve your original contribution, keep your own copy of it along with the version of set.mm that works with it. Do not depend on set.mm as its permanent archive.

Syntax guideline: if at all possible, please use only 0-ary class constants for new definitions, to make soundness checking easier.

By making a contribution, you agree to release it into the public domain, according to the statement at the beginning of this file.

Today I added sandboxes for Fred Line and Steve Rodriguez. The contents of their sandboxes appear in the Theorem List, at the end of the "Metamath Proof Explorer" part.

(15-Feb-2007) The old definition of neighborhood was somewhat awkward to work with in some situations. In particular, "the set of all neighborhoods of a point," which occurs when working with limit points, needed a class abstraction. So I have revised the definition of neighborhood to be a function that maps each subset to all of its neighborhoods, rather than a binary relation. This also fits more consistently with some other definitions, I think.

The neighborhood theorems will be revised so that N e. ( ( nei ` J ) ` S ) is used instead of N ( nei ` J ) S to mean "N is a neighborhood of subset S". Even though this seems longer, I believe it will make certain future theorems more natural and even have shorter proofs in some cases. For example, "the set of all neighborhoods of S" just becomes ( ( nei ` J ) ` S ) instead of { x | x ( nei ` J ) S } so that working with a dummy variable becomes unnecessary. (We could also use ( ( `' ( nei ` J ) ) " { S } ) to avoid a dummy variable with the old definition, but I don't think many people would enjoy deciphering that!)

The old neighborhood is called "neiOLD", with its theorems renamed to *OLD, as in df-neiOLD, etc. These will be deleted once the conversion is complete.

(5-Feb-2007) df-10 was added to the database, and the comments under df-2 were revised. Since we don't have an explicit decimal representation of numbers, df-10 will allow more reasonable representations as powers of 10 than just having the digits defined. E.g. (omitting parentheses):

  old: 456 = 4*(9+1)^2 + 5*(9+1) + 6
  new: 456 = 4*10^2 + 5*10 + 6

Previously, I avoided defining 10 since a presumed future decimal representation might have juxtaposed 1 and 0. But such a representation seems far off and low priority at this time, so an explicit definition of 10 will be helpful in the interim.
A sample theorem 7p3e10 was added to "test" the new definition; additional simple theorems for the number 10 will be added shortly.

(5-Feb-2007) (cont.) A new version of ax-11 was added. The original ax-11 was renamed ax-11o, and all uses of it were replaced with references to the new theorem ax11o (proved from the new ax-11). A new axiomatization was placed on the mmset.html page, and a new table was added that summarizes what is known about various possible subsystems. Theorem ax11a (mentioned yesterday and earlier) was renamed ax11.

(2-Feb-2007) ax11a2 proves that ax11a can replace ax-11. I have been wondering off and on for over 10 years whether this is the case, so I am pleased to see it proved. This answers the open question of 22-Jan-2007 below: "I don't know if ax-11 can be recovered from it (that would be nice)..." This now means we can replace ax-11 with the shorter equivalent

  ( x = y -> ( A. y ph -> A. x ( x = y -> ph ) ) )

which I am taking under consideration. However, it would be nicer if ax-11 could be proved from ax11a without relying on ax-16 and ax-17, so that the "predicate calculus without distinct variables" portion ax-1 through ax-15 (+ ax-mp + ax-gen) would have the same metalogical proving power. Even if we can't prove ax-11 from ax11a without ax-16 and ax-17, the axiom set ax-1 through ax-15 would still be logically complete in the sense described at http://us2.metamath.org:8888/mpegif/mmzfcnd.html#distinctors . The deficiency would be that more theorems would have dummy variables in their distinctor antecedents, in particular the old ax-11 proved as a theorem. However, in a way this is only of cosmetic importance, since no matter how many axioms without distinct variables we have, Andreka's theorem tells us there will always be some theorems with dummy variables in their antecedents. Now, if we could just simplify the long and ugly ax-12... I have attempted that off and on also, trying to find a shorter axiom that captures its "essence" in the presence of the others, but without success. (I don't care that much about ax-15, since it is redundant in the presence of ax-17, as theorem ax15 shows.) The basic statement it makes is an atomic case of ax-17 using distinctors, just like ax-15, and that basic statement should be provable in the same way as theorem ax15 if we have the right support theorems. The problem so far is that those support theorems seem to need ax-12 in a different role. ax11a2 also shows that we can "weaken" ax11a by adding $d x y and $d x ph if we wish, and still have completeness. Some people might prefer this as part of an alternate axiomatization that tries to reduce double binding in the axioms by having all set variables distinct.

(1-Feb-2007) Interestingly, 3anidm23 will shorten 13 proofs, and adding it will result in a net decrease in the size of set.mm.

(31-Jan-2007) To prove that ipval has the inner product property ( C x. ( A ( ip ` U ) B ) ) = ( ( C S A ) ( ip ` U ) B ), i.e. C<A,B> = <CA,B> in standard notation, for all complex C (in the presence of the parallelogram law) is nonelementary: it involves an induction to show it holds for C e. NN, then we extend it to QQ, then to RR using continuity and the fact that QQ is dense in RR (qbtwnre), then to CC. I think this was proved by Jordan and von Neumann in 1935. The difficulty of the proof may be why most (all?) books define a Hilbert space as not just a special normed space but as having a "new" operation of inner product, from which a norm is derived.
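(For reference - a sketch in conventional notation, not the exact form used in set.mm - the idea behind recovering the inner product from the norm is the standard polarization identity, which for an inner product linear in its first argument reads

  <A,B> = ( ||A + B||^2 - ||A - B||^2 + i ||A + iB||^2 - i ||A - iB||^2 ) / 4

and the parallelogram law ||A + B||^2 + ||A - B||^2 = 2 ||A||^2 + 2 ||B||^2 is exactly the condition under which this recipe yields a genuine inner product - which is what the Jordan/von Neumann result mentioned above establishes.)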
I had some misgivings because of the difficulty of the proof, but I think it will pay off: our definition has the nice property that CHil (_ CBan (_ CNrmVec, which the standard definition doesn't. This will allow these spaces to share theorems trivially, which isn't the case with the "standard" textbook definition. (Analogous to this is our NN (_ ZZ (_ QQ (_ RR (_ CC. The standard textbook definition of CC as ordered pairs from RR doesn't have this property formally.) We will need some additional theory about continuous functions for the proof, but that should be useful for other things as well. Anyway, it will be some time before all the inner product properties are proved. I may add pre-Hilbert spaces CPreHil, which is CNrmVec in which the parallelogram law holds. Then we would also have CHil (_ CPreHil (_ CNrmVec. However, CBan and CPreHil are not comparable as subclasses (one is complete; the other has the parallelogram law). CHil would have the trivial definition CHil = ( CBan i^i CPreHil ).

(24-Jan-2007) I finally was able to prove a single theorem ax11inda that covers all cases of the quantification induction step simultaneously. It has no restrictions on z and needs no "tricks" to use it (so there is no associated uncertainty that some special case hasn't been overlooked). This makes all the other versions obsolete, and they have been renamed to *OLD. Part of the problem before is that I didn't even know what it should end up looking like, much less how to prove it. While the previous two evenings of effort were thus wasted, perhaps subconsciously they helped lead me towards this final solution. Although it is unnecessary now, I reproved yesterday's ax11demo (whose old proof is called ax11demoOLD) to show how simple its proof becomes with the new ax11inda. This completes, therefore, all the basis and induction steps needed to derive any wff-variable-free instance of ax-11 without relying on ax-11, thus showing that ax-11 is not logically independent of the other axioms (even though it is metalogically independent).

(23-Jan-2007) I was unhappy with yesterday's ax11inda (now ax11indaOLD) because it was deceptively difficult to use for actual examples I tried, and it wasn't clear to me that it could handle all possible cases theoretically (e.g. it wasn't clear that I could derive today's ax11demo with it). The new ax11inda is simple to use, but it only works when z and y are distinct. I added the more powerful ax11inda2 that can be used otherwise. I think ax11inda2 can cover all possible cases, although I'm still working on a convincing argument for that. ax11inda2 is still not as easy to use - the variable renaming to eliminate the 2nd hypothesis can be very tricky. I added ax11demo to show how to use it. ax11inda3 is really a lemma for ax11inda, but I thought it was interesting in its own right because it has no distinct variable restrictions at all, and I made it a separate theorem for now. I might rename it to ax11indalem, though.

(22-Jan-2007) The following email excerpt describes the new theorems related to ax-11.

Hi Raph,

> 4. How important is ax-11?
>
> Clearly, all theorems of PA can be proved using your axioms, but it's
> quite possible that ax-11 makes the statement of certain theorems more
> general in a useful way, and thus the resulting proof files would be
> shorter and clearer. I'm particularly interested in the quantitative
> question: how _much_ shorter?
> This is more a question for Norm than for Rob, but in any case it's
> entirely plausible that the only real way to answer it would be to try
> to prove a corpus of nontrivial theorems both ways.

I don't know if I have the answer you seek, but I'll recap what I know about ax-11:

  ( -. A. x x = y -> ( x = y -> ( ph -> A. x ( x = y -> ph ) ) ) )
  http://us2.metamath.org:8888/mpegif/ax-11.html

(You may already know some of this.) Before Juha proved its _metalogical_ independence, I spent some time in the other direction, trying to prove it from the others. My main result was proving, without ax-11, the "distinct variable elimination theorem" dvelimf2 (which pleased me at the time):

  http://us2.metamath.org:8888/mpegif/dvelimf2.html

that provides a method for converting "$d x y" to the antecedent "-. A. x x = y ->" in some cases. This theorem can be used to derive, without ax-11, certain instances of ax-11. Theorem ax11el shows an example of the use of dvelimf2 for this. In the remark under ax-11, I say:

  Interestingly, if the wff expression substituted for ph contains no
  wff variables, the resulting statement can be proved without invoking
  this axiom. This means that even though this axiom is metalogically
  independent from the others, it is not logically independent. See
  ax11el for a simple example of how this can be done. The general case
  can be shown by induction on formula length.

Yesterday I added the theorems needed to make this remark rigorous. For the basis, we have for atomic formulas with equality and membership predicates:

  http://us2.metamath.org:8888/mpegif/ax11eq.html
  http://us2.metamath.org:8888/mpegif/ax11el.html

(These were tedious to prove. ax11el is the general case that replaces an older, more restricted demo example also called ax11el, now obsolete and temporarily renamed ax11elOLD.) As a bonus, we also have the special-case basis for any wff in which x is not free:

  http://us2.metamath.org:8888/mpegif/ax11f.html

For the induction steps, we have for negation, implication, and quantification

  http://us2.metamath.org:8888/mpegif/ax11indn.html
  http://us2.metamath.org:8888/mpegif/ax11indi.html
  http://us2.metamath.org:8888/mpegif/ax11inda.html

respectively. I wanted the last one to be prettier (without the implied substitution and dummy variable) but wasn't successful in proving it that way; nonetheless it is hopefully apparent how it would be used for the induction. The "distinctor" antecedent of ax-11 can be eliminated if we assume x and y are distinct:

  ( x = y -> ( ph -> A. x ( x = y -> ph ) ) )   where $d x y
  http://us2.metamath.org:8888/mpegif/ax11v.html

I didn't try to recover ax-11 from this, but my guess is that we can. We can also eliminate the "distinctor" antecedent like this:

  ( x = y -> ( A. y ph -> A. x ( x = y -> ph ) ) )
  http://us2.metamath.org:8888/mpegif/ax11a.html

which has no distinct variable restriction. This is a curious theorem; I don't know if ax-11 can be recovered from it (that would be nice) or if it can be proved without relying on ax-11.

Norm

(20-Jan-2007) enrefg, sbthlem10, and sbth have been re-proved so that the Axiom of Replacement is no longer needed.

(18-Jan-2007) The replacements for the clim* and climcvg* families are complete. In a few days, the old theorems will be made obsolete, with their replacements indicated in the following list, which will be added to the "Recent Label Changes" section of set.mm.
Date       Old            New   Notes
18-Jan-07  climcvgc1      ---   obsolete; use clmi1
18-Jan-07  climcvg1       ---   obsolete; use clmi2
18-Jan-07  clim1          ---   obsolete; use clm2
18-Jan-07  clim1a         ---   obsolete; use clm3
18-Jan-07  clim2a         ---   obsolete; use clm2
18-Jan-07  clim2          ---   obsolete; use clm4
18-Jan-07  climcvg2       ---   obsolete; use clmi2
18-Jan-07  climcvg2z      ---   obsolete; use clmi2
18-Jan-07  climcvgc2z     ---   obsolete; use clmi1
18-Jan-07  climcvg2zb     ---   obsolete; use clmi2
18-Jan-07  clim2az        ---   obsolete; use clm3
18-Jan-07  clim3az        ---   obsolete; use clm3
18-Jan-07  clim3a         ---   obsolete; use clm3
18-Jan-07  clim3          ---   obsolete; use clm4
18-Jan-07  clim3b         ---   obsolete; use clm2
18-Jan-07  climcvg3       ---   obsolete; use clmi2
18-Jan-07  climcvg3z      ---   obsolete; use clmi2
18-Jan-07  clim4a         ---   obsolete; use clm3
18-Jan-07  clim4          ---   obsolete; use clm4
18-Jan-07  climcvg4       ---   obsolete; use clmi2
18-Jan-07  climcvgc4z     ---   obsolete; use clmi1
18-Jan-07  climcvg4z      ---   obsolete; use clmi2
18-Jan-07  clim0cvg4z     ---   obsolete; use clm0i
18-Jan-07  climcvgc5z     ---   obsolete; use clmi1
18-Jan-07  climcvg5z      ---   obsolete; use clmi2
18-Jan-07  clim0cvg5z     ---   obsolete; use clm0i
18-Jan-07  climnn0        ---   obsolete; use clm4
18-Jan-07  climnn         ---   obsolete; use clm4
18-Jan-07  clim0nn        ---   obsolete; use clm0
18-Jan-07  climcvgnn      ---   obsolete; use clmi2
18-Jan-07  climcvgnn0     ---   obsolete; use clmi2
18-Jan-07  clim0cvgnn0    ---   obsolete; use clm0i
18-Jan-07  climcvg2nn0    ---   obsolete; use clmi2
18-Jan-07  clim0cvg2nn0   ---   obsolete; use clm0i
18-Jan-07  climnn0le      ---   obsolete; use clm4le
18-Jan-07  clim0nn0le     ---   obsolete; use clm4le and clm0

(14-Jan-2007) The purpose of resiexg is to allow us to re-prove (eventually) the Schroeder-Bernstein theorem sbth without invoking the Axiom of Replacement.

(11-Jan-2007) Right now there is a confusing mess of about 3 dozen theorems in the clim* and climcvg* families. It appears that these can all be replaced by around 7 theorems that cover all possible cases, and clm1 is the first in this new family. These should allow us to get rid of the old ones, which will probably happen soon.

(8-Dec-2006) In the comment of 17-Nov-2006 below, I mentioned "ra4sbcgfOLD used some clever tricks to convert the hypothesis of ra4sbcfOLD to an antecedent." Since ra4sbcgfOLD will soon be deleted, I extracted the "trick" into a neat stand-alone theorem, dedhb. I shortened the proof of ra4sbcfOLD with it to show how dedhb is used.

(6-Dec-2006) I put a detailed comment about the hypotheses in imsmslem because it uses them all in one place. I am making note of it here for future reference. I've been roughly trying to keep the variable names consistent. There are a few changes from one theory to the next, e.g. the group theory unit U is changed to Z (zero) in normed vector space because it seems more natural. Even though all these hypotheses are getting cumbersome to drag around, that is what happens when the implicit assumptions of analysis books are made explicit. Fortunately, many of them tend to disappear in final applications, such as imsms or ccims. While it would be theoretically nicer to allow general division rings for the scalar product of vector spaces, I think that restricting it to CC is a reasonable compromise from a practical point of view, since otherwise we'd need up to 5 additional hypotheses to specify the division ring components. In any case, most proofs would be essentially the same if we need that generality in the future, so much of the hard work would already be done.
There may even be an additional advantage to doing it with CC first: the CC proofs would tell us the minimal number of ring theorems we would need for the more general development, so that we could get there more quickly. Steve Rodriguez sent in his ncvcn of 4-Dec-2006 at a fortuitous time, because it provided the special case needed for the weak deduction theorem dedth used in the imsms proof.

(4-Dec-2006) vcm shows that we can equivalently define the inverse of the underlying group in a complex vector space as either the group inverse or minus 1 times a vector. This shows that the requirement of an underlying Abelian group is not necessary; it could instead be an Abelian monoid (which generalizes an Abelian group by omitting the requirement for inverse elements), although I didn't see any mention of that in the literature. In any case, for future theorems I am thinking of using mostly minus 1 times a vector in order to be compatible with the Hilbert Space Explorer, which does not postulate a negative vector as part of its axioms, since it can be derived from the scalar product in the same way as vcm does. We can use vcm to obtain the other approach.

(1-Dec-2006) dvdemo1 and dvdemo2 are discussed at:
http://planetx.cc.vt.edu/AsteroidMeta/U2ProofVerificationEngine

(17-Nov-2006) ra4sbc eliminates the hypothesis of ra4sbcf, making the latter obsolete (and it will be deleted eventually). It will also make ra4sbcgf - renamed to ra4sbcgfOLD - obsolete, since its first antecedent is now redundant. (Kind of sad, because ra4sbcgfOLD used some clever tricks to convert the hypothesis of ra4sbcfOLD to an antecedent; looking at it again, I don't know if I could ever figure it out again. Oh well.) ra4sbc will also eliminate the distinct variable restriction x,A in ra4sbca and ra4csbela (the previous versions of which have been renamed to ra4sbcaOLD and ra4csbelaOLD).

(15-Nov-2006) The redundant Separation, Empty Set, and Pairing axioms of ZF set theory were separated out so that their uses can be identified more easily. After each one is derived, it is duplicated as a new axiom:

  Immediately after axsep ($p), we introduce ax-sep ($a)
  Immediately after axnul ($p), we introduce ax-nul ($a)
  Immediately after axpr ($p), we introduce ax-pr ($a)

To go back to the "old way" that minimizes the number of axioms, we would just delete each $a and replace all references to it with the $p immediately above it. Thus we can easily go back and forth between two approaches, as our preference dictates: a minimal ZF axiomatization or a more traditional one that includes the redundant axioms.

(9-Nov-2006) An interesting curiosity: I updated the longest path in the "2+2 trivia" section on the Metamath Proof Explorer home page, and the longest path changed dramatically. The path length increased from 132 to 137 - an occasional increase is to be expected, as over time new theorems (common subproofs) are found that shorten multiple proofs. The curious thing is that in the old path, not a single theorem of predicate calculus was in the list: it jumped over predicate calculus completely with the path: eqeq1 (set theory) <- bibi1d (prop. calc.). However, the new path has 22 theorems of predicate calculus, mostly uniqueness and substitution stuff. This was caused by the change of 12-Sep-2006 (see notes for that date below) that provided a different path for proving 0ex. Here is the old path for comparison. The maximum path length is 132.
A longest path is:

  2p2e4 <- 2cn <- 2re <- readdcl <- axaddrcl <- addresr <- 0idsr <-
  addsrpr <- enrer <- addcanpr <- ltapr <- ltaprlem <- ltexpri <-
  ltexprlem7 <- ltaddpr <- addclpr <- addclprlem2 <- addclprlem1 <-
  ltrpq <- recclpq <- recidpq <- recmulpq <- mulcompq <- dmmulpq <-
  mulclpq <- mulpipq <- enqer <- mulasspi <- nnmass <- omass <- odi <-
  om00el <- om00 <- omword1 <- omwordi <- omword <- omord2 <- omordi <-
  oaword1 <- oaword <- oacan <- oaord <- oaordi <- oalim <- rdglim2a <-
  rdglim2 <- rdglimt <- rdglim <- rdgfnon <- tfr1 <- tfrlem13 <-
  tfrlem12 <- tfrlem11 <- tfrlem9 <- tfrlem7 <- tfrlem5 <- tfrlem2 <-
  tfrlem1 <- tfis2 <- tfis2f <- tfis <- tfi <- onsst <- ordsson <-
  ordeleqon <- onprc <- ordon <- ordtri3or <- ordsseleq <- ordelssne <-
  tz7.7 <- tz7.5 <- wefrc <- epfrc <- epel <- epelc <- brab <- brabg <-
  opelopabg <- opabid <- opex <- prex <- zfpair <- 0inp0 <- 0nep0 <-
  snnz <- snid <- snidb <- snidg <- elsncg <- dfsn2 <- unidm <-
  uneqri <- elun <- elab2g <- elabg <- elabgf <- vtoclgf <- hbeleq <-
  hbel <- hbeq <- hblem <- eleq1 <- eqeq2 <- eqeq1 <- bibi1d <-
  bibi2d <- imbi1d <- imbi2d <- pm5.74d <- pm5.74 <- anim12d <- prth <-
  imp4b <- imp4a <- impexp <- imbi1i <- impbi <- bi3 <- expi <- expt <-
  pm3.2im <- con2d <- con2 <- nega <- pm2.18 <- pm2.43i <- pm2.43 <-
  pm2.27 <- id <- mpd <- a2i <- ax-1

(8-Nov-2006) The fact that dtru (and thus ax-16) can be proved without using ax-16 came as something of a surprise. Still open is whether ax-16 can be derived from ax-1 through ax-15 and ax-17. (Later...) Well, it turns out ax-16 can be derived from ax-1 through ax-15 and ax-17! That is a complete surprise. The "secret" lies in aev, which is a nice little theorem in itself. I've updated the mmset.html page - it's not very often that a new result is found about the axiom system. Perhaps I'll still leave in the dtruALT proof since it is an interesting exercise in predicate logic without the luxury of definitions, although I might delete it since it is not very important anymore.

(4-Nov-2006) To simplify the notation - which is still quite awkward - I decided to specialize vector spaces to complex fields, instead of defining vector spaces on arbitrary division rings, since that is what I expect we will use most frequently. If we need to generalize later, most proofs should be nearly the same.

(3-Nov-2006) isgrp and grplidinv replace the older versions, but use the hypothesis "X = ran G" instead of "X = dom dom G". This allows us to eliminate the 5 theorems with the "X = dom dom G" hypothesis, and all theorems with that hypothesis have now been deleted from the database.

(31-Oct-2006) All group theory theorems (except the first 5 leading up to grprn) were re-proved with "X = ran G" instead of "X = dom dom G" as the hypothesis.

(29-Oct-2006) Steve Rodriguez provided a shorter proof (by 190 bytes in the compressed proof size and by 39377 bytes in the HTML page size) for efnn0valtlem (the lemma for his efnn0valt).

(26-Oct-2006) See http://planetx.cc.vt.edu/AsteroidMeta/Translation_Systems for discussion related to isarep1 and isarep2.

(25-Oct-2006) Most books (at least the ones I looked at) that define a group with only left identity/inverse elements appear to implicitly assume uniqueness when they derive the right identity/inverse elements, but you need the right identity/inverse elements to prove uniqueness. This makes our proof, which involves careful quantifier manipulations to circumvent circular reasoning, much more complicated than the ones in textbooks.
I don't know of a simpler way to do it.

(22-Oct-2006) I remind the reader of the entry from (17-May-2006) below called "Dirac bra-ket notation deciphered." kbass6t completes the associative law series kbass1t-kbass6t. I moved them to one place in the database for easier comparison: http://us2.metamath.org:8888/mpegif/mmtheorems80.html#kbass1t

(19-Oct-2006) The mmnotes.txt entry of (4-Sep-2006) describes the general philosophy I have settled on for structures like metric spaces, which seems to be working out well:

  hyp.1 $e |- X = ( 1st ` M ) $.
  hyp.2 $e |- D = ( 2nd ` M ) $.
  xxx $p |- (metric space theorem involving M, X, D) $=...

For topologies, the "pure" approach analogous to metric spaces would be to work with topological spaces df-topsp, which defines topological structures as ordered pairs S = <. X , J >.. We would then have e.g. (hypothetical example not in the database):

  1openA.1 $e |- X = ( 1st ` S ) $.
  1openA.2 $e |- J = ( 2nd ` S ) $.
  1openA $p |- ( S e. TopSp -> X e. J ) $=...

However, I am treating topological spaces in a different way because it is easy to recover the underlying set from the topology on it (just take the union). So theorems can be shortened as follows, still separating the topology from the underlying set in the theorem itself:

  1open.1 $e |- X = U. J $.
  1open $p |- ( J e. Top -> X e. J ) $=...

This last is the standard I am adopting for the special case of topologies. It saves a little bit of space in set.mm. Switching to the "pure" approach in the hypothetical 1openA would be trivial if we ever wanted to do that for aesthetic consistency or whatever. I bring this up because yesterday's grpass shows that we are taking a similar approach for group theory, where the underlying set can be recovered from the domain of the group operation: X = dom dom G. Again, it would be trivial to convert all theorems to the "pure" approach if for some reason we wanted to do that in the future.

(1-Oct-2006) Note the parsing of ac9s. The infinite Cartesian product X_ x e. A ... takes a class (B in this case) and produces another class (X_ x e. A B). Restricted quantification A. x e. A ..., on the other hand, takes a wff (B =/= (/)) and produces another wff (A. x e. A B =/= (/)).

  ac9s $p |- ( A. x e. A B =/= (/) <-> X_ x e. A B =/= (/) )
                         <-------->     <--------->
                            wff            class
               <----------------->   <----------------->
                       wff                   wff

If we were to use additional parentheses (which are unnecessary for unambiguous parsing), ac9s would read:

  ac9s $p |- ( A. x e. A ( B =/= (/) ) <-> ( X_ x e. A B ) =/= (/) )

So far in the database, the following definitions with "restricted" bound variables take a class and produce a class:

  df-iun   U_ x e. A B
  df-iin   |^|_ x e. A B
  df-ixp   X_ x e. A B
  df-sum   sum_ x e. A B

If we wanted, we could define these surrounded by parentheses to eliminate any possible confusion. No proofs would have to be changed, only the theorem statements. However, there are already too many parentheses in a lot of theorems. Since the parenthesis-free notation for these is unambiguous, I thought it would be best in the long run. It's just a matter of getting used to it, and if in doubt one can always consult the syntax hints or use "show proof ... /all". A different example of this kind of possible confusion is sbcel1g:

  ( [ A / x ] B e. C <-> [_ A / x ]_ B e. C )
              <----->    <------------->
                wff           class
    <--------------->    <----------------->
           wff                   wff

which is never ambiguous because of the different brackets: [ A / x ] takes a wff as an argument, and [_ A / x ]_ takes a class as an argument.
(29-Sep-2006) eluniima allows us to reduce alephfp from 72 steps to 62 steps. Compare the older version still at http://us.metamath.org/mpegif/alephfp.html . (I revisited alephfp after the discussion on http://planetx.cc.vt.edu/AsteroidMeta/metamath ). eluniima is interesting because there aren't any restrictions on A, which can be completely unrelated to the domain of F.

rankxplim and rankxpsuc provide the answer to part of Exercise 14 of Kunen, which asks the reader to "compute" the rank of a cross product. (Some of the other ones can almost be "computed" - you take the previous answer and add 1 - but it is a stretch to call this proof a "computation".) This is a very difficult and rather unfriendly problem to give as a "homework exercise" - at least the answer should have been provided as a clue to work out the proof, which is already hard enough (especially since the answer has two parts, or three if we count the empty cross product). I wasted a lot of time on it, because I had to prove something that I had no clue what it would be. I wonder how many people have actually worked this out: no one in sci.logic seemed able to answer it. http://groups.google.com/group/sci.logic/browse_frm/thread/41fad0ba18a9dce1

df-ixp is new. I'm somewhat torn about the bold X - a capital Pi is used in many books, but as the comment says I'd prefer to reserve that for products of numbers. I'm open to comments on the notation.

(28-Sep-2006) I decided to restore the ancient (12-year-old) proof of pwpw0 for "historical" reasons (see discussion at http://planetx.cc.vt.edu/AsteroidMeta/metamath ). It has actually been modernized slightly, to remove the requirement that the empty set exist. This eliminates the need for the Axiom of Replacement, from which empty set existence is derived. The original can be seen at http://de2.metamath.org/metamath/set.mm .

rankuni improves rankuniOLD of 17-Sep by eliminating the unnecessary hypothesis A e. V. Although this will shorten future proofs, I don't know if such shortening will end up "paying" for the extra 16 steps of overhead needed to eliminate A e. V. But at least rankuni will be easier to use than rankuniOLD, having one less condition to satisfy.

(17-Sep-2006) foprab2 is a new version (of foprab2OLD) that no longer requires the "C e. V" hypothesis. The new proof, using the 1st and 2nd functions, is very different from that of foprab2OLD and the other *oprab* theorems.

(16-Sep-2006) Steve Rodriguez says about his efnn0valtlem/efnn0valt proof, "It's not short, but it took far less time than I expected, and the result seemed so obvious that I felt nagged to prove it somehow."

(15-Sep-2006) opntop is an important theorem, because it connects metric spaces to our earlier work on topology, by showing that a metric space is a special case of a topology. This lets us apply the theorems we have already developed for topologies to metric spaces. (It took some work to get there; many of the theorems in the last few days were towards the goal of proving opntop.) The members of a topology J are called its "open sets" in textbooks, and this theorem provides a motivation for that term. (We do not have a separate definition for the open sets of a topology, since to say that A is an open set of topology J we just say "A e. J".)

(12-Sep-2006) A number of proofs (some not shown in the Most Recent list) were modified to better separate the various uses of the Axiom of Replacement, and in particular to show where the Axiom of Extensionality is needed.
The old zfaus was renamed to zfauscl, and the current zfaus is new. Some proofs in the axrep1 through axrep5 sequence were modified to remove uses of Extensionality, so that zfaus now uses only Replacement for its derivation. The empty set existence zfnul now uses only zfaus (and thus only Replacement) for its derivation. The new zfnuleu then shows how Extensionality leads to uniqueness (via the very useful bm1.1, which uses only Extensionality for its derivation). Finally, 0ex was changed (with a slightly longer proof) so that it is now derived directly from zfnuleu, to illustrate the path:

  ax-rep -> zfaus -> zfnul -> zfnuleu -> 0ex
                                 ^
                                 |
                              ax-ext

Some books try to postpone or avoid the Replacement Axiom when possible, using only the weaker Separation (a.k.a. Subset, a.k.a. Aussonderung). This can now be done in our database, if we wish, by changing zfaus and zfpair from theorems to axioms. (See the new last paragraph in the ax-rep description.)

(7-Sep-2006) The set.mm database was reorganized so that the ZFC axioms are introduced more or less as required, as you can see in the new Table of Contents http://us2.metamath.org:8888/mpegif/mmtheorems.html#mmtc . This lets you see what it is possible to prove by omitting certain axioms. For example, we prove almost all of elementary set theory (that covered by Venn diagrams, etc.) using only the Axiom of Extensionality, i.e. without any of the existence axioms. And quite a bit is proved without Infinity - for example, Peano's postulates, finite recursion, and the Schroeder-Bernstein theorem (all of which are proved assuming Infinity in many or most textbooks).

(4-Sep-2006) I will be changing the way that the theorems about metric spaces are expressed to address some inconveniences. Consider the following two ways of expressing "the distance function of a metric space is symmetric". In the present database, we use both methods for various theorems. (These examples, though, are hypothetical, except that mssymv1 = mssymt.)

  (1) mssymv1.1 $e |- D e. V $.
      mssymv1 $p |- ( ( <. X , D >. e. Met /\ A e. X /\ B e. X )
                       -> ( A D B ) = ( B D A ) ) $=

  (2) mssymv2 $p |- ( ( M e. Met /\ A e. ( 1st ` M ) /\ B e. ( 1st ` M ) )
                       -> ( A ( 2nd ` M ) B ) = ( B ( 2nd ` M ) A ) ) $=

The first way, mssymv1, shows the base set and the distance function explicitly with the helpful letters X and D. But often we want to say things about the metric space as a whole, not just its components. The second way, mssymv2, accomplishes that goal, at the expense of readability: it is less reader-friendly and more verbose to say ( 2nd ` M ) rather than just D. Although it is possible to convert from one to the other, it can be awkward, especially converting from mssymv1 to mssymv2. So practically speaking, we will end up creating two versions of the same theorem, neither of which is ideal. A solution to this is provided by the following third version:

  (3) mssymv3.1 $e |- X = ( 1st ` M ) $.
      mssymv3.2 $e |- D = ( 2nd ` M ) $.
      mssymv3 $p |- ( ( M e. Met /\ A e. X /\ B e. X )
                       -> ( A D B ) = ( B D A ) ) $=

The conclusion is simpler than either of the first two versions and clearly indicates the intended meaning of an object with letters M, X, and D. Although the hypotheses are more complex, in the database they will typically be reused by several theorems. To obtain mssymv1 from mssymv3 is trivial: we replace M with <. X , D >., then we use op1st and op2nd to eliminate the hypotheses. To obtain mssymv2 from mssymv3 is trivial: we use eqid to eliminate the hypotheses.
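(To spell out that last step as a sketch: substituting ( 1st ` M ) for X and ( 2nd ` M ) for D turns the two hypotheses into

  |- ( 1st ` M ) = ( 1st ` M )
  |- ( 2nd ` M ) = ( 2nd ` M )

each of which is an instance of eqid, |- A = A, so both hypotheses are discharged and the conclusion becomes exactly mssymv2.)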
So, with this approach, we should never need to prove mssymv1 and mssymv2 separately, since the conversion to either one in a proof is immediate. My plan is to convert everything to this approach and make most of the existing theorems obsolete. msf is the first one using this approach, and it will replace the existing msft. As always, comments are welcome.

(3-Sep-2006) Although ax16b is utterly trivial, its purpose is simply to support the statement made in the 7th paragraph of http://us2.metamath.org:8888/mpegif/mmzfcnd.html

(29-Aug-2006) The value of the ball function is a two-place function, i.e. it takes in two arguments, a point and a radius, and returns a set of points. I define it as an "operation" in order to make use of the large collection of operation theorems, and also to avoid introducing a new syntactical form. However, we have two choices for expressing "The value of a ball of radius R around a point P". Note that M is a metric space, X is the underlying space of a metric space, and D is a distance function.

  Operation value notation:
    ( P ( ball ` M ) R )
    ( P ( ball ` <. X , D >. ) R )            when M = <. X , D >.

  Function value notation:
    ( ( ball ` M ) ` <. P , R >. )
    ( ( ball ` <. X , D >. ) ` <. P , R >. )  when M = <. X , D >.

The former is shorter and will result in shorter proofs in general, but I'm not sure that using infix notation like we would for an operation like "+" is the most natural or familiar. There is no standard notation in the literature, which uses English and also does not make the metric space explicit. I am open to comments or suggestions. Since ( ball ` <. X , D >. ) acts like an operation value, we also have a third choice and could say, equivalently,

    ( P ( X ball D ) R )                      when M = <. X , D >.

for an even shorter notation, although I'm not sure how odd it looks. But maybe I should use it for efficiency. Note that I am using <. X , D >. e. Met instead of X Met D (via df-br), even though the latter results in shorter proofs (and is shorter to state). It seems that Met feels more like a collection of structures that happen to be ordered pairs of objects, than it does a relation, even though those concepts are technically identical.

(28-Aug-2006) Two of the standard axioms for a metric space, that the distance function is nonnegative and that the distance function is reflexive, are redundant, so they have been taken out of the definition df-ms to simplify it. (We will prove the redundant axioms later as theorems.) The first part of the ismsg proof (through step 18) is used to get rid of the antecedent X e. V that occurs in step 43. (If either side of the ismsg biconditional is true, it will imply X e. V, making it redundant as an antecedent.)

(26-Aug-2006) Some small items related to yesterday's unctb (with a new version today) were cleaned up:

  1. The unused hypothesis B e. V was removed from unictb.
  2. unpr was renamed to unipr for naming consistency.
  3. The hypotheses A e. V and B e. V were eliminated from unctb, with
     the help of the new theorem uniprg.

(22-Aug-2006) csbopeq1a will help make the '1st' and '2nd' function stuff worthwhile; it lets us avoid the existential quantifiers that are used in e.g. copsexg. It is often easier to work with direct computations rather than having to mess around with quantifiers. I was surprised that there are no distinct variable, class existence, or any other restrictions on csbopeq1a.
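(For reference, the statement has roughly the form

  ( A = <. x , y >. -> [_ ( 1st ` A ) / x ]_ [_ ( 2nd ` A ) / y ]_ B = B )

- treat this as a sketch rather than an exact quote of the database - which substitutes the two components of the pair A directly for x and y in B.)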
There is no restriction on what B may contain, which could be any random usage of x and y, not just ordered pairs of them; but what we are doing is the logical equivalent of substitution for ordered pairs in B as if it actually contained them.

(21-Aug-2006) I made some subtle changes to the little colored numbers. Although this may seem like a trivial topic, and probably is, the problems of obtaining a spectrum of colors with uniform brightness and maximum distinguishability vs. hue changes aren't as easy as they might first appear. Perhaps someone has done it before, but all of the spectrum mappings I could find have brightness that varies with color and aren't suitable for fonts, in particular because yellow is hard to read. They also do not change color (as perceived by the eye) at a uniform rate as you go through the spectrum, such as the color changes crowded into the transition from red to green. This is OK if you want an accurate representation of color vs. wavelength, but our goal is to be able to distinguish different colors visually in an optimal way. Anyway, I thought it was an interesting problem, so I thought I'd say something about it. Even though it pales in importance compared to today's announcement of the evidence that dark matter exists. :)

The new "rainbow" color scheme now has the following properties:

1. All colors now have 50% grayscale levels. Specifically, all colors now have an L (level) value of 53 in the L*a*b color model (see http://en.wikipedia.org/wiki/Lab_color). This means if you convert a screenshot to grayscale, the colored numbers will all have the same shade of gray. Yellow becomes brown at L=53, with an RGB value of 131-131-0.

2. Within the constraint of the 50% grayscale level, each color has maximum saturation. This is not as simple as it seems: the RGB color for the brightest pure blue, 0-0-255, has an L value of only 30 (because the eye is not as sensitive to blue), so we have to use the 57% saturated 110-110-255 to get L=53. Green, on the other hand, just requires 0-148-0 for L=53, which is 100% saturated.

3. The hues are not equally spaced numerically, but according to how the eye is able to distinguish them. I determined this empirically by comparing the distinguishability of adjacent hues on an LCD monitor, using a program I wrote for that purpose. For example, the eye can distinguish more hues between red and green than between green and blue. This was a problem with the old colors, which seemed to have too many indistinguishable blue-greens. Now, as experimentally determined, the transition from green to blue represents only 21% of the color values.

You can see the new color spectrum at the top of a theorem list page such as http://us2.metamath.org:8888/mpegif/mmtheorems.html. The spectrum position to RGB conversion is done by the function spectrumToRGB in mmwtex.c of Metamath version 0.07.21 (20-Aug-2006).

(16-Aug-2006) abfii5 is the last in the series, and just chains together the others to obtain the final connection between the version used by subbas and the version of the left-hand side that does not involve equinumerosity. This can allow us to express subbas in more elementary terms, if we wish.

(14-Aug-2006) abfii4 is an interesting brainteaser: we show that

  |^| { x | ( A (_ x /\ A. y ( ( y (_ x /\ -. y = (/) /\ E. z e. om y ~~ z )
      -> |^| y e. x ) ) }

is equal to

  |^| { x | ( A (_ x /\ A. y ( ( y (_ A /\ -. y = (/) /\ E. z e. om y ~~ z )
      -> |^| y e. x ) ) }

The second differs from the first by only one symbol: the "x" is changed to "A".
At first, it superficially looked like this would be an elementary theorem involving set intersection. But it turned out quite difficult - not only the proof itself, but also the fact that it needs some subtheorems that are somewhat difficult or nonintuitive in themselves (fiint, intab via abfii2, abfii3, abexssex [based on the Axiom of Replacement], intmin3), involving the theories of ordinals, equinumerosity, and finite sets. abfii2 and abfii3 are each used twice, for different purposes, in different parts of the proof. I found this quite a challenge to prove and would be most interested if anyone sees a more direct way of proving this. (Note that "E. z e. om y ~~ z" just means "y is finite". A simpler way of stating this is "y ~< om", but that requires the Axiom of Infinity, which I wanted - and was able - to avoid for this proof.) abfii4 arose while looking at equivalent ways to express the collection of finite intersections of a set, which determines a basis for a topology (see theorem subbas). Textbooks never seem to mention the exact formal ways of saying this, but just say, informally, "the collection of finite intersections."

(11-Aug-2006) With today's 3bitr4rd, we now have the complete set of all 8 possibilities for each of the 3-chained-biconditional/equality series 3bitr*, 3bitr*d, 3eqtr*, and 3eqtr*d.

(9-Aug-2006) If you are wondering why it is "topGen" and not "TopGen", a standard I've been loosely following is to use lowercase for the first letter of classes that are ordinarily used as functions, such as sin, cos, rank, etc. Of course you will see many exceptions, mainly because I try to match the literature when the literature gives something a name.

(7-Aug-2006) intab is one of the oddest "elementary" set theory theorems I've seen. It took a while to convince myself it was true before I started to work out the proof, and the proof ended up rather long and difficult for this kind of theorem. Yet the only set theory axiom used is Extensionality. Perhaps I just have a mental block - if anyone sees a simpler proof let me know. I haven't seen this theorem in a book; it arose as part of something else I'm working on.

Two new definitions were added to Hilbert space, df-0op and df-iop. The expressions ( H~ X. 0H ) and ( I |` H~ ), which are equivalent to them, have been used frequently enough to justify these definitions.

(4-Aug-2006) 3eqtrr, 3eqtr2, 3eqtr2r complete the 8 possibilities for 3 chained equalities, supplementing the existing 3eqtr, 3eqtr3, 3eqtr3r, 3eqtr4, 3eqtr4r. I was disappointed that the 3 new ones reduce the size of only 33 (compressed) proofs, taking away a total of 185 characters from them, whereas the new theorems by themselves increase the database size by 602 characters. So, the database will grow by a net 417 characters, and the new ones don't "pay" for themselves. Nonetheless, O'Cat convinced me that they should be added anyway for completeness. He wrote:

  I would vote for adding them even though the net change is PLUS 400
  some bytes. It just makes unification via utilities like mmj2 much
  easier -- input the formula and let the program find a matching
  assertion. Esp. now that you've done the work to analyze them, it is
  illogical not to make the change: eventually enough theorems will be
  added to save the added bytes. And within 10 years when we have
  bigger, faster memories people would think it quaint to be so stingy
  with theorems that serve a valid purpose.
  We're going to have PC's with a minimum of 1GB RAM next year as
  standard, and that will just grow as the new nanotech advances.

So, I guess I should also complete the 3eqtr*d, 3bitr*, and 3bitr*d families at some point.

(2-Aug-2006) fiint replaces fiintOLD.

(24-Jul-2006) Note that the description of today's sucxpdom says it has "a proof using AC", and you can see in the list of axioms below the proof that ax-ac was used. Exercise: How does ax-ac end up getting used? Metamath has no direct command to show you this, but it is usually easy to find by examining the outputs of the following two commands:

  1. show trace_back sucxpdom
     ... numth2 fodom fodomg fnrndomg ccrd($a) df-card($a) cardval
     cardon cardid cardne carden carddomi carddom cardsdom domtri
     entri entri2 sucdom unxpdomlem unxpdom
                                    ^^^^^^^
  2. show usage ax-ac / recursive
     ...imadomg fnrndomg unidom unidomg uniimadom uniimadomf iundom
     cardval cardon cardid oncard cardne carden cardeq0 cardsn carddomi
     carddom cardsdom domtri entri entri2 entri3 sucdom unxpdomlem
     unxpdom unxpdom2 sucxpdom...
     ^^^^^^^

The theorem we want is the last theorem in the first list that appears in the second list. In this case, it is unxpdom. And, indeed, we see that unxpdom appears in proof step 43 of sucxpdom, and we can check that unxpdom uses ax-ac for its proof. Of course, there may be other theorems used by the sucxpdom proof that require ax-ac, and the only way to determine that is to see if any of the other theorems used in the sucxpdom proof appear in the second list. But for a quick indication of what ax-ac is needed for, the method above can be useful.

(21-Jul-2006) oev2, etc. are part of Mr. O'Cat's cleanup project for some of the remaining *OLDs.

(20-Jul-2006) istps5 is important because it reassures us that our definitions of Top and TopSp match exactly the standard textbook version, even though the latter is more complex when the words are converted to symbols. I don't think istps5 will have an actual practical use, though, because of the simpler theorems we have available.

(18-Jul-2006) It is awkward to eliminate the "J e. V" hypothesis from istop and isbasis/isbasis2 when it is redundant (as in most uses) - it requires working with a dummy variable then elevating it to a class variable with vtoclga - so I'm replacing these with "g" versions that have "J e. V" as an antecedent (actually "J e. A" as customary, to allow easy use of ibi when we need only the forward implication). I think the non-g versions will be used so rarely that it's not worth keeping them, so they will be deleted, and it is trivial to use instead the "g" version + ax-mp. [Update: istop, isbasis, and isbasis2 have now been deleted.] I also modified Stefan's 0opn, uniopn to use an antecedent instead of a hypothesis.

(17-Jul-2006) It is interesting that Munkres' definition of "a basis for a topology" can be shortened considerably. Compare http://us2.metamath.org:8888/mpegif/isbasis3g.html (Munkres' version) with http://us2.metamath.org:8888/mpegif/isbasisg.html (abbreviated version). Munkres' English-language definition is (p. 78):

  "Definition. If X is a set, a _basis_ for a topology on X is a
  collection _B_ of subsets of X (called _basis elements_) such that
  (1) For each x e. X there is at least one basis element B containing
  x. (2) If x belongs to the intersection of two basis elements B1 and
  B2, then there is a basis element B3 containing x such that
  B3 (_ B1 i^i B2."
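(A word-for-word symbolization of Munkres' two clauses - a rough sketch, not the actual statement of isbasis3g - would look something like

  A. x e. X E. y e. B x e. y
  A. y e. B A. z e. B A. x e. ( y i^i z ) E. w e. B
      ( x e. w /\ w (_ ( y i^i z ) )

which makes it easy to see how much the abbreviated version saves.)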
-------

The symbols and statement labels for topology were changed in order to conform more closely to the terminology in Munkres, who calls a member of our (old) Open a "topology" and calls a member of our (old) Top a "topological space".

  Old symbol   New symbol
  ----------   ----------
  Top          TopSp
  Open         Top

  Old label    New label
  ---------    ---------
  ctop         ctps
  cope         ctop
  df-top       df-topsp
  df-open      df-top
  dfopen2      dftop2
  optop        eltopsp
  elopen1      istop
  elopen2      uniopnt
  op-empty     0opnt
  empty-op     sn0top
  op-indis     indistop

-------

If J is a topology on a set X, then X is equal to the union of J. For this reason, the first member of a topological space is redundant. I am doubtful that the topological space definition will offer any real benefit and am considering deleting it. If anyone knows of a good reason to keep it, let me know.

(12-Jul-2006) I added definitions for topology (df-open and df-top), and added some theorems that Stefan Allan proved back in Feb. and March. Note that "e. Open" and "e. Top" are equivalent ways of saying something is a topology, which will be shown by the (in progress) theorem "( <. U. A , A >. e. Top <-> A e. Open )", and "e. Open" is often simpler.

(7-Jul-2006) Over 100 *OLD's were removed from the database, and 83 remain, 27 of which are in the Hilbert space section. After subtracting the *OLD's, there are 6053 theorems in the non-Hilbert-space part and 1181 for Hilbert space, a total of 7234. This is the first time I have become aware that the Metamath Proof Explorer has officially passed the 6000 theorem "milestone", which happened 53 (green) theorems ago. The 6000th theorem was mt4d, added on 18-Jun-2006.

(6-Jul-2006) With the updated projlem31, all uses of ~~>OLD have been eliminated. Soon I will remove the corresponding *OLD theorems from the database.

(5-Jul-2006) With ege2le3, we have finally completed the non-Hilbert-space conversion of ~~>OLD based theorems, so that no theorem in the non-Hilbert-space section depends on ~~>OLD anymore. We can't delete their *OLD theorems yet, because ~~>OLD is still used in a few places in the Hilbert space section, but that should happen soon. (If you type "show usage cliOLD/recursive" you will see, before the Hilbert space section, the *OLDs that will be deleted. I will delete them when the Hilbert space stuff is finished being converted to ~~>.) By the way, we can also now delete all uses of sum1oo ("show usage csuOLD /recursive"), which I will do soon.

(4-Jul-2006) Currently there are separate derivations of 2 <_ e, e <_ 3, and 2 <_ e /\ e <_ 3. Eventually, I'll probably delete the first two, but for now my immediate goal is to replace the *OLDs.

(1-Jul-2006) The proof of class2seteq was shortened. Compare the previous version, class2seteqOLD.

(30-Jun-2006) Continuing with the *OLD cleanup project, the future ereALT (to replace the existing ereALTOLD) will be an alternate derivation of ere. The proof of ereALTOLD specifically focuses on the number e and has a much simpler overall derivation than ere, which is a special case of the general exponential function derivation. The lemmas for ereALTOLD are mostly reused to derive the bounds on e (ege2OLD, ele3OLD, ege2le3OLD). Since ereALTOLD ends up being a natural byproduct of those theorems, I have so far kept it even though it is redundant, in order to illustrate a simpler alternate way to prove e is real.
By the way, if you are wondering how the mmrecent page can "predict" a future theorem, the web page generation program simply puts "(future)" after the comment markup for any label not found in the database (and also issues a warning to the user running the program). The mmrecent page is refreshed with the "write recent" command in the metamath program.

(21-Jun-2006) The conversion of cvgcmp3ce to cvgcmp3cet is one of our most complex uses to date of the Weak Deduction Theorem, involving techniques that convert quantified hypotheses to antecedents. The conversion is performed in stages with 2 lemmas. (Eventually I want to look at proving the theorem form directly, with the hope of reducing the overall proof size. But for now my main goal is to replace the *OLDs.)

(20-Jun-2006) As mentioned below, most of the recent convergence stuff like cvgcmp3ce is slowly replacing the *OLDs. My goal is to be able to delete a big batch of *OLDs, used to develop the exponential function df-ef, within 2 weeks. We can't get rid of them until the last piece in the chain of proofs is completed.

(6-Jun-2006) caucvg will replace caucvgOLD as part of the *OLD elimination project. BTW Prof. Wachsmuth doesn't know where this proof is from; he may have originated it: "I wrote many of the proofs years ago and I don't remember a good reference for this particular one" (email, 6-Jun). It is a nice proof to formalize because it is more elementary than most, in particular avoiding lim sups. (We have df-limsup defined, but it needs to be developed. As you can see, df-limsup is not so trivial. One of its advantages is that it is defined - i.e. is an extended real - for all sequences, no matter how ill-behaved.)

(5-Jun-2006) kbass2t is the 2nd bra-ket associative law mentioned in the notes of 17-May-2006.

(4-Jun-2006) The description of ax-un was made clearer based on a suggestion from Mel O'Cat.

(2-Jun-2006) unierr is our first theorem related to qubits and quantum computing. In a quantum computer, an algorithm is implemented by applying a sequence of unitary operations to the computer's qubit register. A finite number of gates selected from a fixed, finite set cannot implement an arbitrary unitary operation exactly, since the set of unitary operations is continuous. However, there is a small set of quantum gates (unitary operations) that is "universal," analogous to the universal set of gates AND and NOT for classical computers, in the sense that any unitary operation may be approximated to arbitrary accuracy by a quantum circuit involving only those gates. Theorem unierr tells us, importantly, that the worst-case errors involved with such approximations are only additive. This means that small errors won't "blow up" and destroy the result in the way that, say, a tiny perturbation can cause completely unpredictable behavior in weather prediction (the "butterfly effect").

(1-Jun-2006) Someone complained about not being able to understand infpn (now called infpn2), so I decided to make the "official" infpn use only elementary notation.

(27-May-2006) The label of the ubiquitous 'exp' (export) was changed to 'ex' to prevent it from matching the math token 'exp' (exponential function), in anticipation of the 24-Jun Metamath language spec change. Normally I don't mention label changes here - they're documented at the top of set.mm - but ex, used 724 times, is the 5th most frequently used theorem, so this change will almost certainly impact any project using set.mm as a base.
Although ex is of course not new, I re-dated it to show up in today's "Most Recent Proofs."

(25-May-2006) I think csbnest1g is interesting because the first substitution is able to "break through" into the inside of the second one in spite of the "clashing x's". Compare csbnestg, where x and y must be distinct. I found it a little tricky to prove, and I wasn't even sure if it was true at first.

(18-May-2006) kbasst proves one of the associative laws mentioned yesterday:

  Dirac:    ( |A> <B| ) |C> = |A> ( <B|C> )
  Metamath: ( ( A ketbra B ) ` C ) = ( ( ( bra ` B ) ` C ) .s A )

(17-May-2006) Dirac bra-ket notation deciphered

Most quantum physics textbooks give a rather unsatisfactory, if not misleading, description of the Dirac bra-ket notation. Many books will just say that <A|B> is defined as the inner product of A and B, or even say that it _is_ the inner product, then go off and give mechanical rules for formally manipulating its "components" <A| and |B>. For physicists, "formally" means "mechanically but without rigorous justification."

If the dimensions are finite, there is no problem. A finite-dimensional Hilbert space is essentially linear algebra, and it is possible to prove that any vector and linear operator can be represented by an n-tuple (column vector) and matrix respectively. In finite dimensions, we just define |A> to be a column vector, <A| to be the row vector that is its conjugate transpose, and <A|B> to be their matrix product (the inner product). In infinite dimensions, the rigorous approach is to say that |A> is a member of Hilbert space (a vector) and call <A| a functional whose value <A|B> equals the inner product of A and B. While this solves some of the problems, mysteries remain, such as, what is the "outer product" |A><B|, and what justifies formal manipulations such as ( |A> <B| ) |C> = |A> ( <B|C> )? The bra and ketbra definitions give these a precise meaning; in particular, they allow ( <A|B> ) to be defined as ( B .i A ). So now we have the "best of both worlds" and can choose either <A|B> (which physicists consider synonymous with inner product) or ( A .i B ) = <B|A>, to match whatever text we're working with.

There are actually 4 kinds of objects that result from different bra and ket juxtapositions: complex numbers, vectors, functionals, and operators. This is why juxtaposition is not "exactly" a product, because its meaning depends on the kind of objects that the things being juxtaposed represent. The starting operations on vectors are as follows:

                                                         Finite dim.
  Operation      Notation  Metamath        Value           analogy
  ket            |A>       A               vector          column vector
  bra            <A|       ( bra ` A )     functional      row vector
  inner product  <A|B>     ( B .i A )      complex number  complex number
  outer product  |A><B|    ( A ketbra B )  operator        matrix

<A|B> can also be expressed as ( ( bra ` A ) ` B ) - as today's theorem bravalvalt shows - and this will be needed to use the table below (in line 5). (Lines 3 and 4 in the above table are redundant, since they are special cases of lines 5 and 4 below; line 4 above is computed ( A ketbra ( `' bra ` ( bra ` B ) ) ) = ( A ketbra B ).)

We will represent the four kinds of possible results in the "Value" column above as v, f, c, and o respectively. After accounting for the restrictions on juxtaposing bras and kets (e.g., we can never have an inner product followed by a ket), exactly the following cases can occur when two Dirac subexpressions T and U are juxtaposed to produce a new Dirac expression TU:

  T  U   TU  Metamath operation           Description
  c  c   c   ( T x. U )                   Complex number multiplication
  c  f   f   ( T .fn U )                  Scalar product with a functional
  v  c   v   ( U .s T )                   Scalar product (note T & U swap)
  v  f*  o   ( T ketbra ( `' bra ` U ) )  Outer product (with converse bra)
  f  v   c   ( T ` U )                    Inner product (bra value)
  f  o   f   ( T o. U )                   Functional composed with operator
  o  v   v   ( T ` U )                    Value of an operator
  o  o   o   ( T o. U )                   Composition of two operators

  *Note: In line 4, U must be a continuous linear functional (which will
  happen automatically if U results from a string of kets and bras).
This is needed by the Riesz theorem riesz2t, which allows the inverse bra to work. The other lines have no restriction. See df-hfmul for ".fn", df-co for "o.", and df-cnv for "`'". Line 5 can be stated equivalently:

  f* v   c   ( U .i ( `' bra ` T ) )      Inner product (with converse bra)

So, the "operation" of juxtaposition of two Dirac subexpressions can actually be any one of 8 different operations! This is why we can't easily express the Dirac notation directly in Metamath, since a class symbol for an operation is supposed to represent only one object.

Supplementary note: Physics textbooks will often have equations with an operator sandwiched between a bra and a ket. Its juxtaposition with a bra or ket also now becomes easy to formalize: match an entry from the table above where the operator corresponds to an "o" input.

As an example of the use of the above table, consider the associative laws above and their set-theoretical (Metamath) translations, which we will eventually prove as theorems in the database.

  Dirac:    ( |A> <B| ) |C> = |A> ( <B|C> )
  Metamath: ( ( A ketbra B ) ` C ) = ( ( ( bra ` B ) ` C ) .s A )

  Dirac:    ( |A> <B| ) ( |C> <D| ) = |A> ( <B| ( |C> <D| ) )
  Metamath: ( ( A ketbra B ) o. ( C ketbra D ) )
              = ( A ketbra ( `' bra ` ( ( bra ` B ) o. ( C ketbra D ) ) ) )

There you have it, a complete formalization of Dirac notation in infinite dimensional Hilbert space! I've never seen this published before. For an intuitive feel for the table above, it can be useful to compare the finite dimensional case using vectors and matrices. Suppose A and B are column vectors of complex numbers

      [ a_1 ]        [ b_1 ]
  A = [ a_2 ]    B = [ b_2 ]

Then |B> is the same as B, <A| becomes the conjugated row vector [ a_1* a_2* ], so that <A|B> becomes the inner product a_1* x. b_1 + a_2* x. b_2, and |B><A| becomes the outer product matrix

  [ b_1 x. a_1*   b_1 x. a_2* ]
  [ b_2 x. a_1*   b_2 x. a_2* ]

A note on distinct variables: in our axiom system, two set variables are not required to be distinct unless a $d condition says so; for example, x = y -> y = x and A. x A. y ph <-> A. y A. x ph hold whether or not x and y are distinct. To accommodate this, we could have a notation representing the opposite of a $d that says two variables are not necessarily distinct. Alternately, we could adopt the approach of restating the axiom system so that set variables are always distinct, as described on http://planetx.cc.vt.edu/AsteroidMeta//mmj2Feedback (search for "Here is an idea"). However, in set theory and beyond, situations where set variables are not required to be distinct are not very common.

(10-May-06) A reader comments at http://www.jaredwoodard.com/blog/?p=5 that he wishes I'd spend less time on Hilbert space and more on cleaning up *OLDs. The cleaning up of *OLDs has actually been happening "behind the scenes" even though people may not notice it. Almost 200 *OLDs have been eliminated since January (there were 380 then; now there are 185). Yesterday's geoser and expcnv will eliminate their corresponding *OLDs, for example. Mel O'Cat is also working on a list of 27 of the *OLDs. I realize hardly anyone cares about the Hilbert space stuff, and regular visitors are probably bored to tears seeing the dreary purple theorems day after day. :) (I do try not to go too long without a pretty green one now and then.) My long term goal is to build a foundation that will let me explore rigorously some new ideas I have for Hilbert lattice equations that may lead to writing a new paper. I also want to build up a foundation for theorems related to quantum computing. In a few days hopefully we will have a theorem related to error growth in qubits (quantum gates).

(9-May-06) Compare geoser to the old one, geoserOLD, that it replaces.
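(For reference, the finite geometric series identity behind geoser - a sketch in conventional notation, not the exact set.mm statement - is, for r =/= 1,

  1 + r + r^2 + ... + r^(N-1) = ( 1 - r^N ) / ( 1 - r )

and combining it with expcnv, which gives r^N ~~> 0 when | r | < 1, yields the familiar infinite sum 1 + r + r^2 + ... = 1 / ( 1 - r ).)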
(5-May-06) The Riesz representation theorem is used to justify the existence
and uniqueness of the adjoint of an operator. In particular, the rigorous
justification of Dirac bra-ket notation in quantum mechanics is dependent on
this theorem. See also Wikipedia:
http://en.wikipedia.org/wiki/Riesz_representation_theorem

(13-Apr-06) One thing to watch out for in the literature is how the author
defines "operator". I put some notes at
http://us2.metamath.org:8888/mpegif/df-lnop.html on the various definitions:
for some they are arbitrary mappings from H to H, for others they are linear,
for still others they are linear and bounded. In set.mm, "operator" means an
arbitrary mapping. Today's goal is: a linear operator is continuous iff it is
bounded. This will be called "lncnbd" when it is completed later today.
lnopcon provides the basic proof of this: it is not a trivial proof, 220
steps long, and to me it is non-intuitive. Many authors forget about the case
of the trivial Hilbert space, where sometimes a result holds and other times
not. lnopcon does hold, but we have to prove the trivial case separately, and
in step 219 we combine the nontrivial case of step 194 with the trivial case
of step 218.

(12-Apr-06) The astute observer may have noticed that the dates on the
"Recent Additions to the Metamath Proof Explorer" now have 4-digit years,
e.g. 12-Apr-2006 instead of 12-Apr-06. Version 0.07.15 of the metamath
program implements 4-digit date stamps, and set.mm has been updated with
4-digit years. The program still recognizes 2-digit years (for the 'write
recent' command) but assumes they fall between 1993 and 2092.

(10-Apr-06) The reason the xrub proof is long is that it involves 9 cases: 3
cases for B (real, -oo, and +oo), and for each B, 3 cases for x. (We handle
some of them simultaneously in the proof.) This theorem means we only have to
"scan" over the reals, rather than all of the extended reals, in order to
assert that B is an upper bound for set A. This is true even if A contains
non-reals or if B is non-real (+oo or -oo). When we quantify over all
extended reals, often we have to consider the 3 cases of real, -oo, +oo
separately. The advantage of this theorem is that we don't have to handle the
last two cases anymore, so some proofs will become significantly shorter as a
result.

(6-Apr-06) The proof of unictb is very different from Enderton's, which I
found somewhat awkward to formalize. Instead, it is effectively a special
case of Takeuti/Zaring's much more general uniimadom. It is also interesting
how simple it is to go from the indexed union to the regular union version of
a theorem, whereas it can be rather difficult in the other direction. For
example, iunctb to unictb is trivial through the use of uniiun. But for the
finite version of this theorem, compare the difficulty of going from the
regular union version unifi to iunfi, requiring the not-so-trivial fodomfi,
which was proved for this purpose. The conversion of unifi to iunfi involved
substituting z for x and { y | E. x e. A y = B } for A in unifi, using
dfiun2 on the consequent, and doing stuff to get the antecedents right. The
"doing stuff" ends up being not so simple. Perhaps if I had to do it over, it
might have been simpler to prove iunfi first, then trivially obtain unifi via
uniiun, although I'm not really sure. Both iunctb and iunfi are intended
ultimately to be used by a Metamath development of topology, which Stefan
Allan has started to look at.
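To spell out the easy direction (my gloss of the two theorems cited, so take
the exact forms with a grain of salt): dfiun2 turns an indexed union into a
plain union of a class abstraction, and uniiun expresses any plain union as
an indexed union, so a theorem about indexed unions instantly yields its
plain-union version by instantiating B := x:

   % dfiun2: indexed union as a plain union of a class abstraction
   \bigcup_{x \in A} B \;=\; \bigcup \{\, y \mid \exists x \in A,\; y = B \,\}
   % uniiun: plain union as an indexed union
   \bigcup A \;=\; \bigcup_{x \in A} x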
(1-Apr-06) Today's avril1 http://us2.metamath.org:8888/mpegif/avril1.html is
a repeat of last year's, except for a small change in the description. But I
bring it up again in order to reply to last year's skeptics. Unlike what some
people have thought, there is nothing fake about this theorem or its proof!
Yes, it does resemble an April Fool's prank, but the mathematics behind it is
perfectly rigorous and sound, as you can verify for yourself if you wish. It
is very much a valid theorem of ZFC set theory, even if some might debate its
relative importance in the overall scheme of things. The only thing fake is
that Prof. Lirpa uses a pseudonym, since he or she wishes to remain
anonymous. Tarski really did prove that x=x in his 1965 paper. While it is
possible he wasn't the first to do so, he did not attribute the theorem to
anyone else.

The theorem -. ( A P~ RR ( i ` 1 ) /\ F (/) ( 0 x. 1 ) ) importantly tells us
we cannot prove, for example, ( A P~ RR ( i ` 1 ) /\ F (/) ( 0 x. 1 ) ) if
ZFC is consistent. If we utter the latter statement, that will indeed be a
hilarious joke (assuming ZFC is consistent) for anyone who enjoys irony and
contradiction! But anyone who could prove the latter statement would achieve
instant notoriety by upsetting the very foundation that essentially all of
mathematics is built on, causing it to collapse like a house of cards, into a
pile of (Cantor's) dust that would blow away in the wind. That assumes, of
course, that the paradox is not hushed up by the established mathematical
community, whose very livelihoods would be at stake. In that case, the
discoverer might achieve wealth instead of fame. So, in effect the theorem,
being preceded by the "not" sign, really tells us: "I am _not_ an April
Fool's joke." Thus we are reminded of the Liar Paradox, "this sentence is not
true," but with an important difference: paradoxically, avril1 is not a
paradox.

For those whose Latin is rusty, "quidquid germanus dictum sit, altum videtur"
means "anything in German sounds profound." Just as logicians have done with
Latin ("modus ponens" and so on), set theorists have chosen German as their
primary obfuscating language. For example, set theory texts lend great
importance and mystery to the otherwise trivial subset axiom by calling it
"Aussonderung." This helps keep the number of set theorists at a comfortable
level by scaring away all but a few newcomers, just enough to replace those
retiring.

To derive avril1, we have used an interdisciplinary approach that combines
concepts that are ordinarily considered to be unrelated. We have also used
various definitions outside of their normal domains. This is called "thinking
outside of the box." For example, the imaginary constant i is certainly not a
function. But the definition of a function value, df-fv, allows us to
substitute any legal class expression for its class variable F, and i is a
legal class expression. Therefore ( i ` 1 ) is also a legal class expression,
and in fact it can be shown to equal the empty set, which is the value of
"meaningless" instances of df-fv, as shown for example by theorem ndmfv.

   http://us2.metamath.org:8888/mpegif/df-fv.html
   http://us2.metamath.org:8888/mpegif/ndmfv.html

Now that the technique has been revealed, I hope that next year someone else
will make a contribution. You have a year to work on it.
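As a quick sketch of the "meaningless value" computation (my paraphrase - the
precise historical form of df-fv differs from today's, so treat the middle
step as schematic): since no y at all satisfies 1 i y, the value collapses to
the empty set, which is what ndmfv records:

   % Out-of-domain function value:
   ( \mathrm{i} ` 1 ) \;=\; \bigcup \{\, y \mid \langle 1 , y \rangle \in \mathrm{i} \,\}
                      \;=\; \bigcup \varnothing \;=\; \varnothing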
(28-Mar-06) sspr eliminates the hypotheses of the older version, which has
been renamed to ssprOLD.

(27-Mar-06) As of today's version of set.mm, 183 out of the 315 theorems with
names ending "OLD" were removed, so there are only 132 *OLDs left. This has
made set.mm about 300KB smaller. (The 132 remaining can't just be deleted,
since they currently are referenced by other proofs, which will have to be
revised to eliminate their references. Mel O'Cat has started working on some
of them.)

(20-Mar-06) Stefan has done the "impossible," which is find an even shorter
proof of id. (See the note of 18-Oct-05 below.) His new proof strictly meets
the criteria I use for accepting shorter proofs (described at the top of the
set.mm file). He writes, "Too bad you don't get a special prize for
shortening this one!" I agree; any suggestions? About a1d, he writes, "[a1d]
is not a shorter proof in compressed format, and is in fact the same size as
the old one. However it has fewer steps if expanded out into axioms, so you
might want to include it in set.mm anyway."

(24-Feb-06) Stefan's sylcom proof has 1 fewer character in set.mm than the
previous, and 9 fewer characters in its HTML file. I think we may be
approaching the theoretical limit. :)

(22-Feb-06) The proof of efcj uses some obsolete theorems with the old
convergence ~~>OLD, but I don't have the updated ones ready yet and just
wanted to get efcj out of the way since we will need it for more
trigonometry. Eventually the proof of efcj will be updated. Note that
"obsolete" doesn't mean "unsound"; the proof is perfectly rigorous. The
purpose of the new notation is to make proofs more efficient (shorter) once
everything is updated with it.

(17-Feb-06) efadd was completed a little sooner than I thought. Here are some
statistics: the set.mm proof (efadd + 28 lemmas) takes 47KB (out of 4.5MB for
set.mm). The HTML pages take 4.7MB (out of 372MB total for mpegif).

(13-Feb-06) Over the next couple of weeks, we will be proving what has turned
out to be a difficult theorem - the sum of exponents law for the exponential
function with complex arguments, i.e. e^(a+b)=e^a.e^b. Even though textbook
proofs can seem relatively short, the ones I've seen gloss over many of the
tedious details. After several false starts I came up with a proof using
explicit partial sums of product series and explicit comparisons for the
factorial growth (we will use the recent fsum0diag and faclbnd4 for this).
The whole proof will have around 30 lemmas.

(12-Feb-06) nonbool demonstrates that the Hilbert lattice is non-Boolean.
This proves that quantum logic is not classical. Of course this is well
known, but I've only seen it stated without proof, so I came up with a formal
demonstration. It seems the dimension must be at least 2 to demonstrate
non-Boolean behavior. Note that we have already shown that it is orthomodular
(in pjoml4), but Boolean is a special case of orthomodular, so that in itself
doesn't demonstrate that quantum logic is not classical.

(11-Feb-06) Even though the climmul proof is long, I'm not unhappy about it,
since Gleason dedicates over 2 pages to the proof (although some of it is an
informal discussion of how one goes about coming up with such a proof). While
our proof roughly follows Gleason, our trick of constructing a new positive
number less than both A and 1, given a positive number A, is not in Gleason -
he uses the infimum of A and 1 (bottom of page 170), which would be more
awkward (for us at least) to deal with. This trick is proved by recrecltt and
is used in climmullem5.
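For the record, here is why the construction works (my reconstruction - the
name recrecltt suggests the number used is the reciprocal of
reciprocal-plus-one, so treat the exact form as an assumption). For A > 0,
let B = 1/(1/A + 1); since taking reciprocals reverses order on positive
reals:

   % B is positive and below both A and 1:
   \frac{1}{A} + 1 > \frac{1}{A} \;\Rightarrow\; B < A,
   \qquad
   \frac{1}{A} + 1 > 1 \;\Rightarrow\; B < 1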
(9-Feb-06) I found a slightly shorter equivalent for ax-groth expanded to
primitives. The idea was to use fun11 at step 42, so that the old steps 42-60
become 42-48. But the result was a little disappointing. I had higher hopes
for the idea but it only ended up removing one binary connective. At least
the proof is 59 instead of 71 steps. (The old one has been kept temporarily
as grothprimOLD.) Probably the biggest problem is the repeated use of
grothlem (4 times) to expand binary relations. I wonder if there is a shorter
way to effectively express this concept.

(8-Feb-06) dummylink was added for a project to interface O'Cat's mmj2 Proof
Assistant GUI with the metamath program's Proof Assistant, but I've
discovered that it can be quite handy on its own as suggested by its
description. For more background see "Combining PA GUI and CLI - an interim
solution?" at the bottom of web page
http://planetx.cc.vt.edu/AsteroidMeta/mmj2ProofAssistantFeedback
(Downloaders - the Metamath download containing this proof will be in
tomorrow's download. In general, the Most Recent Proofs usually take about a
day to propagate to the downloads.)

(4-Feb-06) More shorter proofs by O'Cat - pm2.43d, pm2.43a. rcla4cv ended up
shortening 26 proofs by combining rcla4v and com12. The result was a net
reduction in the database size, even after accounting for the new space taken
by rcla4cv.

(3-Feb-06) Mel O'Cat found shorter proofs for sylcom, syl5d, and syl6d while
having fun with his new toy, the Proof Assistant GUI. Note: the new proofs of
syl5d and syl6d have the same number of logical steps, but proofs are shorter
if we include the unseen wff-building steps. Out of curiosity I restored the
original syl5d proof, since it had already been shortened by Josh Purinton,
and called it syl5dOLDOLD. Here are the complete proofs for the syl5d
versions:

   syl5d: 14 steps
      wph wps wta wch wth wph wta wch wi wps syl5d.2 a1d syl5d.1 syldd $.
   syl5dOLD: 16 steps
      wph wps wch wth wi wta wth wi syl5d.1 wph wta wch wth syl5d.2 imim1d
      syld $.
   syl5dOLDOLD: 19 steps
      wph wta wps wth wph wta wch wps wth wi syl5d.2 wph wps wch wth syl5d.1
      com23 syld com23 $.

(30-Jan-06) Today we start a brand new proof of the binomial theorem that
will be much shorter than the old one. It should also be much more readable.
This is what the new one will look like (A e. CC, B e. CC):

   ( N e. NN0 -> ( ( A + B ) ^ N ) =
       sum_ k e. ( 0 ... N ) ( ( N C. k ) x. ( ( A ^ ( N - k ) ) x. ( B ^ k ) ) ) )

Compare to the old, binomOLD:

   ( ( N e. NN0 /\ A. k e. ( 0 ... N ) ( F ` k ) =
       ( ( N C. k ) x. ( ( A ^ k ) x. ( B ^ ( N - k ) ) ) ) )
     -> ( ( A + B ) ^ N ) = ( <. 0 , N >. sumOLD F ) )
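As a quick numerical sanity check of the new statement (my own sketch, with
Python's math.comb standing in for the choose operation ( N C. k )):

   from math import comb

   # ( ( A + B ) ^ N ) = sum_ k e. ( 0 ... N ) ( N C. k ) * A^(N-k) * B^k
   A, B = 2 + 1j, -0.5 + 3j
   for N in range(8):
       total = sum(comb(N, k) * A**(N - k) * B**k for k in range(N + 1))
       assert abs(total - (A + B)**N) < 1e-9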
(24-Jan-06) (Compare note of 22-Oct-05.) Per the request of Mel O'Cat, I
eliminated all connective overloading in set.mm by making weq, wel, and wsb
"theorems" so that he can use set.mm with his GUI Proof Assistant. This
involved moving set theory's wceq, wcel, and wsbc up into the predicate
calculus section, which is somewhat confusing, so I added extensive
commenting that hopefully explains it. Note that the web page "proofs" of
weq, wel, and wsb have only one step: this is because they are syntax proofs,
and all syntax building steps are suppressed by the web-page generation
algorithm, which doesn't distinguish weq, etc. from "real" theorems. I'm not
yet sure if it's worth changing the algorithm for this special case. To see
the actual steps, in the Metamath program type "show proof weq /all".

(20-Jan-06) supxrcl shows the usefulness of the extended reals: the supremum
of any subset always exists. Compare the non-extended real version suprcl,
which has a complicated antecedent that must be satisfied.

(19-Jan-06) A new set theory axiom, ax-groth, was added to the database. This
axiom is used by Mizar http://mizar.org to do category theory (that ZFC alone
cannot do), and I think it is appropriate to add it to set.mm. One of the
negative aspects of this axiom (aesthetically speaking) is that it is
"non-elementary" and very ugly when expanded to primitives, unlike the ZFC
axioms. I worked out grothprim because I was curious to see what it actually
looks like. I don't think grothprim will actually be used for anything since
it is impractical; instead, ax-groth would be the starting point. However,
grothprim can provide a benchmark for anyone seeking a shorter version. There
may be a theoretical reason why it can't be as short as say ax-ac, but I
don't think anyone knows what the shortest possible equivalent is. mmset.html
has also been updated to include ax-groth below the ZFC axioms. I wrote to
Jeff Hankins: I added ax-groth partly in response to your email on
Mycielski's ST set theory,* although it's been on my backburner for a while.
In my mind, ax-groth "completes" set theory for all practical purposes. (The
Mizar people, who use this axiom, also think so.) Unlike the controversial
assertions of ST, ax-groth is relatively mild and uncontroversial - I don't
know of any debate over it, unlike the debate on the Continuum Hypothesis. I
am pretty sure that Mycielski's Axiom SC implies ax-groth from his comments,
although I haven't worked out a proof. So ZFC+ax-groth is most likely a
subset of ST.

   * http://www.ams.org/notices/200602/fea-mycielski.pdf - free AMS sign-up
     required to view article

(12-Jan-06) The exponential function definition df-ef is new. Yesterday's
version of df-ef was reproved as theorem dfefOLD that will eventually be
deleted after the old infinite summation notation is phased out. Compare the
new exponential function value formula, efvalt, with the old one, efvaltOLD.
Don't you agree that it is much easier to read? This kind of thing makes me
believe that the effort to introduce the new summation notation was
worthwhile. :) In addition, we will have a much nicer version of the binomial
theorem (whose old version has already been renamed to binomOLD), with a much
shorter proof - stay tuned!
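(For context - standard mathematics, my summary, not quoted from the note -
the series behind efvalt is, for a complex argument A:

   \exp A \;=\; \sum_{k=0}^{\infty} \frac{A^k}{k!}

with the new notation writing the right-hand side as a sum over k e. NN0.)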
(11-Jan-06) isumvalOLDnew links the old and new definitions of infinite sum,
allowing us to temporarily reuse theorems in the old notation until they are
phased out. See comments below of 1-Nov-05, 2-Nov-05, 20-Dec-05, and
21-Dec-05 regarding the new finite/infinite sum notation df-sum. The present
definition of the exponential function, df-ef, makes use of the obsolete
infinite sum notation. dfefOLDnew will replace df-ef in the next few days and
become its new official definition. The old df-ef will become a (temporary)
theorem that will be used to support the old infinite sum notation until it
is phased out. gch-kn was updated with new hyperlinks.

(10-Jan-06) Regarding the 9-Jan-06 item in "Also new", the primary reason I
added the "/except" switch to "minimize_with" was to exclude 3syl. While 3syl
may shorten the uncompressed normal proof, it often makes compressed proofs
grow longer. This happens when the intermediate result of two chained syl's
is used more than once. When the intermediate result disappears due to 3syl,
two formerly common subproofs have to be repeated separately in the
compressed proof - part of the compression is achieved by not repeating
common subproofs. So, typically I exclude 3syl then minimize with it
separately to see if the compressed proof shortens or lengthens. Maybe I'll
add an option to also check the compressed proof length instead of or in
addition to the normal proof length, but the "/except" was easier to program,
and curiously 3syl is the only problematic theorem I'm aware of.

(9-Jan-06) climOLDnew is a temporary theorem that links the old and new limit
relations, ~~>OLD and ~~>. This will let us "jump ahead" and work on
exponentiation, etc. with the new notation before cleaning up all the ~~>OLD
stuff (which I will still clean up eventually). The linkage is needed to
avoid any gaps in the released set.mm. (The metamath command "verify proof *"
should always pass for the daily releases of set.mm, ensuring absolute
correctness of its contents - even the stuff that's obsolete.) The main
difference between ~~>OLD and ~~> is that ~~>OLD has the rigid constraint
that the sequence F be a function from NN to CC, whereas ~~> allows F to be
any set with lots of irrelevant garbage in it as long as it eventually has
function values in CC beyond some arbitrary point. This can make ~~> much
more flexible and easier to use. The uppercase "OLD" in climOLDnew means the
theorem will go away; for my cleanup I will be phasing out and deleting all
theorems matching *OLD*. Currently there are 380 *OLD* theorems due to be
phased out. They can be enumerated by counting the lines of output of the
metamath command "show label *OLD*/linear".

(6-Jan-06) syl*el* are all possible combinations of syl5,syl6 analogs for
membership and equality. I added them to shorten many proofs, since these
patterns occur frequently. (Since 1999 I've added the syl*br* versions of
these for binary relations and found them useful, so I decided to add the
syl*el* versions.) There is a curious asymmetry in which ones ended up being
useful: syl5eqel got used over 30 times, whereas syl5eleq isn't used at all
so far. I don't know if this is because I wrote the original proofs with
certain patterns subconsciously repeated, or if there is something more
fundamental.

(3-Jan-06) r19.21aiva adds 319 bytes to the database, but it reduces the size
of about 50 (compressed) proofs by 765 bytes total, for a net reduction in
database size of 466 bytes.

(21-Dec-05) All theorems that involved df-fsum have been updated to use the
dual-purpose (finite and infinite) df-sum instead. So, we now have:

   Definition               Token                 Symbol
   Yesterday    Today       Yesterday   Today     Yesterday    Today
   ----------   ----------  ---------   -------   ----------   -------------
   df-sum       df-sum      sum_NEW     sum_      \Sigma_NEW   \Sigma
   df-fsum      df-fsum     sum_        sum_OLD   \Sigma       \Sigma_OLD
   df-fsumOLD   df-fsumOLD  sumOLD      sumOLD    \Sigma_OLD   \Sigma_OLDOLD

The names with "OLD" are now kind of oddly inconsistent, but everything with
"OLD" in it (whether label or token) will eventually be deleted so it doesn't
really matter.

(20-Dec-05) The new finite sum stuff looks like it will be very useful, and
we will need an infinite sum version to replace the current df-isum. Rather
than repeat the whole development with new equality, bound variable, etc.
utility theorems, I decided to combine the two definitions. The new combined
definition is called df-sum, which is basically the union of two definitions.
The index range (finite or infinite) determines whether the sum is finite or
infinite. See the comments in df-sum. We need about half a dozen utility
theorems. Then, after changing the "sigmaNEW" to "sigma", we can "plug in"
the new definition and re-use the theorems we have already developed for
finite sums without further modification.
fzneuzt is the basic theorem that lets us distinguish the finite half vs. the
infinite half of df-sum.

(14-Dec-05) fsum1sNEW (to be renamed fsum1s) exploits class2set to eliminate
the hypothesis of fsum1slem, so that we require only existence of A(M) as a
member of some arbitrary class B, rather than requiring that it be a complex
number (as yesterday's fsum1s requires). This will shorten future proofs by
allowing us to apply fsum1sNEW directly when A is a real, rational, etc. I
had almost forgotten about class2set, which I think is a neat trick.
Yesterday's fsum1s will be renamed to fsum1sOLD and eventually deleted. I'm
not sure if fsum1s2 will be useful, but it lets us show off an application of
the interesting fz1sbct.

(13-Dec-05) fsum1slem shows an example of a use for the new
substitution-for-a-class notation. Compare it to the implicit substitution
version fsum1. fsum1s turns the hypothesis A e. V of fsum1slem into an
antecedent. Since A is quantified, we have to work a little harder than usual
to accomplish this.

(4-Dec-05) See http://planetx.cc.vt.edu/AsteroidMeta//metamathMathQuestions
for comments on equsb3.

(30-Nov-05) csbnestg is the same as csbnestglem except that it has fewer
distinct variable restrictions. Its proof provides another example of a way
to get rid of them; the key is using a dummy variable that is eliminated with
csbcog. I think it is a neat theorem and was pleasantly surprised that so few
distinct variable restrictions were needed. The antecedents just say that A
and B are sets and are easily eliminated in most uses. By having antecedents
instead of A e. V, B e. V hypotheses, we can make more general use of the
theorem when A and B sethood is conditioned on something else; hence the "g"
after "csbnestg".

(23-Nov-05) In all versions of set.mm from 18-Nov-05 to 22-Nov-05 inclusive,
the htmldef statement for the "QQ" token has an incorrect HTML definition and
needs to be corrected. Thanks to Jeff Hankins for pointing this out.

(18-Nov-05) sbccom is the same as sbccomlem except that it has fewer distinct
variable restrictions. Its proof shows an example of how to get rid of them
when they are not needed.

-------

I made around 80 changes to the bixxx series names to be consistent with
earlier bixxx -> xxbix changes in prop. calc. E.g. biraldv was changed to
ralbidv.

   r  - restricted
   al - for all
   bi - biconditional
   d  - deduction
   v  - $d instead of bound-variable hypothesis

Also, bi(r)abxx were changed to (r)abbieqxx, e.g. biabi was changed to
abbieqi.

   ab - class abstract (class builder)
   bi - hypothesis is biconditional
   eq - conclusion is equality
   i  - inference

As usual, all changes are listed at the top of set.mm, and as instructed
there can be used to create a script to update databases using set.mm as
their base. As always, better naming suggestions are welcome.

(17-Nov-05) abidhb is a very neat trick, I think! It will let us do things
that the Weak Deduction Theorem by itself can't handle. For its first use, we
create a "deduction" form of the bound-variable hypothesis builder for
function values, hbfvd - this is actually a closed form that allows _all_
hypotheses to be eliminated, since 'ph' is arbitrary and can be replaced with
a conjunct of the hypotheses. And the only thing hbfvd needs is the
"inference" version hbfv in step 5! Before I thought of abidhb, hbfvd was
going to require a long chain of hbxxd's (hbimd, hbabd,...) that would build
up to the function value definition. I was actually starting to get depressed
about the amount of work that would have been needed.
But as they say, laziness is the mother of invention. Now, we can just add
hbxxd's as needed, starting from the hbxx "inference" versions!

(15-Nov-05) Note that fsumeq2 requires a $d for A and k, whereas fsumeq1
doesn't. On the other hand, we have analogously iuneq1 and iuneq2, neither of
which require the bound variable to be distinct! I spent a lot of time trying
to get rid of it for fsumeq2 by changing the definition df-fsum, but it
always seemed that if I got rid of it in fsumeq2 it would show up in fsumeq1.
So I don't know whether it is theoretically possible to get rid of it. In the
current version of the fsumeq2 proof, the $d is needed to satisfy resopab in
steps 9 and 10. Getting rid of $d A k in fsumeq2 would be advantageous if I
add an "explicit substitution" form of induction like (for ordinals) Raph
Levien's findes, where the hypothesis findes.2 has the substituted variable
free in the expression to be substituted. So, if anyone can solve this, let
me know!

(14-Nov-05) Today we introduce a new definition, df-csbc, the proper
substitution of a class variable for a set into another class variable. We
use underlined brackets to prevent ambiguity with the wff version, otherwise
[ x / y ] A R B could mean either x e. { y | A R B } for the df-sbc wff
version or <. [ x / y ] A , B >. e. R for the df-csbc class version. So
instead we use [_ x / y ]_ A for the class version. One reason I chose the
underline is that it is easy to do in Unicode and LaTeX, but if you have
another idea for the notation let me know. See notes of 5-Nov-05 for other
notes on the definition.

(13-Nov-05) I decided to make the new finite summation notation df-fsum
official. The old has been renamed to df-fsumOLD. I am uncertain about
whether to keep the old (under a different name yet to be determined) or
delete it eventually. There are 61 theorems using it (21 of which are the
binomial theorem binom) which I hope to eventually re-prove with the new
notation.

(5-Nov-05) Regarding sbabex: The notation "[ y / x ] ph" means "the proper
substitution of y for x in phi". We do not have a separate notation for the
class version of this, so until such time (if it becomes common enough to
warrant a special notation), the idiom "{ z | [ y / x ] z e. A }" means "the
proper substitution of y for x in class variable A". In other words we turn
the class into a wff - the predicate "is in A" - then do the proper
substitution, and finally turn the result back into a class by collecting all
sets with this predicate. I think that's a neat trick, and it will become the
definition if we introduce a notation for it. Note that the argument of
"[ y / x ]" is "z e. A", which is a wff.

(2-Nov-05) I have about a dozen theorems in progress with the current
'df-fsum' notation, that I might as well finish before switching to the new
notation. These will provide reference proofs that will make the
corresponding versions in the new notation easier to prove, but they will
eventually be deleted (assuming I adopt the new notation, whose definition
I'm still fine tuning.) Not all theorems will be shorter with the new
notation, which is one reason for my indecision. For example:

   Old (fsumserz):
      |- F e. V
      => |- ( N e. ( ZZ> ` M ) ->
              ( <. M , N >. sum F ) = ( ( <. M , + >. seq F ) ` N ) )

   New (fsumserzNEW):
      |- F e. V
      => |- ( N e. ( ZZ> ` M ) ->
              sumNEW k e. ( M ... N ) ( F ` k ) = ( ( <. M , + >. seq F ) ` N ) )
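A loose programming analogy (mine, not the note's) for the difference: the
old notation requires packaging the summand as a standalone function object F
before a separate builder can sum it, while the new notation writes the
summand in place under a bound variable k, the way a generator expression
does:

   # Old style: summand packaged as a named function, handed to a builder
   # (the analog of "( <. M , N >. sum F )").
   def partial_sum(F, M, N):
       total = 0
       for k in range(M, N + 1):
           total += F(k)
       return total

   F = lambda k: k ** 2
   old_style = partial_sum(F, 1, 4)

   # New style: summand written in place with bound variable k
   # (the analog of "sumNEW k e. ( 1 ... 4 ) ( k ^ 2 )").
   new_style = sum(k ** 2 for k in range(1, 5))

   assert old_style == new_style == 30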
(1-Nov-05) The proof of the binomial theorem painfully illustrates that the
current notation for summations is very awkward to work with, particularly
with nested summations. A new definition I'm experimenting with is
df-fsumNEW, which, unlike df-fsum (which is a class constant with no
arguments), has a dummy variable k and two other arguments. To indicate the
sum A^1 + A^2 +...+ A^N, we can write

   sumNEW k e. ( 1 ... N ) ( A ^ k )

instead of the present

   ( <. 1 , N >. sum { <. k , y >. | ( k e. ZZ /\ y = ( A ^ k ) ) } )

(where usually the class builder is stated as a hypothesis like
F = { <. k , y >. | ( k e. ZZ /\ y = ( A ^ k ) ) } to keep the web page proof
size down). Nested sums are even more awkward, as the hypothesis "G =" in the
binomial lemmas shows. With the new notation, the binomial theorem would
become:

   ( N e. NN0 -> ( ( A + B ) ^ N ) =
       sumNEW k e. ( 0 ... N ) ( ( N C. k ) x. ( ( A ^ k ) x. ( B ^ ( N - k ) ) ) ) )

The price we pay is that 'sumNEW' is not just a set-theoretical class
constant like 'sum', but instead a symbol with arguments and a bound
variable, analogous to indexed union df-iun. In particular, its soundness
verification, while still simple, is not as "trivial" as with new class
constants. There is nothing wrong with this in principle, but it is contrary
to my simplicity goal of introducing only new class constants for new
definitions, thus keeping the number of "primitive" syntactical structures to
a bare minimum. But in this case I think practicality will win out. The
proofs should be more elegant with 'sumNEW' (later to be changed to 'sum' if
I decide to keep it), and I also think it is more readable. Of course,
soundness justification will not be an issue with the eventual Ghilbert
version.

To further elaborate on my simplicity preference (for which df-fsumNEW will
be an exception), below I reproduce my response to an email Josh Purinton
wrote me (on Oct. 18) regarding the notation for fzvalt.

   > Consider using square brackets for 'compatibility' with the
   > distinction between a closed and open interval.

My response: I understand what you are getting at, but there is a slight
problem. df-fz is just the class symbol "..." which is used as an operation,
and the parentheses are just the normal parentheses that surround an
operation value. Thus "( 1 ... 3 )" means "( ... ` <. 1 , 3 >. )". I could
define a new syntactical structure or pattern "[ A ... B ]" but then I
couldn't use all the equality, hb*, etc. theorems we have for operation
values. After basic set theory development, which is more or less finished,
I've been trying to introduce only new class constant symbols (with
exceptions for a few very common things like the unary minus "-u"; actually
that is the only exception so far). In addition to allowing us to reuse
general-purpose theorems, the soundness justification is trivial for a new
constant class symbol, which is what I like most about that approach. Also,
"( m ... n )" is really more of a discrete, unordered list than a continuous
closed interval. I will probably never be completely happy with "..." in
particular because it is nonstandard and unfamiliar, but on the other hand it
has turned out to be very useful for theorems involving finite sums. But I
didn't consider it so important that it justifies its own new syntactical
pattern. It is so rare in the literature (if it ever occurs) that I was
pleased to stumble across Gleason's partial version of the notion.
For the four real intervals (x,y), (x,y], [x,y), [x,y] I haven't decided what
to do yet. It would be preferable to have them be just operations in the form
of new class constant operation symbols, but I haven't thought of any good
notation to accommodate them in this form. We could have e.g. "(.,.]" or "(]"
so we'd have "( A (.,.] B )" or "( (.,.] ` <. A , B >. )" or "( A (] B )"
etc. but these are odd-looking. What I will end up doing is very open at this
point. Maybe it's time to start using words like "closed", "rclosed",
"lclosed", "open", etc. in some way? Right now we have only the two
workhorses "( F ` A )" and "( A F B )" for general function/operation values.
Analogously we have "A e. R" and "A R B" for predicates/binary relations.
They are the only general patterns the reader has to be familiar with for
virtually all new definitions. In theory these are all that we need, although
certain notations become very awkward (e.g. extending them to more arguments
via ordered pairs, and the real intervals you have brought up). Note that
right now, we are using the "workhorse" ( A F B ) for virtually all of the
new definitions of sums, sequences, shifts, sequential integer sets, etc. I
like it because there is only one underlying notation, i.e. operation value,
that you have to be aware of. But I think the present df-fsum stretches the
limit of what is practical and "user-friendly".

--------------------

(24-Oct-05) Today we introduce the superior limit limsup, which will be one
of our principal uses of the extended reals.

(22-Oct-05) It appears I misspoke yesterday when I said "The new syntax
allows LALR parsing," and I changed it to "The new syntax moves us closer to
LALR parsability." From Peter Backes:

   It makes it more LALR than before, but not completely. ;) What remains is
   1) set = set (trivial, since redundant) [i.e. weq vs. wceq]
   2) set e. set (trivial, since redundant) [i.e. wel vs. wcel]
   3) [ set / set ] (trivial, since redundant) [i.e. wsb vs. wsbc]
   4) { <. set , set >. | wff } (we already discussed it and agreed it was
      not easy to solve) [i.e. copab]
   5) { <. <. set , set >. , set >. | wff } (ditto) [i.e. copab2]

These are all easy to fix by brute force (eliminating weq, wel, and wsb, and
changing "{", "}" to "{.", "}." in copab and copab2) but I don't want to be
too hasty and am looking into whether there are "nicer" ways to do this
first.

(21-Oct-05) A big change (involving about 121 theorems) was put into the
database today: the indexed union (ciun, df-iun) and indexed intersection
symbols (ciin, df-iin) are now underlined to distinguish them from ordinary
union (cuni, df-uni) and intersection (cint, df-int). Although the old syntax
was unambiguous, it did not allow for LALR parsing of the syntax
constructions in set.mm, and the proof that it was unambiguous was tricky.
The new syntax moves us closer to LALR parsability. Hopefully it improves
readability somewhat as well by using a distinguished symbol. Thanks to Peter
Backes for suggesting this change. Originally I considered "..." under the
symbol to vaguely suggest "something goes here," i.e. the index range in the
2-dimensional notation, but in the end I picked the underline for its
simplicity (and Peter preferred it over the dots). Of course I am open to
suggestion and can still change it.
In the distant future, there may be 2-dimensional typesetting to display
Metamath notation (probably programmed by someone other than me), but for now
it is an interesting challenge to figure out the "most readable" 1-dimensional
representation of textbook notation, where linear symbol strings map 1-1 to
the ASCII database tokens. iuniin is the same as before but has an expanded
comment, and also illustrates the new notation.

(18-Oct-05) Today we show a shorter proof of the venerable theorem id.
Compare the previous version at http://de2.metamath.org/mpegif/id.html .
fzvalt is the same as before but has an expanded comment.

(15-Oct-05) Definition df-le was changed to include the extended reals, and
df-le -> xrlenltt -> lenltt connects the new version to the existing theorems
about 'less than or equal to' on standard reals.

(14-Oct-05) The set of extended reals RR*, which includes +oo and -oo, was
added, with new definitions df-xr, df-pinf, df-minf, and df-ltxr. The old <
symbol was changed to <_RR, the new df-ltxr symbol was called <, and the
ordering axioms were reproved with the new < symbol (and they remain the
same, since in RR, < and <_RR are the same by ltxrlt). This allows us to use
all remaining theorems about RR in the database unchanged, since they are all
restricted to elements of RR. The theorems proved today are the minimum
necessary to retrofit the database in this way. I was pleasantly surprised at
how easy it was to add in the extended reals. Unlike textbooks, which just
say +oo and -oo are "new" distinguished elements without saying what they
are, we must state concretely what they are in order to use them. So I picked
CC for +oo and { CC } for -oo. Many other choices are possible too. The
important thing is not what elements are chosen for them, but how the new <
ordering is defined. Unlike some analysis books, Gleason finds it unnecessary
to extend the arithmetic operations (only the ordering) for +oo and -oo, so I
will be avoiding that too unless a clear advantage becomes apparent. E.g.
some books define +oo + A = +oo, A / +oo = 0, etc. but for us that is now
undefined.

(6-Oct-05) modal-b is analogous to the Brouwer modal logic axiom if we map
"forall x" to box ("necessarily") and "exists x" to diamond ("possibly"). See
http://plato.stanford.edu/entries/logic-modal/ and also
http://www.cc.utah.edu/~nahaj/logic/structures/systems/s5.html In fact, our
axioms ax-4, ax-5, and ax-6 (plus ax-1/2/3, modus ponens, and generalization)
are *exactly* equivalent to modal logic S5 under this mapping! This was not
intended when I first came up with the axioms. Our axioms have a different
form because I arrived at them independently when I didn't know what modal
logic was, but they (or Scott Fenton's ax46/ax-5) are provably equivalent to
S5 and can be used (under the mapping) as alternate axioms for S5.
Conversely, all the theorems of S5 will automatically map to theorems of our
"pure" predicate calculus. Axiom ax-7 has no modal logic analog, since it has
two variables. However, if we restrict x and y to be distinct, it might be
possible to make an analogy between it and the Barcan Formula BF (see the
plato.stanford.edu page), particularly because the BF converse is also true
(http://plato.stanford.edu/entries/actualism/ltrueCBF.html) as it also is for
ax-7.
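To make the mapping concrete (my gloss; these are the axiom numbers set.mm
used at the time), read "A. x" as box and "E. x" as diamond:

   % ax-4:  A. x ph -> ph
   \Box\varphi \to \varphi                              \quad (\text{T})
   % ax-5:  A. x ( ph -> ps ) -> ( A. x ph -> A. x ps )
   \Box(\varphi\to\psi) \to (\Box\varphi\to\Box\psi)    \quad (\text{K})
   % ax-6:  -. A. x ph -> A. x -. A. x ph, i.e. with chi = -. ph:
   \Diamond\chi \to \Box\Diamond\chi                    \quad (\text{axiom 5})

and K + T + 5 is a standard axiomatization of S5.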
(4-Oct-05) ser1f0 - The difficulty of proving this "obvious" fact was
surprising.

(30-Sep-05) df-isum is a new definition for the value of an infinite sum that
will eventually replace df-sumOLD. Its advantage is that the sum can start at
any index N instead of the fixed index 1. isumvalt is the first use of the
new definition. The notation of df-isum perhaps isn't as nice as df-sumOLD,
but I don't know how else to do it since we are using a linear notation with
no subscripts. (The infinity superscript is not a separate symbol but part of
the fixed infinite summation symbol, represented as "sumoo" in the database.)

(27-Sep-05) The obsolete definitions df-seq0OLD and df-climOLDOLD, along with
all theorems dependent on them, have finally been purged from the set.mm
database, lightening it a bit. dfseq0 is nice. Perhaps I'll interchange
df-seq0 and dfseq0 some day.

(19-Sep-05) Scott Fenton found a shorter proof of ax46.

(18-Sep-05) It is becoming apparent that the recently introduced new version
of df-clim (now called df-climOLDOLD), although very useful because of its
arbitrary starting point, has some limitations: since there is no built-in
requirement that the limit be a complex number or that the sequence have
complex values, additional conditions would have to be stated for proving
convergence that will make a lot of theorems awkward. Therefore I changed the
definition to today's df-clim, renamed the old ~~> to ~~>OLDOLD, and we will
reprove most or all of the old theorems and then delete them. The new
definition df-clim is more complex but it should be worth it in the end.
(There is still df-climOLD, which is severely limited to sequences starting
at 1, and which must eventually be replaced with df-clim. This will be a
longer-term project, since df-clim directly or indirectly affects around 500
theorems. df-climOLDOLD, with its short-lived existence, only affects around
20 theorems.)

(14-Sep-05) elfz4b shows the converse of elfz4t also holds - a nice surprise.
Maybe I'll use it instead of elfz4t.

(11-Sep-05) Today we introduce a new definition, df-uz. The idiom "set of all
integers greater than or equal to M" keeps recurring, so I decided to add a
notation for it to shorten proofs, even though it is nonstandard. "ZZ
subscript >=" is a function that maps an integer to the set of all integers
greater than or equal to it. I think I chose a good notation for it that
should be easy to remember; if you don't think so let me know.

(8-Sep-05) fsumserz is an important theorem that expresses a finite sum as a
partial sum of a sequence builder. In fact, it shows that the finite sum
notation is redundant, although we'll keep it because it is slightly more
compact and seems like a more natural notation.

(7-Sep-05) A small change was made to df-fz to restrict its domain to
ZZ X. ZZ, requiring a new version of fzvalt. All other theorems remain
compatible, but the change allows us to state the useful elfz7t, where
(provided N is a set) we can deduce that M and N are in ZZ simply from the
fact that ( M ... N ) has a member. This will allow us to simplify proofs by
not requiring M e. ZZ and N e. ZZ as additional hypotheses. (The fact that N
must be a set is an artifact of our operation value definition. I'm currently
pondering changing the operation value definition so that N would not be
required to be a set in elfz7t, but that would be a huge change throughout
the db - perhaps in the long term future.) seq0seqz and seq1seqz are
yesterday's promised special cases of "seq".
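In symbols (my paraphrase of the point, not the literal set.mm statement):
with the domain restricted to ZZ X. ZZ,

   ( M ... N ) \;=\; \{\, k \in \mathbb{Z} \mid M \le k \le N \,\}

and an operation value is empty outside its domain, so (with the set-hood
caveat for N noted above) k e. ( M ... N ) already forces M e. ZZ and
N e. ZZ, which is the content of elfz7t.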
The symbol "seq" (df-seqz) has been (re)defined to be an infinite sequence builder with an arbitrary starting index, and we will show, today or tomorrow, that "seq0" and "seq1" are special cases of it. (26-Aug-05) I didn't like the notation for finite sums so I decided to do it all over again. Everything in the last few days has been renamed to *OLD. These will be reproved with the new notation and the *OLDs deleted. (Also, I extended the definition so the value is zero if the lower limit is greater than the upper limit, like some books do.) So, instead of the (to me) awkward "F Sigma " for --- N \ F_i / --- i = M we now can state this as "Sigma`<,F>", which seems more natural. By df-opr this is equivalent to " Sigma F", which results in shorter proofs, so that's what I'll use for most proofs. But that's just a technicality that has nothing to do with the definition; anyone can trivially reprove them with the "Sigma`<,F>" notation if they want. (20-Aug-05) Many new or revised definitions today: df-shft replaces df-shftOLD and df-shftOLDOLD - I extended it to all complex numbers, not just integers, for more flexible long-term use. df-clim replaces df-climOLD - Convergence is now independent of the domain (NN, NN0, ZZ) of the underlying sequence - much nicer! In fact the input function can be garbage at the beginning, as long as there exists an integer somewhere beyond which it behaves. df-seq0 replaces df-seq0OLD and df-seq0OLDOLD in order to use the new df-shft df-fsum is the definition of a finite series summation. I have mixed feelings about the notation (see fsumvalt comment), and comments are welcome. df-plf is the addition of two functions. I made it so it can apply to complex functions in general, not just sequences. For sequences, we'll restrict the function sum to NN, etc. to strip out meaningless values. df-muf multiplies a constant times a function, again for complex functions in general. Slowly the obsolete *OLD versions will be replaced and eventually deleted. Yesterday's shftcan1t, etc. are already obsolete! The lesson learned from the multiple versions of df-shft was that it seems more useful to keep the definitions simple and as general as possible. Individual theorems can impose the necessary restrictions as needed, rather than having the restrictions "hard-coded" into the definition. For example, df-clim is now dramatically more useful by not restricting the domain of the underlying sequence to NN. ------- I am thinking about a general 'seq' recursive sequence generator with an arbitrary starting point. The present 'seq' would be renamed to 'seq1'. What would be nice would be: ( + seq0 F ) = ( + ( seq ` 0 ) F ) ( + seq1 F ) = ( + ( seq ` 1 ) F ) etc. Unfortunately seq0 and seq1 are proper classes and can't be produced as function values, but restricting them to be sets would limit their usefulness. On the other hand, defining seq so that ( + seq0 F ) = ( < + , 0 > seq F ) or ( + seq0 F ) = ( + seq < F , 0 > ) or ( + seq0 F ) = ( < + , F > seq 0 ) etc. can be made to work without a restriction but none of the 12 possibilites seem natural to me. What do you think? (17-Aug-05) cvgratlem1,2,3 will replace the longer old versions (renamed *OLD) in a re-proof of cvgrat that I have planned. (5-Aug-05) The old definitions of the shift and seq0 operations have been SCRAPPED. They have been renamed df-shftOLD and df-seq0OLD (to be deleted eventually), and replaced by new ones df-shft and df-seq0. All of the old theorems are obsolete and have been renamed *OLD. 
(5-Aug-05) The old definitions of the shift and seq0 operations have been
SCRAPPED. They have been renamed df-shftOLD and df-seq0OLD (to be deleted
eventually), and replaced by new ones df-shft and df-seq0. All of the old
theorems are obsolete and have been renamed *OLD. The old symbols have been
changed to prevent accidental re-use. The new definitions will provide
simpler and more general theorems. For example, seq01 and seq0p1 are now the
exact analogs of seq1 and seqp1 - compare seq01OLD and seq0p1OLD, which
required an annoying functionality hypothesis.

(31-Jul-05) Per a suggestion from Scott Fenton, I renamed the following
theorems:

   Old      New
   ------   -------
   syl34d   imim12d
   syl4d    imim2d
   syl3d    imim1d
   syl34    imim12i
   syl4     imim2i
   syl3     imim1i
   syl2     imim2
   syl1     imim1

(27-Jul-05) I was finally able to find a shorter proof of uzind. Veteran
visitors to this site will recall the 3/4 megabyte proof described on
18-Jun-04 in mmnotes2004.txt, then called zind, and currently renamed to
uzindOLD.

(11-Jul-05) Back to the drawing board... I decided to change the binomial
coefficient df-bc so that it is now defined (as 0) outside of its "standard"
domain of 0 <_ k <_ n, as is often done in the literature. With the old
definition, I can now see that many proofs using it would have been very
awkward. Accordingly, several proofs were changed to accommodate the new
definition (not shown on the 'most recent' page - I usually do not re-date
modified proofs) and today's new ones were added.

(6-Jul-05) peano2re, although it is trivial and seems silly, shortens a dozen
proofs and reduces the net size of the set.mm database file.

(5-Jul-05) peano5nn is a simplification of the previous version. df-n was
also simplified.

(28-Jun-05) pm4.83 finally completes the entire collection of the 193
propositional calculus theorems in Principia Mathematica. This had been done
before for the Metamath Solitaire applet in
http://us2.metamath.org:8888/mmsolitaire/pmproofs.txt - but the set.mm proofs
are hierarchically structured to be short, indeed as short as I (or Roy
Longton for some of them) could find. An ordered index of these can be found
on the xref file http://us2.metamath.org:8888/mpegif/mmbiblio.html in the
[WhiteheadRussell] entries.

(26-Jun-05) Yesterday's reuunixfr probably ranks among the most cryptic in
the database. :) Today's reuunineg shows an application of it that is much
easier to understand, with most of the confusing hypotheses of reuunixfr
eliminated.

(21-Jun-05) rabxfr lets us conclude things like the following:
(z e. RR -> (z e. {x e. RR | x < 1} <-> -z e. {y e. RR | -y < 1})). The first
two hypotheses just specify that y mustn't be free in B and C (a less strict
requirement than distinct variable groups y,B and y,C). pm4.42 is Roy
Longton's first Metamath proof.

(20-Jun-05) shftnnfn and shftnnval show the example of NN shifted to NN0
described in df-shft. Hopefully these two theorems make it clear, in a simple
and intuitive way, what the 'shift' operation does.

(19-Jun-05) df-shft is a new definition; see its comment for an explanation
of the sequence shift operation. In general I dislike introducing a made-up
explicit notation for a concept that exists in the literature only implicitly
in informal proofs, and I try to avoid it when possible, because the notation
will be completely unfamiliar even to mathematicians. But in the case of
df-shft, after careful consideration I felt the benefits will outweigh this
disadvantage. Once the initial complexity is overcome with some basic lemmas,
it is a relatively simple concept to understand intuitively.

(18-Jun-05) We will start using j,k,m,n for integer set variables and J,K,M,N
for integer class variables. I hope this will improve readability a little.
Over time I will retrofit old theorems.
This will be a major change involving hundreds of theorems, so if you have
comments on this let me know. In the retrofitted proof of bcvalt, you can see
the effect of this change.

(17-Jun-05) imret provides us with a simpler way to define the imaginary
part, compared to df-im. I may do that eventually.

(11-Jun-05) I finally caved in and revised df-exp so that 0^0=1 (as can be
seen with expnn00) instead of being undefined. I have decided that otherwise,
some future things I have in mind are just going to be too awkward. Raph
Levien came up with the original df-exp, where 0^0=1. But he's a computer
scientist. From a more purist perspective, I felt it was an "inelegant
patch," as it has been called, and I changed his definition to exclude it.
For the past year we've trodded along merrily without it. But I'm starting to
see that 0^0=1 will lead to simpler proofs and statements of theorems in many
cases. So now we have 0^0=1 again. Gleason's book defines 0^0=1 and uses it
extensively, e.g. in the definition of the exponential function (where we
avoided it by breaking out the 0th term outside of the infinite sum). For a
further discussion of this see:

   http://www.faqs.org/faqs/sci-math-faq/specialnumbers/0to0/

Another reason I wanted to avoid defining 0^0 is that years ago on Usenet,
and probably still, there were endless arguments about it. I wanted to
distance Metamath from that. :)

(10-Jun-05) The choose function df-bc was added. The literature uses math
italic capital C - but that conflicts with our purple C for classes (when
printed on a black-and-white printer). So I decided to use a Roman C. bcvalt
is somewhat awkward to prove because of its "Pascal triangle" restricted
domain instead of the full NN0 X. NN0. Thus we have to use oprabvali instead
of the more efficient oprabval2.

(4-Jun-05) As far as I know, inf5 has never been published. I think it is
neat. pm2.13 seems like a rather silly variant of excluded middle (exmid).
What can I say - I'm just implementing what's in the book.

(2-Jun-05) efclt makes use of (in efcltlem1) the very important and useful
ratio test for convergence, cvgrat of 28-May-05, to show (in efcltlem3) the
convergence of the exponential function. This in turn lets us show that the
value of the exponential function is a complex number. This will open a lot
of doors with what we can do with the exponential function. Note that all of
the confusing (or at least unconventional) limit, seq, and infinite sum stuff
has disappeared, having served its purpose, and we're back into familiar
territory. Interestingly, the bounds we found earlier for Euler's constant e,
in ege2le3, didn't require all this work. That is because e is a special case
of the exponential function that is much easier to work with.
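For reference, the standard form of the test and why it applies here (my
wording; cvgrat's exact hypotheses in set.mm may differ in detail):

   % Ratio test: if |a_{k+1}| <= r |a_k| eventually, with 0 <= r < 1,
   % then \sum_k a_k converges. For the exponential series a_k = A^k / k!,
   \frac{|a_{k+1}|}{|a_k|} \;=\; \frac{|A|}{k+1} \;\longrightarrow\; 0,

so the ratio is eventually below any fixed r < 1 and the series converges.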
(27-May-05) sercj tells us that the complex conjugate of a partial sum of a
series is the partial sum of the complex conjugates of the underlying
sequence. We prove it by induction. Recall that (+ seq F)`A means

        A
       ---
       \
       /    F_i
       ---
      i = 1

Theorem minusex just says that the negative of any class whatsoever (even a
proper class) is a set. While this is not very meaningful when the argument
is not a complex number, it saves the effort of proving that the argument is
a complex number, making it trivial, for example, to eliminate the hypothesis
"A e. V" of yesterday's cvgcmp3cet.

(26-May-05) cvgcmp3cet is a pretty massive application of the Weak Deduction
Theorem http://us.metamath.org/mpegif/mmdeduction.html that converts 8
hypotheses into antecedents. A number of tricks were employed to make the
proof sizes manageable. I didn't bother with the final hypothesis, "A e. V",
because it's trivial to eliminate with vtoclg if needed (you don't need the
Weak Deduction Theorem for that) and in most cases A will exist anyway.

(25-May-05) The theorems expgt1t and oewordri have little to do with each
other. There is an isomorphism between finite ordinal exponentiation and
exponentiation of the natural number subset of reals, that could be exploited
in principle, but they are independently developed in our database. A common
root for both can be traced back to ordinal multiplication (which is a
starting point for the construction of the reals), but from that point they
diverge. And when ordinals get into the transfinite, the behavior of
exponentiation becomes bizarrely different, as we will see when (hopefully) I
eventually get to it.

(24-May-05) Two of the hypotheses of cvgcmp3ce, cvgcmp3ce.4 and cvgcmp3ce.7,
are theoretically unnecessary. However, eliminating them is tedious (it
involves dinkering around in the hidden regions of F and G prior to index B;
these regions were purposely left undefined to make the theorem more general)
and for most practical purposes unnecessary, so I decided to leave the
theorem "less than perfect," so to speak, at least for now. We could also,
with only a few more steps (by changing y to a dummy variable z and using
cbvexv and mpbi at the end) eliminate the requirement that x and y be
distinct variables. I may do this if it ever becomes useful to do so. In that
case, the distinct variable group "x,y,G" would split into "x,G" and "y,G".

The new rcla4 series swaps the antecedents. I think this makes their use more
natural in a lot of cases. However, I'm wondering if this was a mistake:
rcla4v was used in around 90 theorems, and it took several hours just to
convert a couple dozen of the easiest ones. In maybe 75% of those cases the
proof size was reduced, but in others it was increased, and the hoped for net
"payback" in terms of reduced database size hardly seems worth it, if there
will be a net reduction at all. The old rcla4* versions were renamed with an
"OLD" suffix, and I'll be eliminating them over time (on dreary days when I'm
feeling otherwise uninspired). By the way here is an informal breakdown of
the cryptic name "rcla42v":

   'r'  - uses restricted quantification (vs. "cla4*")
   'cl' - deals with substitution with a class variable
   'a4' - an analog to the specialization axiom ax-4 (and Axiom 4 in many
          systems of predicate calculus, which is stdpc4 in our database)
   '2'  - deals with two quantifiers
   'v'  - distinct variables eliminate the hypothesis that occurs in rcla4

(21-May-05) eqtr2t and eqtr3t were added because they shortened 16 proofs,
with the net effect of reducing the total database size.

(20-May-05) odi is essentially the same proof as the 2/3 smaller nndi for
natural numbers, except that it uses transfinite induction instead of finite
induction. So we have to prove not only the 0 and successor cases but also
the limit ordinal case. But the limit ordinal case was a monstrosity to
prove, taking up 2/3 of the proof from steps 59 through 257. Eventually I'll
shorten nndi as a special case of odi.
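(A reminder of the schema involved - my summary in standard notation.)
Transfinite induction needs three cases instead of finite induction's two,
and it is the third that blew up in odi: to conclude ph(x) for every ordinal
x, prove

   % zero, successor, and limit cases:
   \varphi(\emptyset), \qquad
   \varphi(y) \to \varphi(y \cup \{y\}), \qquad
   \text{for limit } x:\; (\forall y \in x\; \varphi(y)) \to \varphi(x)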
The "Distinctor Reduction Theorem" provides a way to trim off unnecessary antecedents of the form (not)(forall x)(x=y), called "distinctors," in a system of predicate calculus with no distinct variable restrictions at all (which makes automated proof verification trivial, like for propositional calculus). (That system is the same as ours minus ax-16 and ax-17. The paper shows that it is complete except for antecedents of the form (not)(forall x)(x=y). To translate its theorems to standard predicate calculus, these antecedents are discarded and replaced with restrictions of the form "x and y must be distinct variables.") We can also translate distinctors to distinct variable pairs in the logic itself (after ax-16 and ax-17 are added) by detaching them with dtru. The reverse can be done (distinct variable pairs to distinctors) by using dvelim. This comes in handy when a distinct variable restriction is unnecessary, e.g. x and y in ralcom2; we convert the distinct variable pair to a distinctor with dvelim then eliminate the distinctor with the algorithm of the Distinctor Reduction Theorem. (13-May-05) Thank goodness caucvg is out of the way... The lemmas just seemed to grow bigger and bigger as I scrambled to complete it and is quite a mess towards the end. When the proof author said "this direction is much harder" he/she is not joking. There is often much hidden detail you end up discovering, that isn't apparent at first, when you try to formalize a proof. (For example, the very first stumbling block was how to formalize "the set of numbers less than all values of F except for finitely many". Certainly "finitely" isn't to be taken literally, i.e. strictly less equinumerous than omega, unless we want an incredibly complex proof.) It looks like I should eventually introduce an abbreviation for Cauchy sequences, like I do for Hilbert space. Then these proofs can be redone with a somewhat simplified notation. (That's easy to do, once you have the proof.) (12-May-05) For the caucvg proof, I am formalizing the proof found at http://pirate.shu.edu/projects/reals/numseq/proofs/cauconv.html . I couldn't find this proof in a textbook (most of those proofs use "lim sup" instead). If someone has a textbook reference for this particular proof, it will be appreciated. cruOLD has been phased out. (11-May-05) cru generalizes the old version (now called cruOLD until it is phased out) to include the converse. (10-May-05) relimasn is a version of imasng that doesn't require that A be a set (in the case where R is a relation, which is most of the time). When A is not a set, the theorem isn't really meaningful - both sides of the equality become the empty set - but relimasn allows us to prove more general theorems overall. (9-May-05) This morning a correspondent wrote me: > Do you know of a rigorous formulation of Wang's single axiom schema for > first order identity theory? I saw one in Copi's 'Symbolic Logic [Fourth > Edition]' page 280, but I don't quite follow his notation nor do I see how > to precisely state the stipulations for the variables in a "phi and psi" > style axiom schema. And I didn't see it in your proof explorer as a theorem. I added sb10f and answered: I have the 5th edition, and I think you mean P6 of system RS_1 on p. 328. (The 5th ed. added a chapter on set theory, which moved all the later pages up, probably by 48 pages or so. Copi was noted for killing the used-book market by releasing new editions every few years.) In the way it is stated, this axiom apparently has 2 errors. 
First, there are no restrictions (that I could find) stated for Fx and Fy. Let us suppose Fz is x=z. Then Fx is x=x and Fy is x=y. Putting these into P6, we get:

   A. y (Fy <-> E. x (x=y /\ Fx))
   A. y (x=y <-> E. x (x=y /\ x=x))
   A. y (x=y <-> E. x (x=y))
   A. y (x=y <-> true)
   A. y (x=y)

The last line is false in a universe with 2 or more elements, so the system is inconsistent. The correction is to add a proviso that x must not occur free in Fy.

The second mistake is that there is no requirement (that I could find) that x and y must be distinct variables. But if they are not distinct, an inconsistent system results. With these corrections, the proofs of the usual axioms on p. 329 still go through.

The "A. y" prefix is redundant in P6, since it can be added by R2 (generalization). In a logic that allows an empty universe, the A. y would be needed, but on p. 319 it is stated that RS_1 is intended to be true in a nonempty universe (and the rest of the axioms won't work in an empty universe). So, it seems like P6 is an afterthought tacked onto the system. Even the notation Fx and Fy is different from the Q that represents the substitution instance of P in earlier axioms, e.g. P5, p. 294.

I added what I thought was a close approximation to P6 (without the redundant quantifier) here: http://us2.metamath.org:8888/mpegif/sb10f.html The hypothesis specifies that x must not occur free in phi, and x and y must be distinct, as must necessarily be the case. Three other variants that are similar to P6 are:

   http://us2.metamath.org:8888/mpegif/sb5.html
   http://us2.metamath.org:8888/mpegif/sb5rf.html
   http://us2.metamath.org:8888/mpegif/equsex.html ,

the last one implicitly substituting y for x in phi to result in psi.

By the way, even though we can express a logical equivalent to P6 in Metamath, this does not mean that it becomes the sole axiom replacing the other equality/substitution axioms. (It is possible that one or more of the others could become redundant, but I haven't thought about it too much.) The reason is that in RS_1, substitution is done at the metalogical level, outside of the primitive system. In Metamath, we do this "metalogic" at the primitive level of the system itself, and we use additional axioms involving equality to accomplish this. In many ways substitution and equality are closely related, and the standard formalization "hides" this by moving substitution outside of the axioms. You might want to re-read these pages, which explain this in more detail:

   http://us.metamath.org/mpegif/mmset.html#axiomnote
   http://us.metamath.org/mpegif/mmset.html#traditional

(8-May-05) While Euclid's classic proof that there are infinitely many primes is easy to understand intuitively, I found the proof used by infpnlem1 and infpnlem2 simpler to formalize. (Specifically, this proof avoids the product of a hypothetical finite set of all primes, which I found cumbersome to formalize.) Here is the proof: For any number n, the smallest divisor (greater than 1) of n!+1 is a prime greater than n. Hence there is no largest prime. Or, in explicit detail: Suppose there are a finite number of primes. Let p be the largest. Let q be the smallest divisor (greater than 1) of p!+1. (The set of divisors of p!+1 is nonempty, since p!+1 is one of them, so by the well-ordering principle there is a smallest, which we will call q.) Then none of 2,3,4,...,p divides q, since otherwise it would also divide p!+1, which is impossible. (2,3,4,...,p all divide p!, so none divides p!+1.)
And none of p+1,...,q-1 divides q, since otherwise it would also divide p!+1, and then q would not be the smallest divisor of p!+1. Therefore q is prime, and q > p, so p is not the largest prime.

(6-May-05) funcnvres2 is a tongue-twister, or perhaps a brain-twister... I reproved cvgcmp as a special case of cvgcmp2. However, I have (temporarily?) left in the original proof and called it cvgcmpALT, as I think it might be instructive. Comparing cvgcmpALT to cvgcmp2lem+cvgcmp2, i.e. 33 vs. 68+17=85 steps, you can see how much extra work was needed just to ignore up to the Bth term in cvgcmp2.

(5-May-05) cvgcmp2c is useful because it allows the test function to be much larger (via a large C) than the reference function, yet still converge.

(4-May-05) divclt, divrect, redivclt are slight modifications of their older versions, which have been renamed divcltOLD, divrectOLD, redivcltOLD and which will disappear when they are phased out over time.

(3-May-05) prodgt0t also works for A=0, not just A strictly greater than 0. This makes the theorem more general - something I like to do when I can - but requires more work. In prodgt0 (from which prodgt0t is derived with dedth2h) you can see the separate derivation that we need for the A=0 case.

(2-May-05) reccl* and rereccl* shorten many proofs (by replacing the explicit division closure that was used before) - e.g. I shortened 18 proofs with rereccl. So even though these are trivial they were worth adding.

(1-May-05) cvgcmp2 will be used to build a general-purpose comparison test for complex number sequences. cvgcmp2 tests for convergence of a nonnegative real infinite series (+ seq G) (which normally will be a series of absolute values), which is compared to a known convergent series (+ seq F). This version of cvgcmp allows us to ignore the initial segment up to the Bth term; this was a tricky thing to do. To achieve this I compare G to an auxiliary sequence H (see cvgcmp2lem) instead of F; H adds the supremum of the initial segment of G to F, so it is guaranteed to be bigger than G everywhere, including the initial segment. Originally I planned to use climshift of 24-Apr and sertrunc of 27-Apr to achieve this (the ignoring up to the Bth term); now it looks like they are no longer needed. Too bad; they were a lot of work. Perhaps I'll leave them in, in case a use for them shows up in the future. In cvgcmp2 we show the actual value it converges to (i.e. the sup) rather than just existence. This will allow us to use hypotheses instead of antecedents, which will make some proofs smaller. For our final theorem we will eliminate the hypotheses with the Weak Deduction Theorem dedth, then produce a simpler-to-state existence version.

(24-Apr-05) 2eu8 is a neat little discovery that I doubt has ever been published. It is fun seeing what you can do with the E! quantifier. Hardly anything about it exists in the literature, and apparently double E! has never even been published correctly; see 2eu5. Exercise: Can you change E!x E!y to E!y E!x in either side of 2eu8 by using 2eu7 as suggested? (Hint: use ancom.) Note that E!x E!y doesn't commute generally, unlike Ex Ey; probably not too many people know that. Another interesting thing about 2eu7 and 2eu8: x and y don't have to be distinct variables.

(23-Apr-05) climshift shows that we can ignore the initial segment of a sequence when computing the limit. This is intuitively obvious (since it's only what happens towards infinity that counts) but is a little tedious to prove.
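As a concrete footnote to the (24-Apr-05) remark that E!x E!y doesn't commute: a brute-force check over a small finite model makes this easy to see. The following Python sketch (the domain D and the relation phi are my own made-up example, not anything from set.mm) exhibits a phi where E!x E!y phi holds but E!y E!x phi fails:

    D = {0, 1, 2}
    phi = {(0, 0), (1, 1), (1, 2)}   # phi(x,y) holds iff (x,y) is in this set

    def eu(pred, dom):
        """E! over a finite domain: exactly one element satisfies pred."""
        return sum(1 for v in dom if pred(v)) == 1

    # E!x E!y phi(x,y): exactly one x for which exactly one y satisfies phi
    forward = eu(lambda x: eu(lambda y: (x, y) in phi, D), D)
    # E!y E!x phi(x,y): the quantifiers in the other order
    backward = eu(lambda y: eu(lambda x: (x, y) in phi, D), D)

    assert forward and not backward   # the two orders disagree

Interestingly, a quick enumeration of all 16 relations on a 2-element domain turns up no counterexample there; you need at least 3 elements, which may be one reason the non-commutativity is so easy to overlook.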
(22-Apr-05) It is curious that max1 doesn't require B to be a real number.

(21-Apr-05) In steps 1-19 of climre, you may wonder why we have extra steps using vtoclga to switch from variable x to variable w, when in fact variable x could have been used throughout up to step 19. The answer is that by using w's, we can reuse step 14 at step 43, without having to repeat its work. This is a little trick that shortens the compressed proof and the web page. (The uncompressed proof, on the other hand, is lengthened because it does not reuse previous steps, but normally set.mm is stored with compressed proofs.)

(20-Apr-05) For serabsdif, note that (+ seq F)`n - (+ seq F)`m is the sum from m+1 to n of the terms of F, i.e. F`(m+1) + F`(m+2) + ... + F`n. So even though our notation for series (+ seq F) is limited for notational simplicity to start at the fixed lower index of 1, we can represent a more general lower limit using this method. (A more general notation for series may be introduced in the future.)

(19-Apr-05) We're going to be using (+ seq F) a lot, so here's a refresher, since this notation is not standard. We are using a special case of our general-purpose "seq" operation, and there seems to be no standard notation for (+ seq F) in the literature other than the informal "a_1 + a_2 + ... + a_n" which is not suitable for formal math. (Gleason uses "F-" in front of an infinite sum to indicate the partial sum function underlying the infinite sum, but it is not standard.) If you are following these theorems it might be useful to keep the following note in mind. It is straightforward if you understand the correspondence to the conventional notation. (+ seq F) is the sequence of partial summations in an infinite series. E.g. for a sequence of squares:

   argument   sequence   partial sum of series
      n          F`n         (+ seq F)`n
      1           1               1
      2           4               5
      3           9              14
      4          16              30
     ...

Of course this series diverges, so the infinite sum doesn't exist, but all partial summations exist as shown above. Today's theorem serft expresses a very obvious fact: if F is a function from the positive integers to the complex numbers, then so is (+ seq F). Another example:

      n          F`n         (+ seq F)`n
      1          0.5             0.5
      2          0.25            0.75
      3          0.125           0.875
      4          0.0625          0.9375
     ...

This series converges to 1, so the infinite sum sigma-1-oo`F exists and equals 1 (see theorem geosum1). By "converges" we mean the limit as n->oo of (+ seq F) is 1; we express this as (+ seq F) ~~> 1 (see theorem geolim).

(17-Apr-05) ege2le3 proves with absolute rigor that scientific calculators display the value of e correctly to at least 1 decimal place. :) This is a nice "sanity check" of all the abstract stuff that finally collapses to this result.

(13-Apr-05) It surprised me that it took so much work to prove that e is a real number.

(5-Apr-05) equid2 shows how you'd prove x=x in a traditional system of predicate calculus which has a9e as an axiom. This proof is almost identical to Tarski's. Still, I think the no-dummy-variable equid proof is neat and unexpected.

(1-Apr-05) The Usenet announcement of Poisson d'Avril's theorem is here: http://groups-beta.google.com/group/sci.logic/browse_frm/thread/7aa9265da2819705/ee8862dd6adb3fad#ee8862dd6adb3fad

(26-Mar-05) geosum1, which you may be familiar with from high-school algebra, is the culmination of our initial development of infinite series. As you can see, it is much harder than the high-school version if you want to prove it with absolute rigor! geosum0 shows the sum from 0 to infinity instead of 1 to infinity.
It provides a good illustration of why I chose to have our "fixed limit" infinite series operator start at 1 instead of 0, so that we just add the 0 case outside of the summation when we need a lower limit of 0. If we started at 0, we would have to worry about the meaning of 0^0 in this case. A long time ago, when I introduced the ~~> symbol (mainly for use with Hilbert space), I initially defined it for functions starting at 0 instead of 1. But I ran into so many special cases (divide-by-0 and so on) that eventually I redefined it to start at 1. This greatly simplified many proofs. Who knows, this may be the real reason that analysts start with 1 for the natural numbers (unlike set theorists, who start at 0). In fact Schechter's _Handbook of Analysis_ doesn't even have a separate symbol for the set of nonnegative integers; instead he uses "NN (natural numbers) union {0}" in the one case I could find where he needed them.

(16-Mar-05) In sumvalt and sumval2t, the function Sigma_1^oo is one symbol (object), not three symbols, even though it is shown as three as a mnemonic device. To make the proofs and notation simpler, the lower limit is fixed at 1, and even with this constraint it should be useful in a wide variety of applications. I chose 1 instead of 0 as the lower limit because the 0th term often involves messy special cases (e.g. to avoid divide-by-0), and it's easy enough to add the 0th term outside the summation when we want. In addition, our limit relation ~~> is defined for sequences that start at 1, making it compatible. (Later we may introduce a more general summation with variable upper and lower limits, but that wouldn't buy us a whole lot now and would just make the notation complex.) The Sigma_1^oo function has a single argument, which is an infinite sequence that starts at 1 (i.e. is a function from natural numbers to complex numbers). The value of Sigma_1^oo is the infinite sum of all the numbers in the sequence, if that sum exists. I thought the conventional summation sign would be a neat reminder that this is what the function does, even though it is used slightly differently from the textbook summation (e.g. there are no dummy index variables; these are implicit in the sequence that it takes for an argument). The Sigma_1^oo object is, on the one hand, just a set like any other set; more specifically it is a set of ordered pairs that gives us a function of one variable. But if you think about it, it implicitly contains an amazingly complex "machine" inside. It can take in any infinite sequence whatsoever and produce the infinite sum, if the sum exists. If you peel away the layers of its definition, you'll find df-rdg, our recursive function generator on ordinals.

(14-Mar-05) For sersuc, recall the notes of 7-Mar-05 below. sersuc tells us the value of the next partial sum in an infinite series. Yesterday's ser1 tells us the value of the first partial sum, i.e. F ` 1 .

(9-Mar-05) ltsor arises as part of our complex number construction; it is needed for the proof of a couple of our complex number axioms. ltso, on the other hand, is a result derived from the complex number axioms after those axioms have been proved. I got tired of having to clean up accidental uses of ltsor in place of ltso (which destroys the portability of our complex number construction), so I decided to prevent it once and for all by adding the artificial right conjunct. Clever, eh?
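Here is a minimal Python sketch of the partial-sum device, reproducing the two tables in the 19-Apr-05 entry above (the function names are mine; the formal (+ seq F) is of course defined by recursion via df-rdg and the 'seq' operation explained in the 7-Mar-05 entry below, not by a loop):

    def partial_sums(F):
        """A plain-Python reading of ( + seq F ): its value at n is the
        n-th partial sum F(1) + F(2) + ... + F(n)."""
        def S(n):
            return sum(F(k) for k in range(1, n + 1))
        return S

    squares = partial_sums(lambda n: n * n)
    assert [squares(n) for n in range(1, 5)] == [1, 5, 14, 30]   # first table

    geo = partial_sums(lambda n: 0.5 ** n)
    assert abs(geo(50) - 1.0) < 1e-12   # second table: (+ seq F) ~~> 1

The values agree with the tables, even though the formal object gets there by an entirely different (and much more powerful) route.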
(7-Mar-05) This is what the 'seq' operation in sercl is all about:

   ( + seq F ) ` 1 = F ` 1
   ( + seq F ) ` 2 = F ` 1 + F ` 2
   ( + seq F ) ` 3 = F ` 1 + F ` 2 + F ` 3
   etc.

In other words we are using the powerful 'seq' recursion operation to create the terms of an infinite series from an arbitrary infinite sequence F. We will be encountering this device frequently as we get into infinite series. (With 'times' instead of '+', 'seq' will give us infinite products.) Earlier we used 'seq' to define exponentiation df-exp and factorial df-fac.

By the way, note that in theorem sercl, the class A is normally dependent on x. In other words, A should be thought of as A(x). The way you can tell is that A and x are not both in the same distinct variable group, yet A is in the scope of x (i.e. inside the braces in the first hypothesis and in the scope of the quantifier in the second hypothesis). THIS IS AN IMPORTANT PRINCIPLE THAT CONFUSES SOME PEOPLE. We also know that A is _not_ dependent on y because A and y must be distinct. (On the other hand, the fact that B and x must be distinct is an irrelevant artifact resulting from a sloppy proof. I may improve the proof at some point to get rid of that unnecessary restriction, unless it lengthens the proof too much. Sometimes I leave such artifacts in, especially in lemmas used once, if it results in a shorter proof, as it often does.)

As a background project I'm slowly filling in the remaining propositional calculus theorems from Principia Mathematica. I figure we might as well have the complete collection eventually.

(5-Mar-05) By using brrelexi and the new climrel in the proof of clim, clim's old "F in V" hypothesis was eliminated. The new version of clim also allowed us to prove new versions of the other clim* theorems with this hypothesis eliminated. Overall the database size decreased since theorems referencing clim* no longer have to prove "F in V".

(4-Mar-05) oancom requires the Axiom of Infinity for its proof. Virtually all other results on ordinal arithmetic, remarkably, do not require the Axiom of Infinity (which starts being used with theorems inf4 and after; inf4 currently has the colored number 3567).

(1-Mar-05) alephval3 - I finally figured out how to prove this convoluted, self-referencing definition that at least two authors prefer. Even though in some sense it may be "intuitive," it wasn't an easy thing to prove formally.

(20-Feb-05) Well, I proved that ax0re is redundant as a complex number axiom, the first redundancy that anyone has found in 8 years. 0re replaces the old ax0re, and ax0re was removed from http://us2.metamath.org:8888/mpegif/mmcomplex.html . 0cn is a completely new proof of the old 0cn that doesn't depend on the old ax0re, and it provides the key to eventually proving 0re. The table row on the mmcomplex.html page, after the grayed-out ax0re axiom label, shows the connections between the important theorems for the proof.

(18-Feb-05) dfsb2 was one of the two choices I was considering a long time ago when I finally chose df-sb somewhat arbitrarily. I decided to prove dfsb2 as a theorem, well, just to have it proved. Although the proof isn't long, it's actually somewhat involved all the way back to axioms. Axiom ax-11 seems essential: ax-11 -> equs2 -> equs5 -> sb4 -> dfsb2. By the way, it is an open problem whether ax-11 is redundant. I can prove from the others all cases of ax-11 where phi is replaced by any expression without a wff variable, e.g. (x=z -> -.w=y).
dvelimf2 was, I felt, a major step towards helping prove the redundancy of ax-11, but I haven't made any progress since.

(17-Feb-05) I was somewhat surprised hbequid could be proved without ax-9. (I found the proof while toying with the open problem of item #16 at http://us2.metamath.org:8888/award2003.html . That problem is still unsolved, though.)

(16-Feb-05) dfrdg2 (which uses yesterday's eqif) allows us to introduce a new definition df-rdg of the rec operation. The new df-rdg now fits on one line if you put your browser in full screen mode! I rewrote the df-rdg comment slightly. df-rdg is still almost "impossible" to understand, but I think the use of the "if" operation improves its understandability a little.

I cleaned up the list of traditional predicate calculus with equality axioms at http://us2.metamath.org:8888/mpegif/mmset.html#traditional and added stdpc6 to match Mendelson's system exactly. What is very strange is why stdpc6 is quantified. It is possible this was done to make it sound in empty domains, for compatibility with the system on p. 148, but I couldn't find any indication of this. Does anyone know? If you have the 3rd or earlier (pre-1997) edition of Mendelson's book, could you check for me whether Axiom 6 on p. 95 (or the equivalent page) is quantified? I renamed sbequ2a from yesterday to become stdpc7 and enhanced its description.

(15-Feb-05) eqif shows an example of what elimif is intended for. stdpc7 is a faithful emulation of the 5th traditional axiom at http://us2.metamath.org:8888/mpegif/mmset.html#traditional and replaces sbequ2, which wasn't quite right for that purpose.

(13-Feb-05) 2eu6 is nice in that it, unlike 2eu4, only has one mention of phi on the right-hand side. This can be useful when phi is a long expression, so we don't have to write it twice. It also makes less urgent the need for a rarely-used and nonstandard new definition for double existential uniqueness, such as E!!xy or something, which is nonexistent in the literature (which incorrectly uses E!xE!y; see 2eu5). As you can see, 2eu6 was somewhat tedious to prove.

(9-Feb-05) axaddopr and axmulopr can't be proved directly from the axioms for complex numbers shown at http://us.metamath.org/mpegif/mmcomplex.html , so I had to dig back into their construction. Brings back memories - I haven't looked at it in years. The construction was not easy, with very tedious equivalence relations (look at oprec and ecoprass for example) to jump from one level to the next - positive integers, positive rationals, positive reals (using Dedekind cuts), signed reals, and finally complex numbers. Nonetheless it appeared to be the simplest of several methods I looked at. The final axioms are cleanly broken out, so a different construction could be "plugged in" trivially for anyone who prefers a different approach (and is willing to go through the tedious construction). axaddopr and axmulopr will be used for some infinite sequence stuff because they will make some things slightly simpler. However, I decided not to replace the "official" axioms at http://us.metamath.org/mpegif/mmcomplex.html with these because I think the official ones look nicer, and in principle they are sufficient.

(8-Feb-05) In flval2t, "U{x in ZZ|...}" can be read: "The unique integer x such that..." (provided there is exactly one such x, which in this case there is).

(4-Feb-05) The two variations ordsssuc and ordsssuc2 show that either A or B can be a general Ord class. However, it fails if both of them are allowed to be arbitrary Ord classes.
In other words, at least one of them must exist as a set, i.e. be a member of On. Of course both may be sets, as shown by the weaker onsssuc. ordunisuc2 is one of those neat counterintuitive results - you wonder what one side of the equivalence could possibly have to do with the other...

(3-Feb-05) An error made in some published math books for the layman is the statement that there are aleph-one reals (or irrationals). In fact, there are 2^aleph-zero reals, which is equal to aleph-one only when we assume the Continuum Hypothesis. Without CH we can only state that there are at least aleph-one reals (or irrationals), but there could be more. To prove aleph1irr we invoke the somewhat exotic infdif.

(31-Jan-05) I realized that onpwsuc could be generalized to all ordinal classes, not just ordinal numbers, so the result is ordpwsuc. I also simplified the proof of onpwsuc so as to derive it trivially from ordpwsuc. By the way, I mentioned onpwsuc on Usenet; search for "onpwsuc" in groups.google.com.

(20-Jan-05) alephfp is one of several theorems where we show an explicit expression, vs. textbooks, which typically show only existence. To me, an actual example is a lot more satisfying. Two other examples in set.mm are trcl and cardaleph. The rec function is one of the main tools that lets us construct explicit expressions.

(19-Jan-05) oev2 is the simplest expression for the "traditional" definition I could find, using only the rec function and set theory primitives (without the "if" function of oev). The left-hand side of the intersection is the "natural" definition that arises from transfinite recursion, and the right-hand side suppresses 0^(limit ordinal) to be 0 instead of 1, while keeping 0^0=1.

(12-Jan-05) Yesterday I suggested proofs from biluk and mpbi as an exercise. I looked them up in the _Polish Logic_ book and they are not obvious at all. pm4.2 can be proved in 13 steps, but the others are much longer. So if you're interested, get the book to avoid a lot of frustration.

(11-Jan-05) The proof of biass was reduced to 35 steps (with the help of or42 and xor). biluk is an easy consequence of biass. Note that biass, bicom, and pm4.2 can be derived from biluk and mpbi and nothing else, although we won't be doing it. (Exercise for the reader...) or42 is the 'or' version of an42, which "rotates" the right-most 3 conjuncts. an42 and an42s are used surprisingly often (for such an apparently obscure operation). Other analogues are add42 and caopr42.

(7-Jan-05) tfinds3 can shorten certain proofs that used to use tfinds by 8 steps, which is the number of steps in the proof of tfinds3 from tfinds. By applying it to the 5 theorems in its referenced-by list, I reduced the net size of the set.mm database file (vs. before I added tfinds3). In some cases I shortened them more: compare (until the mirror site gets refreshed) http://us2.metamath.org:8888/mpegif/oacl.html (29 steps) vs. http://us.metamath.org/mpegif/oacl.html (41 steps)

(5-Jan-05) Again, compare oesuc to omsuc to see the additional complexity caused by the awkward traditional definition. oelim (compared to omlim) isn't too bad, though.

(4-Jan-05) According to oe0m1, zero to the omega power is zero, not one. This is the requirement that complicates df-oexp.

(1-Jan-05) oev is our first theorem involving ordinal exponentiation. It uses the somewhat awkward traditional definition. Compare it to omv, for example; an alternate, but nontraditional, definition for exponentiation would be exactly as simple.
I decided on the current definition after a discussion here: http://groups-beta.google.com/group/sci.logic/browse_frm/thread/fac0ce315e8ea855 However, it will make a number of proofs, such as this one, more complex.

(30-Dec-04) I think iunss2 is rather nice (even though its proof is very short), and I might be able to use it to shorten oaass.

(25-Dec-04) oaass is a huge proof! This has been on my to-do list for a long time, so getting it done is a relief. Before we only had nnaass, which is limited to natural numbers and which I did a long time ago for the real/complex number construction. To build the oaass proof I basically copied nnaass, then added the induction hypothesis for limit ordinals (step 164). But proving this hypothesis is 2/3 of the final proof, or steps 44-164! (And this doesn't count oalimcl, which is really a lemma for oaass with no other use I'm aware of.) For the other steps, you may want to compare oaass to nnaass - they are identical except for closure laws. (Eventually I'll use oaass to shorten nnaass.) By the way, to understand why nnaass was needed for our complex number construction, you can follow its path as it propagates forward through the stages of construction of complex numbers (and the associative law for complex number addition in particular), which you can see in the "referenced by" lists:

   nnaass -> addasspi -> addasspq -> addasspr -> addasssr -> axaddass

BTW Merry Christmas!

(23-Dec-04) sumdmd is Holland's theorem (finally).

(21-Dec-04) sbequ5 is just a cleanup of an earlier version. By the way, the theorem eq5 used in its proof was the first "significant" theorem (so I felt) that I proved from the unusual predicate calculus subsystem ax-4 through ax-15, and it was the initial clue towards eventually proving completeness for my "Finitely Axiomatized..." paper. I remember feeling very excited that it held, some dozen years ago. I still think its proof is strange and unintuitive. Axiom ax-11 is not used for its proof and was added later. An open problem is whether ax-11 is redundant; it can be proved as a "metamatheorem" (outside of Metamath) from the others, but I've never been able to prove it directly from the others.

The Hilbert space stuff is more progress towards Holland's theorem.

(20-Dec-04) Q is strictly less equinumerous than R as promised. (Now, that wasn't hard, was it?)

(19-Dec-04) qbtwnre, that Q is dense in R, is an important result about real numbers. This is one of those counterintuitive results about infinite sets: Q is strictly less equinumerous than R (I guess I should add a theorem for that - yes, I'll do it for tomorrow), but on the other hand, in between any two reals, no matter how close they are, you'll find a rational! How can that be? Think about it. This used to bother me when I was younger, and I suppose it still gives me an uneasy feeling now and then. :)

(17-Dec-04) I finally received from the library Holland's 1969 paper (listed in the Hilbert Space Explorer references), which turns out to have a marvelous proof of subspace closure of the sums of dual modular pairs. I've been looking for such a proof for a long time. Maeda's version that I tried to use has a gap I haven't been able to figure out, but Maeda died a couple of years ago, so I can't ask him to clarify it (and no one else seems to know). The Hilbert space results for the last couple of days are some basic facts we'll eventually draw on for Holland's proof.

(15-Dec-04) Some simple but frequently-used theorems that let me shorten a bunch of proofs.
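A side note on the (19-Dec-04) density entry above: the constructive idea behind finding a rational between two reals is short enough to sketch in Python. This is my own illustration using floats and math.floor; the formal qbtwnre proof is exact, with none of the floating-point caveats:

    import math

    def rational_between(x, y):
        """Return (m, n) with x < m/n < y, via the standard density argument."""
        assert x < y
        n = math.floor(1 / (y - x)) + 1   # n > 1/(y-x), hence 1/n < y - x
        m = math.floor(n * x) + 1         # m - 1 <= n*x < m, hence x < m/n
        return m, n                       # and m/n <= x + 1/n < x + (y-x) = y

    m, n = rational_between(math.pi, 3.15)
    assert math.pi < m / n < 3.15
    print(m, n)   # prints: 374 119

The archimedean principle supplies the n, and the well ordering of bounded integer sets supplies the m - exactly the machinery mentioned in the (9-Dec-04) floor-function entry below.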
(14-Dec-04) The proof of Prop. 8.8 in Takeuti&Zaring p. 59 has a typo: the index on the big union should be "delta < gamma", not "delta < beta". (This brings the total number of typos I've found in the book to about 30. In spite of this, it is by far the best book for the technical details of many proofs that other books gloss over. Too bad it is out of print.)

I put a lot of work into oawordeulem because I was afraid it was going to be huge. The end result is that it is actually 1/3 smaller than the finite version that it replaces, and I'm kind of proud of it. (The finite version was nnordexlem, which I deleted but you can still find on some of the mirrors.) The intersection of the temporary class S corresponds to gamma in the T&Z proof.

(13-Dec-04) These variations of modus ponens and syllogism are used to shorten a number of proofs and will be handy to have available in the future.

(12-Dec-04) In our database, ordinal arithmetic is not well developed. The history is that I did just enough to construct real numbers, and some operations (e.g. the associative law nnaass) are proved for natural numbers only. Over time I'll be extending these to the transfinite. I'm discovering that some textbook proofs have subtle mistakes (or to be generous, they gloss over details) that only become apparent when you try to work them out, making for a somewhat frustrating task. Perhaps I'll mention some of them here as they come up. Today's oawordri is the commuted version of oaword of 8-Dec. Unlike the natural number version it must be proved separately, since ordinal addition ceases to be commutative in the transfinite. Note that the commuted version of oaord of 7-Dec does not hold in the transfinite. Also, the converse direction of oawordri does not hold in the transfinite.

(11-Dec-04) In ordunidif, note that we're not removing a single element but an entire subset of elements up to that element. (Otherwise it would be A\{B}, not A\B.) For example: if A is 5={0,1,2,3,4} and B is 4={0,1,2,3}, then A\B is {4}. Then the union of A is 4, and the union of A\B is also 4. Notice that for finite (and other successor) ordinals, the union operation subtracts 1.

(9-Dec-04) All the previous work leading to rebtwnz, involving heavy use of the archimedean principle and the well ordering of bounded integer sets, finally pays off by letting us show the properties of the floor function in flleltt. It is interesting that so much work is needed for such a "trivial" thing as the floor function.

(7-Dec-04) I have been wanting to prove oaordi for some time, but the limit case of the transfinite induction (steps 32-51) kept proving elusive. Finally I realized ssiun2s was the trick I needed, and we don't even need the available "for all y less than x" antecedent (added with a1d at step 51, and that proved to be a red herring in earlier proof attempts).

(3-Dec-04) Look at how easy the definition of factorial df-fac becomes using our new seq operation. I'm very pleased. (See http://us2.metamath.org:8888/mpegif/mmset.html#function for why the notation is (!`n) instead of n!.)

(2-Dec-04) I am torn whether to add a definition for the class of all cardinal numbers. Well, it actually has been added - df-cardn - but I consider it provisional and so far I've avoided using it. With the definition, ondomcard could be simplified to {x e. On|...} e. Card. But df-cardn saves only a few symbols, and already we have so many definitions that it's hard to remember them all.
I've never seen both df-card and df-cardn in the same textbook, although textbooks have the advantage of informal English like "is a cardinal number".

shsumval3 is nice, compared to the "official" subspace sum value, shsumvalt. I'm tempted to redefine df-shsum accordingly, although the current definition does capture the "intent" of subspace sum even though it's more complicated.

(1-Dec-04) zmax shows an application of reuxfr that we proved a few days ago. Note the use of reuhyp to eliminate one of reuxfr's hypotheses. I added a new "feature" for those people who check the most recent proofs page every day. You can go directly to the new theorems via http://us2.metamath.org:8888/mpegif/mmrecent.html#table without having to scroll down.

(30-Nov-04) It is surprisingly difficult to generalize the integer B in uzwo2 to the real B in uzwo3, when on the surface this seems so obvious. Integers and reals don't mix naturally.

(19-Nov-04) euxfr, even though it is simple-looking, is a culmination of a lot of results about existential uniqueness. As you backtrack through its proof you will find mopick, moexex, 2moswap, 2euswap, and euxfr2, all of which I find interesting as somewhat non-obvious, but none of which appear in the literature to my knowledge. But euxfr is often used in informal proofs implicitly; for example: if we know that there is exactly one x such that f(x)=4, then we can conclude there is exactly one y such that f(3y+5)=4. And vice versa. The formalization of this argument is euxfr. Who would have guessed that something so "obvious" is apparently so hard to prove? (I don't know how hard it would be to prove euxfr directly, though.)

(17-Nov-04) Takeuti/Zaring have a rather long proof of ac6s2 that I wasn't looking forward to formalizing. Then it occurred to me that I already have ac6s, which is proved rather simply from the AC variation ac6 plus the Boundedness Axiom bnd2. The final proof of ac6s2 is very simple.

(11-Nov-04) uzwo generalizes nnwo, whose proof has been shortened to become a special case of uzwo. Interesting tidbit: I did a quick count, and there are now 5177 theorems (including Hilbert space) in set.mm, and 1480 of them reference 969 bibliographic entries (theorems, exercises, etc.) on the mmbiblio.html Bibliographic Cross Reference page. (This is more bibliographic entries than I would have guessed offhand.)

(10-Nov-04) mapunen, with its long, tortuous proof, completes (whew) the "basic" equinumerosity theorems needed for the 3 operations (addition, multiplication, exponentiation) of cardinal arithmetic.

(5-Nov-04) syli is a little "discovery" that shortens 14 proofs, more than paying for itself by reducing the total size of the database.

(4-Nov-04) I think Takeuti/Zaring's definition of omega, dfom2, is rather strange and unintuitive. But this proof confirms it is correct.

(2-Nov-04) For infcntss, Takeuti/Zaring write after their Exercise 8 for this: "Hint: Use AC". Our antecedent expresses "is infinite" in a different way that allows us to avoid AC. (We also don't need the Axiom of Infinity, because the antecedent is false if omega doesn't exist. This is a fortuitous quirk of our definition of dominance, and a different definition of dominance might require Infinity for this theorem.)

(1-Nov-04) qexpclt nicely illustrates the use of our general-purpose exponentiation closure lemma expcllem.
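The idea behind expcllem is nothing more than induction on the exponent using closure under multiplication. A minimal Python sketch of that induction (the function name and the Fraction example are mine, standing in for the rational case qexpclt):

    from fractions import Fraction

    def exp_closed(a, n, is_member):
        """a^n stays in a set closed under multiplication by a (induction on n)."""
        acc = a
        assert is_member(acc)        # base case: a itself is in the set
        for _ in range(n - 1):
            acc = acc * a            # induction step: closure under x.
            assert is_member(acc)
        return acc

    # the rationals are closed under multiplication, so a^n is rational
    q = exp_closed(Fraction(2, 3), 5, lambda v: isinstance(v, Fraction))
    assert q == Fraction(32, 243)

The same skeleton works for the reals, integers, etc., which is exactly why one general-purpose lemma pays off.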
(29-Oct-04) nominpos was written specifically to refute an incorrect statement on Usenet: http://groups.google.com/groups?selm=XY-dnRTa4ISmyhzcRVn-vw%40rcn.net

A somewhat significant change was put into the site today. The old object "ded", used for the weak deduction theorem, has been renamed to "if", and it is now called the "conditional operator" instead of the "deduction class". This is more in keeping with its true function, especially since it is being used more and more for non-deduction-theorem things. df-ded was renamed to df-if, along with some other renaming. The Deduction Theorem page was revised to reflect this change. You may want to look at the quick example at the beginning of the Deduction Theorem page mmdeduction.html to see if it makes more sense now.

(25-Oct-04) I forgot to mention that on 19-Oct I added onpwsuc specifically to answer this Usenet question: http://groups.google.com/groups?threadm=Ts6dnc3uTY-bLejcRVn-qQ%40rcn.net

(23-Oct-04) infdif was quite frustrating, particularly because Enderton mentions it in passing almost as an obvious, trivial fact. His one-line proof gives no hint of the difficulties lurking underneath. I could see no shorter way to prove it.

(22-Oct-04) The new, elegant df-exp, which makes use of our new infinite sequence builder df-seq, now completely replaces the old awkward one. Decisions, decisions. I don't like that the new df-exp requires the artificial complex number axiom axmulex. I might redefine it so that in place of "( x. seq ( NN X. { x } ) )" it would use "( ( x. |` ( CC X. CC ) ) seq ( NN X. { x } ) )", so that the sethood of multiplication "x." becomes irrelevant. But that complexifies df-exp, and I'm also thinking about future applications of df-seq. So I'm not sure what I'll do.

(20-Oct-04) By using the Axiom of Choice, qnnen has the shortest proof I'm aware of. The standard proof uses somewhat complex (from Metamath's point of view) algebraic manipulations. Of course most authors want to avoid AC when possible, but I thought it interesting to show how it can be shortened with AC.

(18-Oct-04) I finally completed what turned out to be a bigger project than I expected, which is to prove that the reals are uncountable. Today's proof ruc is the culmination of several days' work. It actually involves 39 lemmas, ruclem1 through ruclem39. (I artificially backdated the lemmas ruclem1 through ruclem39 so they won't clutter up the "most recent proofs" page.) See us2.metamath.org:8888/mpegif/mmcomplex.html#uncountable for a detailed description.

(17-Oct-04) I think df-seq is going to be extremely useful in the future. For a long time I've been struggling with a good way to represent finite and infinite sequences, series, products, etc., which are universally prevalent in the study of limits, calculus, and pretty much all of higher analysis. Each one seemed to require its own messy development - see our ugly df-exp for example, and that's just the tip of the iceberg. Well, df-seq seems to be the answer! For example, df-exp can be alternately defined so its value becomes (in set.mm notation):

   ( A ^ B ) = ( ( x. seq ( NN X. { A } ) ) ` B )

where NN X. { A } is the constant function with value A. The seq operation produces the sequence A, A x. A, (A x. A) x. A, ... that we evaluate at B. How much simpler could it get? Eventually I'll redefine df-exp accordingly. df-seq will even be useful for showing that real numbers can be represented by an infinite decimal expansion (a mundane thing in everyday mathematics but quite difficult formally).
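For anyone who wants to see the recursion spelled out, here is a rough Python rendering of the alternate df-exp just described (the names seq and exp are mine; the real df-seq is built from transfinite recursion, not a loop):

    import operator

    def seq(op, F):
        """(op seq F)`n: fold op over F(1), ..., F(n); cf. seq1 and seqsuc."""
        def S(n):
            acc = F(1)
            for k in range(2, n + 1):
                acc = op(acc, F(k))
            return acc
        return S

    # the alternate df-exp: A^B evaluates ( x. seq ( NN X. { A } ) ) at B,
    # where NN X. { A } is just the constant sequence with value A
    def exp(A, B):
        return seq(operator.mul, lambda n: A)(B)

    assert exp(2, 10) == 1024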
So expect to see a lot more of the seq object! Fortunately you don't have to be concerned with df-seq itself (with its horrendously complex definition) but only with its consequences seq1 and seqsuc, which should be enough to handle most work with sequences and series. In fact the only purpose of df-seq is to provide us with an object that has those simple properties. I defined seq to start at 1, not 0, to make it compatible with our df-clim. A long time ago I initially defined df-clim to start at 0 but ran into messy problems avoiding divide-by-zero when actually computing limits, and 1 makes certain things a lot simpler. This is possibly one reason most analysts have the natural numbers start at 1. Anyway, if the need for starting at 0 or some other integer arises, I can redefine a more general df-seq, but for now I wanted to keep it as simple as possible.

Internally, df-seq uses an ordered pair to increment a counter that is used to look up the value of the input sequence, so in a way it is almost like it has a little "computer program" inside. Unlike a computer program, though, the "counter" counts to infinity! The incredible power of this concept almost boggles the mind.

By the way, with regard to "computer program like" devices, it is becoming clear that the df-ded object, now used almost exclusively for the weak deduction theorem, is in fact a far more applicable kind of general-purpose "if statement". Raph Levien pointed this out to me, and I've already used it as such in unxpdomlem. ded(phi,A,B) means "if phi is true, return A, otherwise return B" and is almost exactly analogous to the C-language conditional operator "condition ? A : B". It can be useful for defining functions such as

          / 1 / x, if x =/= 0
   f(x) = <
          \ 0, otherwise

and if you look at Takeuti/Zaring's proof of unxpdomlem you'll see that this is exactly what I am doing, with nested ded's to represent 3 possible alternatives. Maybe I should rename "ded" to be "sel" for selector, or even "if". What do you think? The "seq", "ded", and "rec" objects in set.mm apparently do not appear in the literature, so there is no standard notation for them, which I think is unfortunate.

(12-Oct-04) Prof. Nambiar communicated his paper to me earlier this year. gch-kn shows that the Generalized Continuum Hypothesis is equivalent to a generalized version of his Axiom of Combinatorial Sets, from which the result in his paper falls out as the special case of A=0.

(10-Oct-04) ssenen doesn't seem to appear in any book but is apparently assumed implicitly in the proof of Lemma 6.2 of Jech's _Set Theory_ p. 43 (otherwise I can't see how that proof could work). At first I thought the proof of ssenen would be trivial - it seems intuitive enough on the surface. But I soon discovered it's a rather difficult thing to prove, and I hope you like the proof I came up with.

(9-Oct-04) The theorem cdaval has a comment explaining how we do cardinal arithmetic, which you may find confusing. The only new operation we actually introduce is +c (disjoint union), and we re-use existing set operations for the others. Even +c is not really an operation from cardinal numbers to cardinal numbers, but just a general set operation.
Here is the mapping:

   +c                  is used to represent cardinal number addition
   cross product       is used to represent cardinal number multiplication
   set exponentiation  is used to represent cardinal number exponentiation
   equinumerosity      is used to represent cardinal number equality

With this mapping, we can easily obtain the "real" cardinal arithmetic operations, in the sense of operations that take actual cardinal numbers and produce new actual cardinal numbers. To get the actual cardinal number (which is the smallest ordinal equinumerous to it) corresponding to any set, we use the "card" function. A cardinal number is its own cardinal number, as shown by cardid and cardcard. A cardinal number is nothing more than a measure of the "size" of a set. For finite sets, it is easy to see how the set operations correspond. For example, the cross product of a set with 3 elements and a set with 5 elements will result in a set with 15 ordered pairs, corresponding to 3 x 5 = 15. Our approach is exactly the approach used by Mendelson, and it saves us having to have a whole bunch of new operations and theorems on them that would essentially just be window dressing disguising what we already have.

(5-Oct-04) infxpidm is a deep and important theorem that is typically one of the most difficult in elementary set theory textbooks, and some books don't even get that far (or omit the proof). The only other formal proof of this I'm aware of is on the Isabelle prover. About half of the paper "Mechanizing set theory: cardinal arithmetic and the axiom of choice" http://www.cl.cam.ac.uk/users/lcp/papers/Sets/AC.pdf is devoted to the Isabelle proof of infxpidm. Of course that was in 1996, so Metamath is about 8 years behind the times. :)

(2-Oct-04) Some important milestones today! In our development there are two completely different sets, omega (a subset of the ordinal numbers) and N (a subset of the complex numbers), that are both called "natural numbers," undoubtedly causing a great deal of confusion. The former are the natural numbers of set theorists, the latter the natural numbers of analysts, and both sets satisfy Peano's axioms. nnenom relates these two sets, which live in completely different worlds, by showing that they are equinumerous. xpomen is a milestone theorem showing the cross product of omega with itself is countable, and it will be used as a key stepping stone for the multiplication of infinite cardinal numbers. xpomen is derived directly from xpnnen with nnenom. xpnnen in turn is essentially dependent on Raph Levien's nn0opth. It is pleasant to see that everything finally fits together, with one thing building on another.

(1-Oct-04) Right now, the proof of sucdom uses some heavy-duty AC (entri2) and Infinity (omex, nnsdom) stuff for such a trivial theorem (one that is casually used, without mention of how it's proved, in textbooks). I could not for the life of me figure out how to prove it without AC and Infinity. Does anyone know?

(30-Sep-04) As unrelated as they may seem, these are all preliminaries towards getting into transfinite cardinal arithmetic. dedex was a surprise: I expected a somewhat involved proof, and behold! it is just a special case of keepel.

(29-Sep-04) An interesting curiosity is that xp2cda is an actual equality, not just an equinumerosity relation. I added a note to yesterday's cdaval, explaining our approach to cardinal arithmetic.
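For finite sets the mapping above is easy to check mechanically. A small Python sketch (the tagging trick in disjoint_union is my own stand-in for the formal +c of df-cda, and the variable names are mine):

    from itertools import product

    A, B = {0, 1, 2}, {"x", "y"}

    disjoint_union = {(a, 0) for a in A} | {(b, 1) for b in B}  # the +c idea
    cross_product  = set(product(A, B))
    maps_B_to_A    = list(product(sorted(A), repeat=len(B)))    # functions B -> A

    assert len(disjoint_union) == len(A) + len(B)   # cardinal sum:     3 + 2 = 5
    assert len(cross_product)  == len(A) * len(B)   # cardinal product: 3 * 2 = 6
    assert len(maps_B_to_A)    == len(A) ** len(B)  # cardinal power:   3 ^ 2 = 9

The tagging in disjoint_union is the whole point of "disjoint" union: without it, shared elements of A and B would collapse and the count would come out short.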
(28-Sep-04) Today, with cdacomen, we take the first baby step in what promises to be a fascinating journey into the deeply profound world of cardinal arithmetic, where we will uncover the mysterious properties of the objects in "Cantor's paradise" (transfinite cardinal numbers). The new definition added is df-cda. Some books define cardinal arithmetic operations explicitly, whereas other books use cardinality and equinumerosity applied to ordinary set operations. We will use Mendelson's hybrid approach, where cardinal sum is defined as a kind of "disjoint union" operation, and where cross product and set exponentiation serve the roles of product and powers. So, equinumerosity will mean "equals", and dominance will mean "less than or equal to".

(27-Sep-04) I changed the turnstile |- color from green to gray. The color had no purpose other than to make it stand out, and with so many other colors now (especially with the new rainbow-colored little numbers) it seems like a gratuitous and pointless use of color. In the old days I suppose it added a splash of color to an otherwise dreary page, but now it just makes the page busy. In addition, it is inconsistent, since color otherwise distinguishes variables from constants (and |- is a constant). I hope the gray is a sufficient distinction to impress on the reader that it is a meta symbol that somehow transcends the "normal" math. The indentation indicator next to it is also gray, but I don't think it will cause confusion.

It seems another proof system called "DC Proof" is comparing itself to Metamath. I suppose I should take that as flattery. :) http://groups.google.com/groups?selm=OQh5d.2688%24KF.21330%40tor-nn1.netcom.ca

(24-Sep-04) Our proof of undom is much simpler (when formalized) than Mendelson's, who defines a complicated one-to-one function from the left-hand to the right-hand side of the dominance relation in the conclusion. Basically we exploit unen, domen, and ssdom2g, avoiding any use of functions at all.

(17-Sep-04) dmsnsnsn allows us to complete the "theory" of domains of iterated singletons of the empty set 0. Thus:

   dom 0 = 0
   dom {0} = 0
   dom {{0}} = 0
   dom {{{0}}} = {0}
   dom {{{{0}}}} = {{0}}
   dom {{{{{0}}}}} = {{{0}}}
   etc.

The first 3 are dm0, dmsn0, dmsnsn0. By the way, this has absolutely no practical value whatsoever. :)

Historical trivia: The singleton {0} reminded me of the theorem pwpw0, which is considered too obvious even to mention as a separate theorem in most books. Suppes leaves its proof as an exercise. Even when a book gives a "proof" it goes something like "{0} has 2 possible subsets, 0 and {0}, so its power set is {0,{0}}." In the early days of Metamath I was completely baffled how to formalize this and spent several frustrating days with this theorem. I finally stumbled upon exintr, which I don't think is in any book, as the key to its proof. As a kind of cynical joke, in its description I said (as many books do) that we "compute" its power set, although the proof seems to shed little light on what the algorithm might be. :)

(16-Sep-04) The key theorem for qaddclt is divadddivt at step 26, which allows us to prove that the sum of two ratios of integers is a ratio of integers. The other stuff is the messy overhead, made more so by having to avoid dividing by zero and having to work with N, Z, Q, and C in the same proof.

(14-Sep-04) I like abrexex2. Simple to state yet intrinsically quite deep and powerful.
The Axiom of Replacement plays an important role in its proof, and this theorem might be equivalent to it (although I'm not sure). Note that it is essentially derived from iunex, which in turn is derived from abrexex and the Axiom of Union. On the other hand, abrexex can be easily recovered from abrexex2 by replacing phi with y=B, since the 2nd hypothesis {y|y=B} always exists (it is either a singleton, or the empty set when B is a proper class). Wow! I wonder if this has ever been published.

(13-Sep-04) I think uni0b would make a nice textbook exercise. By the way, sssn can be used to show that A in uni0b is either the empty set or the singleton of the empty set. sssn is another nice little theorem that seems obvious but whose proof is not quite as simple as one might think.

(12-Sep-04) zltle1 is one of those infuriatingly long proofs of something seemingly extremely trivial. This is in spite of the fact we already have nnlelt1 (with an insanely long proof that has so far defied attempts to shorten it), on which zltle1 is ultimately based.

elab2 and elab2g replace the venerable vtoclab and vtoclabg, generalizing them so as not to require that x and B be distinct variables. This is important in cases where B is a class variable in a hypothesis, usually as part of an intermediate lemma, that will eventually be eliminated with cleqid. By removing this restriction I was able to shorten a couple of additional proofs that previously used elab.

(7-Sep-04) Our proof of Abian's "A most fundamental fixed point theorem" (abianfp) is, as far as I know, the first credible validation of this interesting and not-so-trivial theorem. In his last years, Abian espoused some rather unconventional theories on Usenet, causing him to be considered a crank and not taken seriously. The following Usenet post appears to be the original one announcing this theorem: http://groups.google.com/groups?selm=68v5ij%24bar%241%40news.iastate.edu&output=gplain Abian claimed for this theorem, "The fundamental significance of the Theorem lies in the fact that a great many fixed point theorems can be reduced to the special cases of the Theorem." After some effort I located a post where Abian seems to have proved Tarski's classical Fixed Point Theorem as a corollary to this one: http://groups.google.com/groups?selm=6972ve%24t96%241%40news.iastate.edu&output=gplain This seems correct (although I need to work out the details to be completely convinced). If so, I think this is significant, but apparently no one on Usenet took it seriously or indicated they understood it. iunconst, fnresdm, and abianfplem are prerequisites for abianfp.

(6-Sep-04) tfinds2 should produce shorter proofs than tfinds for some problems (analogous to finds vs. finds2). I derived it from tfindes to show it can be done (i.e. going from explicit back to implicit substitution), but it required the new (and somewhat unusual) theorems sbralie (what a brain-teaser), sbcco2 (allows us to obtain [ suc x / x ] phi where suc x and x share a free variable), and sbcie added today. An interesting exercise, and the technique should be useful in the future.

(2-Sep-04) dedlem1 and dedlem2 are not new but revisions of older versions. The "ded" operation can be useful for more than just the weak deduction theorem - as these lemmas show, it acts as a "selector" from two classes depending on the truth or falsity of a wff.

(31-Aug-04) undm may be the shortest proof in the database.
("V \ A" means the "complement of A", and most authors introduce a symbol abbreviating this expression. We need it so rarely that I decided not to define it separately, at least not yet.) Mr. O'Cat provides us with a shorter proof for ja. difun was used to shorten the proof of dif23 of Aug 27. (30-Aug-04) Yesterday's cfub is used to simplify the proof of cf0. (29-Aug-04) A neat thing about snsspr is that neither A nor B have to be sets (which we accomplish with the use of snprc in its proof). I was actually surprised that I didn't have this in the database, and was able to shorten a couple of proofs with it (see the "referenced by" list). cfub is (for me) an important breakthrough that will let us finally get some cofinality stuff proved efficiently. I haven't seen this in a textbook. (28-Aug-04) fconstfv is tricky - the cases of A = empty and A = non-empty must be proved separately. The reason is the non-emptiness requirement of r19.9rzv in step 23. This is one of those little surprises you don't think of until you actually try to prove it. But it's neat the theorem still holds when A is empty. It's also neat the theorem still works when B is a proper class, and it is a pleasant coincidence that the theorem still works when generalized to these cases. By contrast, fconst2 (without its hypothesis) fails when B is a proper class. Specifically, if F is empty and A is non-empty, the left-hand side will be false but the right-hand side will be true. See snprc for what happens to a singleton of a proper class, and see f00 for what happens when a mapping has an empty codomain. (26-Aug-04) In dminss, R"A is the image of R under A - it "maps" a subset of the domain to a subset of the range. Taking the converse image of this, we go back and get something at least as big as the original subset of the domain. Example: consider the constant function; the converse image of any (non-empty) subset of the range (i.e. the whole of the range, which is a singleton) will be the entire domain. And this of course is at least as big as the original subset of the domain. Suppes calls this "somewhat surprising" but doesn't say why. (21-Aug-04) I saw iuniin for the first time on the web page http://en.wikipedia.org/wiki/Union_(set_theory) (go to bottom of the page) and thought it was neat, so here is the proof. Does anyone know what book it comes from? (20-Aug-04) A long time ago when I was first learning logic, I was so enamored with dfor2 that I wrote it on an index card for my "curious math facts" collection. I can't remember if I read it somewhere or stumbled across it on my own. At one point I thought looinv was intuitionistic and sent Mr. O'Cat on a wild goose chase trying to prove it from ax-1 and ax-2. It turns out it's as nonintuitionistic as it could possibly be, being equivalent in fact to Peirce's axiom (theorem peirce). (17-Aug-04) ac6s had me stumped for a long time. It seemed reasonable to me that there was no reason to require that B be a set in ac6, but proving it was a different matter. It now seems we need the very deep and strong Boundedness Axiom (whose proof is the culmination of our development of rank and the cumulative hierarchy of sets). The proof of ac6s is a nice example of an application of the Boundedness Axiom. 
(16-Aug-04) Using a technique inspired by Mel O'Cat, which is just deleting the whole proof then letting the metamath program try to prove it using "improve all/depth n" for n=1,2,3,4 in succession, I found shorter proofs for the following 43 theorems:

   pm2.21i pm2.21d pm2.18 pm2.65 orc jctil jctir imp42 imp44 imp45 adantl
   adantr adantld anidm anidms ancom imdistanri abai anabs1 anabs7 anabsi5
   anabsi8 anabss1 anabss3 anabss4 anabss5 anandis anandirs ibi bilimd iba
   ibar 3impdi 3impdir pm5.1 baibr biantr niabn ninba ax6 19.33 euorv exmoeu2

I tried this on the first 984 theorems in the database (everything up to set theory). ("improve all" was not designed to find proofs, but just to fill out syntax constructions.) Out of these, "improve all" was able to find proofs for 237 of them! 134 of these were identical to the existing proofs found by hand, 60 were longer, and the 43 above were shorter.

(16-Aug-04) zorn2 is a simpler version of Zorn's Lemma than zorn, so I made it the "official" Zorn's lemma on the Theorem Sampler of the Metamath Proof Explorer Home Page.

(15-Aug-04) qlax* are the new theorems that directly correspond to the Quantum Logic Explorer axioms. Most of today's non-Hilbert-space theorems are just slight modifications of older ones for better overall consistency. Sometimes the older ones are still there if you click the "Next" link, until I slowly weed them out of the theorems that use them. If you have any comments on the new "Theorem list" links, let me know. For example, is the upper right corner the best place to put it? Do you find it useful?

(14-Aug-04) Mr. O'Cat comes through for us again with loolin.

Well, well, it looks like _both_ directions of reluni hold! Neat. Only the reverse direction is given in Takeuti/Zaring's exercise (and in set.mm since 1994, in the older version of reluni, which was based on the exercise). I wonder if they knew that both directions hold.

(13-Aug-04) fh1 is (one-half of) the very famous Foulis-Holland theorem of orthomodular lattices, which was proved and published independently by Foulis and Holland at almost exactly the same time. I'm not sure who really came first, but both are always credited. This proof is identical in structure to the Quantum Logic Explorer version http://us2.metamath.org:8888/qlegif/fh1.html which you may wish to compare it to, and in fact I "borrowed" it from there. You can see why the Quantum Logic Explorer is simpler to work with for these kinds of things: we don't need "member of CH" hypotheses, and we don't have to keep proving operation closure over and over. I put fh1 in the Hilbert Space Explorer also because we will later need it for deeper Hilbert space things that the Quantum Logic Explorer axioms can't do (e.g. involving quantification).

A bit of trivia regarding this theorem: I used it to win a $2.00 bet with physicist John Baez, who was unaware of it. See http://groups.google.com/groups?selm=MaVe6.4799%24JG.614310%40news.shore.net

(12-Aug-04) Compare the proofs of un00 and chj00 - they're very similar!

(8-Aug-04) The last half of my talk at the Argonne workshop http://www-unix.mcs.anl.gov/~mccune/award-2004/ is related to theorem mdsym (of 2-Aug-04). (As you can see, work on this site continued from my hotel room thanks to Linux and ssh... :)

(4-Aug-04) Most of today's theorems prove the "theorem" form of the result rather than the "inference" form, and most steps do nothing more than manipulate antecedents with propositional calculus.
(16-Aug-04) zorn2 is a simpler version of Zorn's Lemma than zorn, so I made it the "official" Zorn's lemma on the Theorem Sampler of the Metamath Proof Explorer Home Page.

(15-Aug-04) qlax* are the new theorems that directly correspond to the Quantum Logic Explorer axioms. Most of today's non-Hilbert-space theorems are just slight modifications to older ones for better overall consistency. Sometimes the older ones are still there if you click the "Next" link, until I slowly weed them out of the theorems that use them. If you have any comments on the new "Theorem list" links let me know. For example, is the upper right corner the best place to put it? Do you find it useful?

(14-Aug-04) Mr. O'Cat comes through for us again with loolin. Well, well, it looks like _both_ directions of reluni hold! Neat. Only the reverse direction is given in Takeuti/Zaring's exercise (and in set.mm since 1994, in the older version of reluni, which was based on the exercise). I wonder if they knew that both directions hold.

(13-Aug-04) fh1 is (one-half of) the very famous Foulis-Holland theorem of orthomodular lattices, which was proved and published independently by Foulis and Holland at almost exactly the same time. I'm not sure who really came first but both are always credited. This proof is identical in structure to the Quantum Logic Explorer version http://us2.metamath.org:8888/qlegif/fh1.html which you may wish to compare it to, and in fact I "borrowed" it from there. You can see why the Quantum Logic Explorer is simpler to work with for these kinds of things: we don't need "member of CH" hypotheses, and we don't have to keep proving operation closure over and over. I put fh1 in the Hilbert Space Explorer also because we will later need it for deeper Hilbert space things that the Quantum Logic Explorer axioms can't do (e.g. involving quantification). A bit of trivia regarding this theorem: I used this theorem to win a $2.00 bet with physicist John Baez, who was unaware of it. See http://groups.google.com/groups?selm=MaVe6.4799%24JG.614310%40news.shore.net

(12-Aug-04) Compare the proofs of un00 and chj00 - they're very similar!

(8-Aug-04) The last half of my talk at the Argonne workshop http://www-unix.mcs.anl.gov/~mccune/award-2004/ is related to theorem mdsym (of 2-Aug-04). (As you can see, work on this site continued from my hotel room thanks to Linux and ssh...:)

(4-Aug-04) Most of today's theorems prove the "theorem" form of the result rather than the "inference" form of the result, and most steps do nothing more than manipulate antecedents with propositional calculus. Their proof displays give us excellent examples of the usefulness of the little colored numbers. Look, for example, at div23t. At first it seems to be a formidable, hopelessly complicated mess. But if you ignore the steps with red/orange numbers, there are only 5 steps left that have blue numbers. Now, if you look at only those steps and ignore the rest, you can pretty much see "in your head" how to connect the equalities to obtain an informal proof. Doesn't it make more sense this way?

The other way to prove the "theorem" form is to use the weak deduction theorem dedth and its variants. But as the number of antecedents grows, so does the size of the proof using the weak deduction theorem. Depending on the proof, sometimes you'll get a shorter proof proving it directly, like with today's. Sometimes I would prove it both ways and pick whichever way was shortest (in terms of size of the compressed proof in set.mm).

(3-Aug-04) expcllem shortens the proof of reexpclt (closure of natural number exponentiation of reals) and will later be reused for integers, rationals, etc.

(2-Aug-04) mdsym (M-symmetry in Hilbert space) is the final goal of the recent Hilbert space work. I will be giving a talk on this remarkable theorem Aug. 5-7 at the Automated Reasoning workshop http://www.mcs.anl.gov/~mccune/award-2004/ and I can now claim that I understand its proof in complete detail.

(1-Aug-04) halfnz is a curiously long proof for such a simple fact. If anyone sees a way to shorten it let me know. On the other hand it may illustrate the implicit complexity underlying even trivial arithmetic facts (even if we start with the axioms for arithmetic).

(31-Jul-04) 2cn is a long-overdue theorem that shortens the proof of the venerable 2p2e4 (2+2=4) as well as a couple of dozen other theorems.

(28-Jul-04) oncardid and oncardon are used to reprove cardnn so that it doesn't use the Axiom of Choice. I didn't like that overkill for something as basic as cardnn. I also reproved cardom so it also doesn't need the Axiom of Choice.

(26-Jul-04) dom2d will help us prove some equinumerosity dominance theorems more easily, i.e. without directly exhibiting the 1-1 function required by brdom.

(25-Jul-04) nthruz is the 4000th theorem added to the non-Hilbert-space part of set.mm. (Hilbert space adds another 600 or so.) I'll have to update the summaries that say the Metamath Proof Explorer has "over 3000 theorems" to say "over 4000 theorems", once the 4001st is added. The date stamp was added to the metamath program on 5-Aug-93, which is the earliest date in set.mm, at which point there were 450 theorems covering propositional calculus, predicate calculus, and the beginnings of set theory (subsets, union, intersection, abstraction classes, unordered and ordered pairs).

(24-Jul-04) nn0enom provides an important link between two very different theories - complex numbers (nonnegative integer subset) and ordinal numbers (finite subset). With it, we can exploit our many complex number theorems to get some equinumerosity results that will ultimately be used for cardinal arithmetic. I guess it's kind of indirect and ugly but it will save a lot of work. (And formally it's absolutely rigorous of course.) Actually we'll need the reals anyway to study the continuum so maybe it's not that ugly.

(23-Jul-04) bnd2, like bnd, is one of those "obvious" things that is actually quite deep. The proof (if you trace it back) ties together key results about ranks, the Axiom of Regularity, and the Axiom of Infinity.
It was quite a journey to get atom1d - whereas Beltrametti/Cassinelli merely say "The one-dimensional subspaces of H are obviously the atoms of C(H)." I guess this is the difference between formal and informal math.

(22-Jul-04) ranksn, rankuni, rankuniss are some interesting little facts about rank. Most textbooks seem to have them, so why not. (But we don't have a use for them yet.)

(21-Jul-04) I am torn about the notation "Mod" for modular pair. Maeda uses (A,B)M for our A Mod B, but a plain "M" just seems too unspecific in a general set theory environment. On the other hand "Mod" suggests arithmetic modulo, but I'll use "mod" (lower case) for that when we start to need it. I still may change A Mod B to something else after I think about it - it is nonstandard, and I dislike nonstandard notation in set.mm unless it is unavoidable.

(18-Jul-04 - 19-Jul-04) isfinite2 does not require the Axiom of Infinity for its proof, whereas the stronger isfinite does require it.

(14-Jul-04) unbnn will eventually give us a proof that a set strictly dominated by omega is finite, without invoking the Axiom of Infinity. (This is the hard part of the proof.) See Suppes Th. 42 p. 151 for more information. unbnn is also a useful general result in its own right; for example it will become useful for proving certain cofinality results.

(12-Jul-04 - 13-Jul-04) Cardinal exponentiation, when we define it in the future, will be the cardinality of set exponentiation. mapen and xpmapen provide two fundamental theorems used to establish properties of cardinal exponentiation. (xpmapen was tough to formalize - ugh - I wonder if there is an easier way to do it.)

(11-Jul-04) canth3 gives us an unlimited supply of ever-bigger infinite cardinal numbers.

(10-Jul-04) cardnn gives us a very convenient property of finite cardinal numbers: the natural numbers and the finite cardinal numbers are the same thing! The alternate definition of cardinals given by karden (using the Axiom of Regularity instead of the Axiom of Choice) does not have this property, which is one of its disadvantages. I once worked out the first few finite cardinals from karden (it is an exercise in Enderton), and they become very messy and complicated very quickly.

(9-Jul-04) sucprcreg is probably not useful for anything, but it was fun. (We already have sucprc that sometimes helps prove more "general" theorems where A does not have to be a set. And I like to avoid theorems requiring the Axiom of Regularity whenever possible.)

(8-Jul-04) aleph1 is an important theorem piecing together some recent results (alephnbtwn2, canth2, pw2en). The cardinality of the reals is 2^aleph0 (which hopefully we will prove here someday), and in the absence of the Continuum Hypothesis this theorem is the best we can do - the reals are at least as numerous as aleph-one, but possibly more so.

(7-Jul-04) I had been wanting to prove spansncv for a long time but was unable to (it is an exercise in Kalmbach with no answer given). Prof. Eric Schechter (http://www.math.vanderbilt.edu/~schectex/) finally provided me with the missing piece, which is the theorem spansnj.

(6-Jul-04) The proof of pw2en seems much longer than it should be. I'll have to revisit it someday.

(1-Jul-04) ssext is interesting because A and B may be proper classes, yet we need to compare only their (non-proper-class) subsets to determine whether they are equal. nssinpss is one of those neat little "it ought to be a theorem or example or exercise in a book" things, but apparently it isn't.
(Any of our theorems that omits a bibliographical reference doesn't appear in any book that I'm aware of. If you find one that does, let me know!)

(30-Jun-04) elsuc vs. sucel - We have many theorems expanding membership of a class in a defined object, such as elsuc, but hardly any expanding membership of a defined object in a class. I'm not sure why that is, but the need just doesn't seem to arise that often. Today I added two, 0el and sucel, and used them to shorten a couple of existing proofs. These actually seem to be the only such theorems in the database - I looked through set.mm but couldn't find any others of the form "(defined object) in (class variable) <-> ..." whose right-hand side didn't reference the defined object. On the other hand there are probably over 100 theorems showing membership _in_ defined objects, like elsuc. Strange. By the way there is no "membership in" version of 0el in our database; rather, we use the simpler noel.

(29-Jun-04) I used to be in awe of the fact that Cantor's theorem fails in NF. With ncanth it now seems kind of obvious why it would fail, because V is a set in NF. (The actual proof in NF is somewhat different since the axioms are different, but the idea is the same.) Anyway ncanth takes away some of the mystery, which is good, but unfortunately it also means I'm not as awed as I used to be. I guess a little mystery can sometimes make things more appealing.

(28-Jun-04) xpsnen shows another use of our friend en2 of 25-Jun. The proof makes essential use of the interesting op1st that extracts the first member of an ordered pair.

eqssd was used to shorten 11 proofs that previously used eqss, jca, and sylibrd. The number of bytes trimmed off of their compressed proofs exceeded the number of bytes in eqssd, its comment, and its proof, so the database size SHRUNK when eqssd was ADDED! I guess that's what you'd call a positive ROI. :)

spanun and spansn are two important theorems of Hilbert space. spanun allows us to work in set theory to take the union of two sets of vectors, then take the span to get their subspace sum. This can be a very powerful simplification over working directly with subspace sum. spansn equates two very different ways of expressing a one-dimensional subspace. (Familiar example: if the Hilbert space is the ordinary 3-dimensional RxRxR of real numbers, a one-dimensional subspace is a line that goes through the origin. Any non-zero vector A on the line essentially specifies the line. To get the line we take the span of the singleton of A. Alternately, we take the orthocomplement of the singleton of A, which becomes the plane (through the origin) perpendicular to A. Then the second orthocomplement recovers the line from that plane. By the way there are 4 kinds of objects in the Hilbert lattice of R^3: the point at the origin, lines, planes, and the entirety of R^3. These are such different things, like apples and oranges, that it seems counterintuitive that they could all be "like" each other in the sense of being the elements of a lattice. But it is often useful to think of the R^3 example when trying to understand Hilbert lattice theorems intuitively.)

(27-Jun-04) Normally rank is defined in terms of R1 (the cumulative hierarchy of sets) - see df-rank. Monk takes the opposite approach - he defines rank first, then defines R1. Here we prove, with r1val2, Monk's definition as a theorem of our approach.
R1 and rank are in a sense equivalent concepts, although very different in their approach and uses, analogous to say the time domain vs. frequency domain representations of a signal in Fourier analysis: either representation can be recovered from the other. And each representation can have advantages over the other depending on the problem.

dfom2 is a traditional way of defining the set omega of natural numbers (as a subset of the ordinal numbers, not the natural number subset of the reals) that is used in many textbooks. However, dfom2 requires the Axiom of Infinity in order to be valid, whereas our df-om does not. (If you look under the proof of dfom2, you'll see that ax-inf is used, and in fact it is needed to ensure that at least one set x in the class abstraction exists.) This means that in these books, all theorems about natural numbers presuppose the Axiom of Infinity. The reason they do this is that it makes their development easier. Our philosophy is to avoid the Axiom of Infinity when it is not necessary, so our development starting with df-om is quite a bit more complex than in these books. (Takeuti/Zaring's book takes our approach. However they take a shortcut by using the Axiom of Regularity, which is still undesirable. By carefully reworking their proofs, I was able to avoid both Infinity and Regularity in our development of natural numbers. For example, if you look at say nnacom you'll see Infinity is not used. T/Z also give a somewhat confusing definition of omega. Our equivalent but simpler definition df-om doesn't seem to occur in the literature, although it is related to Bell/Machover's natural number predicate.)

(26-Jun-04) map1 is our first theorem using en2. See how easy the proof is - no messy one-to-one onto function stuff! (There is a function that comes from elmap, but that has to do with the definition of set exponentiation, and we quickly get rid of it with fconst2.) In Hilbert space I'm building up a collection of basic theorems that will be needed to develop the theory of atoms.

(25-Jun-04) I think en2 is quite neat - it is not immediately obvious (to me at least) that you can conclude equinumerosity from the innocent-looking hypotheses. But look at its simple, beautiful proof: the function stuff magically appears then vanishes! I don't think en2 is published anywhere. It should let us arrive at some equinumerosity proofs much more quickly.

(24-Jun-04) omsmo is one of those "intuitively obvious" or "trivial" facts that textbooks never bother to prove. Unfortunately for us we must actually prove it to use it, since Metamath will not let us overlook such minor details. :)

(20-Jun-04) I haven't had any luck shortening zind. I am tired of working on it and will put it aside for a while.

Today's theorems finish up the list of single-digit operations where the result is a single digit. As you probably have noticed, we don't yet have a standard way of expressing multiple-digit numbers like they are normally written down. I used to have a definition for it but it was clumsy and I took it out. The interesting thing is that in abstract math there is rarely a need for numbers larger than say 4 (which is the largest number I have used so far outside of the list that we completed today). Glancing through the proof of Fermat's Last Theorem - one of the most difficult proofs ever - I don't think I saw a number larger than 12. So I'll probably postpone multiple-digit numbers until they are really needed. And even so we can represent them with expressions of single-digit numbers; for example 42 can be expressed as 6 times 7. Now that is an interesting problem: find the shortest representation of all numbers from say 1 to 1000 in terms of operations on single digits (addition, subtraction, multiplication, division, and exponentiation). Any takers? (A brute-force starting point is sketched below.) Raph will have multiple-digit numbers in Ghilbert.
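As a non-authoritative starting point for that puzzle, here is a small brute-force Python sketch of my own. It measures "shortest" as the number of digit literals used, caps intermediate values, and ignores unary minus and digit concatenation - all arbitrary choices:

    LIMIT = 1000          # targets we care about
    CAP = 10 * LIMIT      # bound on intermediate values
    MAX_DIGITS = 6        # give up past this many digit literals

    best = {d: (1, str(d)) for d in range(1, 10)}
    by_cost = {1: {d: str(d) for d in range(1, 10)}}

    for k in range(2, MAX_DIGITS + 1):
        level = {}
        for i in range(1, k):
            for a, ea in by_cost[i].items():
                for b, eb in by_cost[k - i].items():
                    cands = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
                    if a > b:
                        cands.append((a - b, f"({ea}-{eb})"))
                    if a % b == 0:
                        cands.append((a // b, f"({ea}/{eb})"))
                    if 2 <= b <= 13 and a <= 100 and a ** b <= CAP:
                        cands.append((a ** b, f"({ea}^{eb})"))
                    for v, e in cands:
                        # record only first (i.e. cheapest) expression found
                        if 1 <= v <= CAP and v not in best and v not in level:
                            level[v] = e
        for v, e in level.items():
            best[v] = (k, e)
        by_cost[k] = level

    print(best[42])       # expect cost 2, e.g. (2, '(6*7)')
    hard = [n for n in range(1, LIMIT + 1) if n not in best]
    print(len(hard), 'targets need more than', MAX_DIGITS, 'digits')

A real attempt would also have to decide whether parentheses count toward the length of a representation.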
(18-Jun-04) At first zind looked trivial. After all, we already have induction on natural numbers (ind at step 290), and this theorem is just "shifting" it to start at a different number. But as I got deeper into it, more and more unpleasant surprises kept popping up. After hours of tedious work I ended up with one of the biggest and ugliest proofs in set.mm that I see no way to simplify, and there are no "neat" lemmas that stand out (if it were broken up, it would only be for the purpose of breaking it up and would not make the proof more efficient). It doesn't make any sense why it should be so huge, and it makes me depressed. I'll have to look at it again after I recover. :)

(17-Jun-04) See isomin for an explanation of Takeuti and Zaring's initial segment idiom in isoini. This theorem, though cryptic, is copied symbol for symbol from T&Z.

oprprc1 is philosophically ugly but will allow us to "cheat" and shorten a few proofs in the future, by not having to worry so much about whether a class is proper or not.

(16-Jun-04) erthi is the "key" theorem involved in the proof of erdisj.

(14-Jun-04) zrevaddclt basically says: if B is an integer and A is a complex number and A+B is an integer, then A is an integer. I think reverse closure laws are interesting twists on closure laws.

(13-Jun-04) isotr and isotrALT are two different proofs of the same thing, because I wanted to see which approach is shorter. Well, one is shorter in set.mm and the other produces a shorter web page. So which do I pick?

f1owe uses the new isomorphism stuff to dramatically simplify its proof. Compare the old version (that will eventually be deleted), f1oweOBS.

(12-Jun-04) The definitions df-un and df-in have been changed to be more traditional. What used to be df-un is now dfun2 and vice versa; what used to be df-in is now dfin2 and vice versa. The price we pay is a dummy variable, but I finally decided that dfun2 and dfin2 are just too unconventional.

The proof of isowe was left as an exercise for the reader in Takeuti and Zaring, and thankfully I was able to figure it out! It will provide a great reduction in size of the enormous proof for f1owe - stay tuned...

The new definition df-cv for covering allows us to have a much simpler definition for atoms, and df-at is the new version of the definition. Theorems ch0psst, atcv0, and atelt are supporting theorems for the new df-at.

(11-Jun-04) bitr2d, psseq2i, psseq2d shorten some existing proofs. znegclt, zaddclt, zsubclt will be needed when we get into proofs about integers. cvbr, cvbr2 express the basic property of the new df-cv (covers) definition. There is much literature on the properties related to covering in Hilbert space, and these properties are ultimately tied to the superposition of states (Schroedinger's cat, etc.)

(10-Jun-04) funimass2 arose in a proof in a to-be-published logic book I'm reviewing. I thought it was very neat that the domain and range of F are completely irrelevant to the result, and that F doesn't have to be one-to-one or onto either. funimass1 is similar but depends on the range of F.
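Here is a tiny finite illustration in Python (a sketch of mine; I am quoting the form of funimass2 from memory as roughly "if F is a function and A is included in the converse image of B, then the image of A is included in B", so treat the exact statement as an assumption):

    # F is a (finite) function, given as a set of ordered pairs.
    F = {(1, 'a'), (2, 'b'), (3, 'a')}
    B = {'a'}
    coimg = {x for (x, y) in F if y in B}    # `'F " B = {1, 3}
    A = {1, 3}                               # any A included in coimg
    img = {y for (x, y) in F if x in A}      # F " A
    assert A <= coimg and img <= B           # the claimed implication
    print(img, '<=', B)

Note that nothing here cares what dom F or ran F are, and this F is not one-to-one.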
(9-Jun-04) elnnz1 could be used as a definition for natural numbers if we started with integers. nn0subt is a kind of closure law for subtraction of nonnegative integers. It is kind of surprising that its proof is so long, given that we already have the almost-identical nnsubt, but I couldn't find a simpler proof. An example of shintclt is that the intersection of two planes (linear subspaces) is a line (a linear subspace). spanclt shows that the span function "grows" an arbitrary set of vectors into (the smallest) linear space containing them.

(8-Jun-04) anandis, anandirs shorten a bunch of proofs previously using anandi, anandir plus sylbir. By the way the "s" suffix on anandis, anandirs means "eliminates a syllogism". ssel2 is a trivial variant of ssel, but it is needed so often that I decided to put it in - it makes a bunch of proofs slightly shorter. The proof of isofr was left as an exercise for the reader in Takeuti and Zaring - I always worry about these, because figuring out the answers to exercises is often the hardest part of doing these proofs, and if I can't do it the whole project comes to a standstill, because there can't be any missing pieces in the database. hvm1negt, hvsubcan1t, hvsubcan2t - some more elementary Hilbert space results; by the way the suffix "t" means "theorem form" (vs. "inference form", where the membership requirements are hypotheses).

(7-Jun-04) If you think isomin is incomprehensibly cryptic, don't blame me. :) It is stated exactly as written in Takeuti and Zaring's book, symbol for symbol. I did add a note explaining the initial segment idiom. I was surprised there's never been a need for something as basic as unss12 yet. funimacnv tells us that if A is a subset of the range, the image of the converse image of A is A - neat and in a sense intuitive, but its proof took some thought... elznn0 and elnn0z express some more number class relationships.

(3-Jun-04) df-span is the same as the span of a vector space you learned about in linear algebra. It does not require any special properties of Hilbert space.

(2-Jun-04) onxpdisj may ultimately find use in extending the set of complex numbers with +infinity, -infinity (for reals), which is useful in analysis. Basically, in our definition df-c (and most textbook definitions) complex numbers are ordered pairs, so per onxpdisj any ordinal provides a disjoint set that could extend C with artificial things like +infinity.

(1-Jun-04) sqr9 extends the series sqr0, sqr1, sqr4. r1pwcl is a new version of the May 30 version that now has Lim B as an antecedent (instead of a hypothesis).

(31-May-04) h1dle simplifies the proof of hatomic and should be useful for other things too. Note: "atom" has nothing to do with physical atoms but is just a mathematical term describing a smallest non-zero element of a lattice, where there is nothing smaller than it except the zero element. In the lattice of closed subspaces of Hilbert space, all atoms are one-dimensional subspaces (I hope to show the proof of this soon). To get a one-dimensional subspace, we start with a singleton containing any non-zero vector, then take the double orthogonal complement to grow the singleton to a closed subspace (which in this case would just be the set of all vectors that differ by a scalar factor, i.e. a one-dimensional "line"). I haven't seen the idiom _|_ _|_ {B} for a one-dimensional subspace in the literature, but it is convenient.
(The antecedent of h1dle doesn't require that B be nonzero, but that's because the theorem also works when B is zero - although _|_ _|_ {B} is not a 1-dimensional subspace in that case but instead the zero subspace.)

(30-May-04) Raph contributed 3 really nice results - rankr1a, r1pw, and r1pwcl - on rank and the cumulative hierarchy of sets (R1). I haven't seen any of them in a textbook.

(28-May-04) exp0 and expp1 are the final goals of exponentiation of complex numbers to nonnegative integer powers. These effectively provide the recursive definition found in most textbooks (which typically do not go through the tedious work of justifying the recursion). We could have started with exp0 and expp1 instead of the direct definition df-exp, but then we would have had to introduce expp1 as an axiom because it is self-referential. Instead, we want to ensure direct traceability back to the ZF set theory axioms. From this point forward we will develop all further properties from exp0 and expp1 only, never referring back to the development leading up to them (and never referring directly to df-exp again). The work we did to develop recursion on nonnegative integers - nn0rzer and nn0rsuc - will be reusable in the future for other recursive definitions. It turns out that nn0rone and nn0rfnnn0 are not needed. In particular, we can prove closure of exponentiation from exp0 and expp1 using induction, eliminating the need for nn0rfnnn0. However I'll leave them in because they may be useful in future applications.

(24-May-04) Raph Levien developed the definition of exponentiation of complex numbers to nonnegative integer powers in Ghilbert and translated it to Metamath. (He has a Ghilbert to Metamath translator now.) I am adapting it for the official set.mm. It will replace the current "dummy" definition df-expOBS that only works for powers of 2.

(18-May-04) None of these function-related theorems appear in any textbook I'm aware of (if there is no bibliographical reference), but most will be needed later to prove some theorems (about cardinality etc.) that do appear in textbooks. I guess that once set theory advances beyond a certain point, these kinds of theorems are expected to be "obvious" to the reader and are not explicitly mentioned. Some of them might make nice homework exercises for a set theory class. I wonder if any teacher has ever borrowed from our collection for that purpose.

(15-May-04) I've often thought Takeuti/Zaring's initial segment notation in today's eliniseg, iniseg, dffr3 is a rather cryptic and convoluted idiom just to avoid the dummy variable in {y|yRA}.

MM> show labels *a4*
The assertions that match are shown with statement number, label, and type.

  1971 a4i $p       1979 a4s $p       1983 a4sd $p      2402 a4a $p
  2407 a4c $p       2413 a4c1 $p      2533 sbea4 $p     2534 sbia4 $p
  2535 sbba4 $p     2643 a4b $p       2648 a4b1 $p      2653 a4w $p
  2659 a4w1 $p      3905 ra4 $p       3906 ra4e $p      3907 ra42 $p
  4642 cla4gf $p    4643 cla4egf $p   4649 cla4gv $p    4650 cla4egv $p
  4657 cla4e2gv $p  4658 cla42gv $p   4665 cla4v $p     4666 cla4ev $p
  4673 rcla4v $p    4674 rcla4ev $p   4686 rcla42v $p   4687 rcla42ev $p
  4696 cla4e2v $p   4891 a4sbc $p

A few people have commented that the theorem labels are obscure, and in general I'm not thrilled with them either. Every now and then I revise a few for better uniformity. But no one has been able to suggest a better approach that still keeps the names short.
For example, I use "sbth" instead of "SchroederBernsteinTheorem" because the latter would be annoying to type while entering a proof, and lists like the above would not be very compact. In general, an "English-like" descriptive label would be have to be very long to be meaningful - how would you label cla42gv? I wonder if a breakdown for each label like the "cla42gv" example above would be useful enough to be worth the effort of maintaining it. Maybe this could be done with an annotated label in each description, something like: cl+a4+2:4+g+v where cl, a4, etc. are looked up in a list and 2:4 refers to the 4th meaning of "2" in that list. (10-May-04 - 12-May-04) Some miscellaneous stuff and simple arithmetic facts we'll need later on. A bold attempt to venture beyond 2+2=4... Someday of course we'll have to complete the elementary school addition and multiplication tables. (At least Raph Levien will do this for his Ghilbert database, and has ideas for decimal numbers - I used to have them but took them out because I thought the notation was ugly and I wasn't happy with it.) However it is interesting that "higher math" rarely uses numbers beyond 2. For example scanning the proof of Fermat's last theorem http://math.stanford.edu/~lekheng/flt/wiles.pdf the largest number I see for most of the proof is 4 (although the last paragraph mentions some larger numbers, but this may be material beyond FLT, I'm not sure). (9-May-04) exists2 is another one from the existential uniqueness list I made several years ago. I think it is interesting because it shows a way to "talk" about multiple objects without using set theory. (7-May-04) rdglem1 was a final loose end that was cleaned up. zfreg* are cleanups of older versions that didn't use restricted quantification. Other stuff you don't see behind the scenes: over 2 dozen theorems were deleted. These were obsolete versions have been completely eliminated from all proofs using them, and there are now no *OBS theorems in set.mm. zfregs (the strong version) is curious because the Axiom of Infinity is apparently needed for its proof (at step 5 of tz9.1 used in its proof) yet there is no obvious component of Infinity in the result. No one, including some prominent mathematicians, has been able to tell me why or even whether Infinity is _required_ for a proof of this theorem. All textbook proofs I've seen just implicitly use Infinity (in the form of a finite recursion) at this step without further comment. In other proofs I've been able to avoid the Axiom of Infinity (vs. textbook proofs) by using transfinite recursion instead - interestingly transfinite recursion does not require the Axiom of Infinity, whereas finite recursion (the textbook version) does. This is because the latter takes a shortcut by just showing the existence of the necessary function over omega, without showing what it looks like, whereas the former (with far more difficulty) must state explicitly the required function over ordinals because it is proper class and does not exist as a set. In set.mm we don't take these shortcuts, and in most cases we use finite recursion (frfnom, frzer, frsuc) without invoking the Axiom of Infinity. (6-May-04) ackm completes the proof of Maes' axiom by establishing the final link to our Axiom of Choice ax-ac. (4-May-04) More work on the new transfinite recursion proof. Done with all the lemmas now. By the way, when I revise existing proofs, I usually suffix the old one with OBS (obsolete), so the old tfrlem1 becomes tfrlem1OBS. 
(10-May-04 - 12-May-04) Some miscellaneous stuff and simple arithmetic facts we'll need later on. A bold attempt to venture beyond 2+2=4... Someday of course we'll have to complete the elementary school addition and multiplication tables. (At least Raph Levien will do this for his Ghilbert database, and has ideas for decimal numbers - I used to have them but took them out because I thought the notation was ugly and I wasn't happy with it.) However it is interesting that "higher math" rarely uses numbers beyond 2. For example, scanning the proof of Fermat's last theorem http://math.stanford.edu/~lekheng/flt/wiles.pdf the largest number I see for most of the proof is 4 (although the last paragraph mentions some larger numbers, but this may be material beyond FLT, I'm not sure).

(9-May-04) exists2 is another one from the existential uniqueness list I made several years ago. I think it is interesting because it shows a way to "talk" about multiple objects without using set theory.

(7-May-04) rdglem1 was a final loose end that was cleaned up. zfreg* are cleanups of older versions that didn't use restricted quantification. Other stuff you don't see behind the scenes: over 2 dozen theorems were deleted. These were obsolete versions that have been completely eliminated from all proofs using them, and there are now no *OBS theorems in set.mm.

zfregs (the strong version) is curious because the Axiom of Infinity is apparently needed for its proof (at step 5 of tz9.1 used in its proof) yet there is no obvious component of Infinity in the result. No one, including some prominent mathematicians, has been able to tell me why or even whether Infinity is _required_ for a proof of this theorem. All textbook proofs I've seen just implicitly use Infinity (in the form of a finite recursion) at this step without further comment. In other proofs I've been able to avoid the Axiom of Infinity (vs. textbook proofs) by using transfinite recursion instead - interestingly, transfinite recursion does not require the Axiom of Infinity, whereas finite recursion (the textbook version) does. This is because the latter takes a shortcut by just showing the existence of the necessary function over omega, without showing what it looks like, whereas the former (with far more difficulty) must state explicitly the required function over the ordinals because it is a proper class and does not exist as a set. In set.mm we don't take these shortcuts, and in most cases we use finite recursion (frfnom, frzer, frsuc) without invoking the Axiom of Infinity.

(6-May-04) ackm completes the proof of Maes' axiom by establishing the final link to our Axiom of Choice ax-ac.

(4-May-04) More work on the new transfinite recursion proof. Done with all the lemmas now. By the way, when I revise existing proofs, I usually suffix the old one with OBS (obsolete), so the old tfrlem1 becomes tfrlem1OBS. The OBS version will stay until all references to it are removed. This way I can ensure that, at any point in time, the set.mm database as a whole is consistent and complete, even while things are being modified.

(3-May-04) More work on the new transfinite recursion proof.

(2-May-04) isoid, cflem, and cfval are simple theorems to try out the new df-iso and df-cf definitions. The proof of transfinite recursion (tfrlem*) is being redone in order to be slightly shorter.

(1-May-04) fv2, fv3, tz6.12*, etc. are part of a cleanup of function value stuff to use the more compact xFy (df-br) in place of <x,y> e. F.

(30-Apr-04) Finally, aceq5.

(28-Apr-04 - 29-Apr-04) We need aceq3 and aceq4 to get the final piece of aceq5, and they will also fill in part of an AC equivalence series I have in mind. (The eventual goal here is to get a link from Maes' AC to ours.)

(27-Apr-04) I figured I might as well complete the equality/subclass series with eqsstrd, eqsstr3d, etc. These are named for their resemblance to bitr (simple chaining), bitr3 (useful for eliminating definitions), and bitr4 (useful for introducing definitions). limelon is cleaned up from an older version and is needed for some future stuff related to df-cf.

(26-Apr-04) I cleaned up some function value stuff to improve the proofs leading to the very useful abrexex, which is one of those simple-looking theorems that hides a lot of power.

After noticing several times the "need" for syl5ss, syl5ssr, syl6ss, syl6ssr, I finally added them in. (These, like others in the syl5xx, syl6xx "family", are named after their resemblance to syl5 and syl6.) I was able to shorten over 2 dozen proofs with them, so I was curious what this actually meant in terms of set.mm file size. It turns out that a total of about 300 bytes were trimmed from the (compressed) shortened proofs, vs. about 1000 additional bytes for the 4 new theorems. So they haven't quite "paid" for themselves yet. Perhaps they will when set.mm is 3 times as big...

(23-Apr-04 - 24-Apr-04) These are some function and function value theorems that we will need later. And a step closer to aceq5...

(17-Apr-04 - 22-Apr-04) I have a list of uniqueness and "at most one" theorems I worked out several years ago, and I decided to add some of them to the database. Here is an excerpt from an email I wrote.

Let us consider ax-1 through ax-16 (plus ax-mp, ax-gen). We will ignore ax-17 (which, external to set.mm, is a metatheorem derivable from the others). I mentioned earlier that ax-1 through ax-15 (the $d-free fragment) cannot prove all $d-free theorems, and indeed by Andreka's theorem a complete $d-free fragment is impossible. For completeness we need to add ax-16, which states:

  A. x x = y -> ( phi -> A. x phi )   where $d x y

This is the axiom that, one way or another, ultimately allows us to eliminate dummy variables from a proof. But it has a $d condition. There is a trivial theorem (exists1 in the current set.mm) that states:

  E! x x = x <-> A. x x = y   where $d x y

E! means "there exists exactly one." With this we can replace the antecedent of ax-16 and restate it so that it does not have a $d condition. E! effectively "hides" the distinct variable y otherwise needed for ax-16. Of course E! is a defined connective, and when we eliminate the definition the distinct variable returns. I would be surprised if, by treating E! as a primitive connective and adding $d-free axioms for it, we could exploit its "hiding" feature to defeat Andreka's theorem. But I don't have a proof that we couldn't defeat it.
In any case I think E! is an interesting connective, and it seems poorly developed in the literature, usually mentioned briefly in passing and sometimes not (formally) at all. Maybe it's considered too trivial. But its theorems are sometimes not that obvious, and in fact E! x E! y has been mistakenly used to mean "there exists exactly one x and exactly one y." (Nowhere, to my knowledge, is the correct expression for this found, except in set.mm.) I have sometimes wondered what a complete axiomatization of E! might look like under (standard, not set.mm's) predicate calculus without equality (one that would allow us to prove all theorems mentioning E! but not mentioning =). In set.mm there are many theorems of this sort, such as

  E! x E. y phi -> E. y E! x phi

(15-Apr-04) I want to link Maes' AC all the way to our Axiom of Choice ax-ac, but I'm missing some standard equivalents needed to do that, and I'll be working on those as I can fit them in. aceq5lem1 is a start at this. When it is complete, aceq5 will prove:

  aceq5 $p |- ( A. x E. f ( f (_ x /\ f Fn dom x ) <->
      A. x ( ( A. z e. x -. z = (/) /\ A. z e. x A. w e. x
        ( -. z = w -> ( z i^i w ) = (/) ) ) ->
        E. y A. z e. x E! v v e. ( z i^i y ) ) ) $=

(14-Apr-04) Here, finally, we present Maes' shorter version of his 5-quantifier AC. (The longer one has been deleted from the database, since it's no longer "interesting.") Not only does this version have only 5 quantifiers (our ax-ac has 7 or 8 if put in prenex form), but if we expand out the biconditional in our ax-ac, Maes' axiom is the same length as ours, consistent with my claim that ax-ac is the shortest possible. This result refutes a conjecture of Harvey Friedman that a 5-quantifier AC was not possible (and he thanked me for verifying Maes' result).

(7-Apr-04) Maes discovered a SHORTER version of his 5-quantifier Axiom of Choice. That means the old one is obsolete, so we'll scrap it and reprove some lemmas to derive the shorter one.

(6-Apr-04) exists1 is an interesting little observation. If we treat E! as a primitive connective, we can use exists1 to replace the antecedent of ax-16. Now, disregarding ax-17 (which is technically redundant), the only axiom requiring distinct variables is ax-16, so exists1 makes that requirement go away. Of course we would need to add axioms for E!, and I'm not sure what they would be.

(5-Apr-04) eldm2, elrn2 shorten several proofs by eliminating the need for df-br.

(3-Apr-04) Some more r19.* stuff, while we're on a roll...

(2-Apr-04) I thought I'd need these for Maes' theorem - I didn't - but they still might be handy in the future.

(1-Apr-04) As you can guess, I've been busy formalizing the proof of Kurt Maes' theorem. It's now done! AC with only 5 quantifiers in prenex normal form is a new record and has not been published yet. The Metamath proof now confirms with absolute certainty that there is not a mistake in Kurt Maes' proof. References:
http://www.cs.nyu.edu/pipermail/fom/2003-November/007653.html
http://www.cs.nyu.edu/pipermail/fom/2003-November/007690.html

(25-Mar-04) Miscellaneous stuff.

(25-Mar-04) 19.28v shortens some proofs that used to use 19.28. uni0 revises an earlier version. funfvima3 might be useful in the future to help prove Axiom of Choice equivalents.

(24-Mar-04) inf5 - Now, is this neat, or what?

(23-Mar-04) unidif0 and yesterday's disj4 will find use in a neat theorem tomorrow - check back!

(22-Mar-04) mprgbir, even though kind of specialized, found use in shortening 6 proofs.
I haven't seen disj4 in books, but it is kind of interesting - in particular it implies the right-hand side is symmetrical when A and B are swapped.

(17-Mar-04 - 21-Mar-04) More odds and ends for future use.

(13-Mar-04 - 16-Mar-04) I thought dmuni is kind of interesting; I'd never seen it before I glanced at it in Enderton, so I put it in; I don't know if it has any use yet. In addition to residm and resabs1,2, we also have the commutative law rescom, proved earlier. elimasn is often used implicitly by Takeuti/Zaring to represent minimal elements - I don't think it is intuitive, but it does save a bound variable the way they use it, e.g. in their Def. 6.21 p. 30.

(12-Mar-04) hta is a nice theorem - it tells us the closest we can get to Hilbert's epsilon in ZFC, and provides a way to eliminate it from proofs. For a discussion see http://ghilbert.org/choice.txt

(11-Mar-04) Theorem schemes can be represented equivalently with class variables and wff variables in set theory. scottexs and scott0s provide excellent examples of converting theorems with class variables (scottex, scott0) to theorem schemes with wff variables.

(10-Mar-04) A surprising result communicated by Gregory Bush. If "B" represents df-bi, then in D-notation the proof reads: DDD3DD2D1D3D1B11B. He wrote: 'It's strange for two reasons: first, it's radically more direct than the listed proof (17 steps instead of 859). Second, it doesn't matter what axiom (or definition) goes in the second-to-last place. When I first found this, it was actually coming up with df-or instead of ax-1 in that spot, even though "or" never appears in the expression, which led me through hours of debugging.'

(9-Mar-04) Strengthened some class substitution theorems. elabs (which used to be called sbc7) now does not have any distinct variable restrictions.

(8-Mar-04) These will be needed to work with abstraction classes substituted for class variables.

(7-Mar-04) heplem is something I'm working on with Raph Levien and needs to be cleaned up at some point. The second hypothesis is not very intuitive. Basically I took a shortcut and reused zornlem1 to get this result fast, in order to prove that it could be done, but the result is a very complicated Hilbert epsilon formula (although mathematically it's perfectly sound). Raph indicated he would post some stuff on Hilbert epsilons soon at ghilbert.org.

(6-Mar-04) The old transfinite induction theorems were converted so that they now use restricted quantification.

(5-Mar-04) find updates an older version of finite induction by using restricted quantifiers. dmeqi, dmeqd, rneqi, and rneqd shorten a bunch of proofs by eliminating the ax-mp or syl needed for these commonly used variants of dmeq and rneq. Trivial as they are, last night I shortened 32 proofs with them.

(4-Mar-04) r19.20i2 will help us in some future proofs. zfinf shortens an older version by using restricted quantification. The proof of omex was shortened by using r19.20i2, the new version of zfinf, and the new version of peano5 (proved a couple of days ago). ralxp is kind of neat, letting us convert from a single quantification over a cross product to a double quantification; it should be useful for some future theorems involving operation values.

(3-Mar-04) reuuni, reuunis clean up older versions. dfchj3 is a simpler definition of closed subspace join in terms of supremum; I might make it the official definition some day.
(2-Mar-04) peano5 has been restated with restricted quantification and reproved with a 25% shorter proof (although it is still somewhat long and difficult). Unlike many textbooks, we prove peano5 without invoking either the Axiom of Infinity or the Axiom of Replacement. This makes the proof longer than those textbook proofs. However, I think it is philosophically nicer to be able to work with natural numbers without having to assume that an infinite set exists.

(1-Mar-04) tfis, tfis2f, tfis2 update older versions of these theorems by using restricted quantifiers. Their proofs are shorter as a result (30% shorter for tfis), and they will also shorten proofs that use them as they are phased in.

(29-Feb-04) tfi updates an older version of the same theorem by using a restricted quantifier.

(28-Feb-04) r1val1 is one of several alternate definitions of R1 in the literature.

(27-Feb-04) cardaleph not only shows us that there exists an aleph for every transfinite cardinal, but it gives us an explicit expression for that aleph! The textbook theorems I've seen only show existence, but I think it's nicer to have a closed expression for the actual thing. The proof is quite long and brings together a lot of the cardinal/aleph results we've proved up to this point. The idea behind the proof is to show it separately for 0, successor, and limit alephs. Miraculously, each of the three cases evaluates to the exact same expression. Then we combine the 3 cases with jaod at the end, then eliminate the triple OR with ordzsl.

(26-Feb-04) alephord2i finally completes all the aleph ordering stuff we should need. alephle may seem "obvious" - after all, 0 is certainly less than aleph`0 (countable infinity), 1 is less than aleph`1 (first uncountable infinity), and so on. In fact you'd almost think that it should be "less than" instead of "less than or equal to". Well, think again - it turns out that A = aleph`A when A is big enough! I think that's amazing, and hopefully we'll prove it eventually.

(25-Feb-04) hbeleq nicely shortens proofs when it can be used in place of hbeq - I don't know why I didn't think of it before! I like this little theorem. I shortened 12 proofs with it, as you can see in the "referenced by" list. Exercise: Does hbeleq hold if y and A are not distinct?

(24-Feb-04) These are some boring utility theorems that will come in handy later and also shorten some existing proofs. fnopab2 has already "paid" for itself, meaning it shortens enough proofs so that the net effect of adding it to the database is to reduce the database size (look at its "referenced by" list).

(23-Feb-04) elxp2 "modernizes" elxp with restricted quantification. I think elxp4 is really cool - no dummy variables - it exploits op1sta and op2nda, which we proved on 17-Feb.

(22-Feb-04) r19.22dv2, r19.26m, and ralcom give us some tools to use when two different quantifier bounds are involved. This will shorten some future and existing proofs. eusn and euuni are revisions of earlier versions of these. Someday I might introduce an iota, but so far it wouldn't be used much. I think Raph uses iota a lot in his Ghilbert HOL stuff. There is a person who _may_ have volunteered to write a gh2mm translator; if that happens we'll have a lot of new stuff to put here.

(21-Feb-04) onuninsuc, orduninsuc, nnsuc restrict the existential quantifier of older versions to strengthen them.

(20-Feb-04) nndiv may become the basis for a new defined relation x|y, or "x divides y", for number theory. divcan2t and divcan3t are needed for nndiv.
divcan2t and divcan3t actually were already proven as divcan2z and divcan3z; we just use the weak deduction theorem to get the "t" versions. It would have been nicer to prove them directly, but we would need divmult and divasst, and we only have divmulz and divassz. Maybe someday I'll prove divmult and divasst; then I can shorten the proofs of divcan2t and divcan3t. The weak deduction theorem produces horrible-looking proofs, but it is a quick and dirty way to get stronger theorems.

(19-Feb-04) Added some elementary theorems for the ordinal number 2.

(18-Feb-04) disj is a version of disj1 with restricted quantification that will shorten some proofs. wefrc, tz7.5 are new versions of older theorems with restricted quantification and shorter proofs. alephgeom shows all alephs are infinite; the converse holds because (with our definition) a function's value outside its domain is the empty set per ndmfv.

(17-Feb-04) r19.23aivv lets us shorten several proofs. dffr2, frc, dfepfr, and epfrc are new versions of older theorems that have been shortened with restricted quantifiers and re-proved with shorter proofs. op1sta is from Raph's HOL Ghilbert stuff, where he calls it fst, and op2nda is a modified version of his snd.

(16-Feb-04) birex2i, r19.21ad, cleqrabi are simple but we'll use them later. fvres is a revision of an older version. dmsn0 completes the series: dm0, dmsn0, dmsnsn0. Curiously, dom 0 = dom {0} = dom {{0}} = 0, but dom {{{0}}} and beyond are not 0. For example, dom {{{0}}} = {0}. (The reason is that {{0}} happens to be the Kuratowski ordered pair <0,0>, so {{{0}}} is a relation with domain {0}. A little computational check appears after the 8-Feb-04 entry below.)

(15-Feb-04) Enderton p. 222 gives an interesting discussion of kard vs. card. One thing I don't like about kard (that Enderton doesn't mention) is that kard A is not always equinumerous to A, a property that card has by cardid. For example, kard of 0, 1, and 2 are the singletons {0}, {1}, and {2}, which are all equinumerous to 1 by ensn1. (kard 3 is not {3} but a messy expression, and they get messier as n grows.)

(14-Feb-04) We prove kardex using Scott's trick. This is a little different from the proof suggested in Enderton.

(13-Feb-04) I've never seen alephsuc2 in a book, but it looks like a nice way to define the aleph function, and it may be the shortest possible. It exploits yesterday's cardval2.

(12-Feb-04) alephord3 is yet another ordering theorem. (In the end we will need all of these.) I think cardval2 is kind of nice; I came across it in Suppes only after I started, and maybe one day I'll change df-card. Suppes is the only place I've seen it. The equivalence of cardval and cardval2 is a very long journey and uses the Axiom of Choice.

(11-Feb-04) alephord2 finally gives us a real textbook theorem (and its converse as well). alephsucdom is a consequence of the ordering stuff; actually one of the earlier lemmas, alephordlem1, is more convenient for proving it, and alephnbtwn2 also gets used.

(10-Feb-04) alephord generalizes alephordi to include the converse. I decided not to combine alephord and alephordi because the latter has one less antecedent, making it slightly easier to use. It is also referenced twice in the alephord proof, so combining them would cause its proof to be redundantly expanded twice.

(9-Feb-04) alephordi is a key theorem essential for further aleph results. The proof provides a nice example of applying transfinite induction. This version (using strict dominance) doesn't seem to appear in textbooks, but later we will derive the textbook version that uses cardinal ordering.

(8-Feb-04) alephnbtwn2 is the equinumerosity version of alephnbtwn.
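Here is the little check promised in the 16-Feb-04 note above: a Python sketch of mine that models pure sets as frozensets, with 0 as the empty set and <x,y> as the Kuratowski pair {{x},{x,y}} (it assumes every element is itself a frozenset):

    def fs(*xs):
        return frozenset(xs)

    def unpair(e):
        # Return (x, y) if e is the Kuratowski pair {{x},{x,y}}, else None.
        if not isinstance(e, frozenset):
            return None
        elems = sorted(e, key=len)
        if len(elems) == 1 and len(elems[0]) == 1:
            (x,) = elems[0]
            return (x, x)                    # {{x}} is <x,x>
        if (len(elems) == 2 and len(elems[0]) == 1
                and len(elems[1]) == 2 and elems[0] <= elems[1]):
            (x,) = elems[0]
            (y,) = elems[1] - elems[0]
            return (x, y)
        return None

    def dom(r):                              # { x | <x,y> e. r for some y }
        return {p[0] for e in r for p in [unpair(e)] if p}

    O = fs()                                       # 0 (the empty set)
    print(dom(fs(O)) == O)                         # dom {0} = 0 : True
    print(dom(fs(fs(O))) == O)                     # dom {{0}} = 0 : True
    print(dom(fs(fs(fs(O)))) == fs(O))             # dom {{{0}}} = {0} : True
    print(dom(fs(fs(fs(fs(O))))) == fs(fs(O)))     # dom {{{{0}}}} = {{0}},
                                                   #   not 0 : True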
(7-Feb-04) sneqi, sneqd are trivial but convenient and will shorten some proofs. ordsucun will be needed later to compute the rank of an unordered pair.

(6-Feb-04) fvreseq shortens an earlier version with wral notation. inv and unv are trivial and probably useless - things to do when feeling lazy...

(5-Feb-04) sbcel1 and sbcel2 complete class substitution into a membership relation; equality is defined in terms of membership, so now we can do that too in principle. I can't think of a use for dmco2 yet, but I thought it was neat and couldn't resist proving it. It wasn't immediately obvious (to me at least), but the proof is relatively simple.

(4-Feb-04) With sbci, sbcn, and sbcal we now have the basic tools to move class substitution in and out of any wff, since all other wff connectives are defined in terms of implication, negation, and universal quantification.

(3-Feb-04) rabid is trivial, but it can simplify some proofs such as nnwos. sbcn gets a basic class substitution property out of the way for future use.

(2-Feb-04) hbf is needed for fopab2. fvopab2 is a more general form of an earlier version that had separate hypotheses. fopab2 is a generalization of an earlier version that only proved the forward direction of the conclusion; it is expected to be a nice tool for some future equinumerosity theorems.

(1-Feb-04) dmopabss is a simple result that previously was proved explicitly in a couple of other proofs, and it simplifies those proofs. dmopab2 is a powerful equivalence that is expected to be useful in the future and has already been used to simplify some proofs; previously only the forward direction was proved.

(31-Jan-04) Got a bunch of bound-variable hypothesis builders out of the way. Actually they all chain together to help prove fnopabg, which replaces an earlier version with a more powerful bidirectional conclusion. dmsn is kind of curious - it holds with all of its class existence hypotheses eliminated, although the proof of this is somewhat tedious; but it shortens proofs that use it, over an earlier version that required that the classes exist.

(30-Jan-04) Eliminated the requirement that C be a set in elmap. fnopabfv is now a bidirectional result, making it more powerful. dmsnsn0 is a cute technical result that will be used in the future. Curiously dom 0 = dom {0} = dom {{0}} = 0, but dom {{{0}}} and beyond are not 0.

(29-Jan-04) sb7 should make some logicians happy who don't like x being both free and bound in df-sb. However I still think df-sb is neat because it doesn't have any distinct variable requirements.

(28-Jan-04) Self-explanatory.

(27-Jan-04) I think sbidm is nice, but I don't know if it will ever be useful. I simplified Raph's proof of nfunv, but later he wrote about my version, "Neat. I still kinda like my proof, because it's portable to second-order arithmetic with Cantor pairing (in which the universe is a relation, the complete relation to be specific)." (His Ghilbert HOL database has his version.)

(26-Jan-04) cleq2tr is on the one hand trivial, on the other hand to me it seemed non-intuitive at first. As an exercise, try to convince yourself of it informally.

(25-Jan-04) The very basic cleqtr had never been proved in the database before, and surprisingly it seems to have little use. Only one proof was shortened by it. alephnbtwn, on the other hand, is a pretty deep and basic "must have" fact about alephs that will be of much use in the future.

(24-Jan-04) opnz is an easy theorem I thought should be there, although I don't know if it will ever be useful.
iunrab gets a basic fact of indexed unions out of the way for future use.

Copyright terms for this file: Public domain (see http://us.metamath.org/copyright.html#pd).