Peter,

Some time ago I tried to work out a suitable approach to treating 'non-denoting' or 'undefined' expressions in an automated theorem proving situation. When I was subsequently referred to a number of papers on this subject I discovered that the approach I'd chosen pretty much equated to that used in LPF.

Some of the advantages of using LPF seem to be covered pretty well in the papers I've read (Barringer, Cheng & Jones, 1984, "A Logic Covering Undefinedness in Program Proofs"; Cheng & Jones, 1991, "On the usability of logics which handle partial functions"; Jones, 1996, "TANSTAAFL (with partial functions)"); however, what seemed to me the biggest advantage of using the LPF approach isn't discussed there to any extent.

If we define rep (x, y) ("x is replaceable by y") to mean that if x is a denoting expression then it is equal to y, then, in LPF, we have

  rep (P, Q) implies rep (P|R, Q|R)
  rep (P, Q) implies rep (R|P, R|Q)
  rep (P, Q) implies rep (~P, ~Q)

(and, as a consequence of the above)

  rep (P, Q) implies rep (P&R, Q&R)
  rep (P, Q) implies rep (R&P, R&Q)

and, generally,

  rep (P, Q) implies rep (f(P,R), f(Q,R))
  rep (P, Q) implies rep (f(R,P), f(R,Q))

for any function f which can be constructed using | and ~.

Now, if we define other operators (eg the arithmetic operators) in such a way as to ensure that they preserve 'replaceability' in the same way (this is trivial when a non-denoting operand produces a non-denoting result), then this facilitates application of replacement rules without a lot of tedious checking of definedness conditions. For example, the algebraic rule "a/a = 1 provided a is non-zero" is simply rep (a/a, 1); and if we apply it to a sub-expression (eg a/a = b -> P becomes 1 = b -> P) without any checking that it is denoting, then the only way this can change the value of the whole expression is to change it from a non-denoting expression to a denoting one. That is, using a replacement rule of this sort does not change the value of a denoting expression.

This fits in perfectly well with the model-theory / proof-theory link in LPF, which is that a valid proof is one which always takes true premises to true results, and may take non-denoting premises to any sort of result (eg the contr rule,

  E1, ~E1
  -------
     E2

when E1 is non-denoting and E2 is false). For example, the proof

  2 * 0 / 0 = 2 * 0 / 0
  (2 * 0) / 0 = 2 * (0 / 0)
  0/0 = 2 * 1
  1 = 2

which is produced by applying replacement rules to non-denoting terms, follows the same pattern (non-denoting premises and false conclusion). It seemed to me that being able to apply the usual replacements to an expression without having to worry about undefinedness must be a major convenience.

Now the papers I've read (listed above) do talk about the logical connectives being monotone (which amounts to saying that if an operand changes from non-denoting to denoting, and this changes the result, then the result changes from non-denoting to denoting; ie the operators preserve 'replaceability'). Apart from that, they don't mention anything much related to the above. I'm curious as to whether that is because it is too obvious, well-known or insignificant to be worth mentioning, or whether it is an additional practical advantage of LPF worth pointing out?

I'd much appreciate your comments on the above.

Regards,
Jeremy Dawson


Larry,

Yes, I was looking at something similar to hyp_subst_tac for a new logic, though I don't think it is feasible. What I have done is put axioms for LPF, a logic which handles undefined terms (references follow),

LPF References

A Logic Covering Undefinedness in Program Proofs

by H. Barringer, J. H. Cheng and C. B. Jones; published in Acta Informatica, volume 21, pages 251-269, 1984.

A Typed Logic of Partial Functions Reconstructed Classically

by C.B. Jones and C.A. Middelburg; published in April 1993 as a technical report from Utrecht University (Logic Group Preprint Series, number 89). To appear in Acta Informatica in 1994.

On the usability of logics which handle partial functions

by J. H. Cheng and C. B. Jones; pages 51-69 of the 3rd Refinement Workshop, edited by C. Morgan and J. C. P. Woodcock; published by Springer-Verlag, 1991. This is also available as a technical report, UMCS-90-3-1.

Partial functions and logics: A warning

by C. B. Jones; published in Information Processing Letters, volume 54, pages 65-67, 1995. This paper has stirred up an interesting controversy on the newsgroups comp.specification.larch and comp.specification.z.

into an Isabelle theory, and prove lots of derived theorems, together with a 'replaceability' predicate, where rep (P,Q) means that, if P is defined, then P equals Q. (This is useful since

  [| rep(P,Q) ; P |] ==> Q

is a theorem of the system.) (In LPF, a valid derivation takes true to true, and false or undefined to anything.) This in turn means that, if a function f satisfies

  rep (p,q) ==> rep (f(p), f(q))     (*)

then we have a sort of substitution rule

  [| rep(P,Q) ; f(P) |] ==> f(Q)

Hence my interest in hyp_subst_tac. One has to prove for each expression f(_) that (*) holds, which is true if f is made up of operators each of which satisfies (*); and I can do this automatically. My tactic, when given the conclusion of the following thm, will create the theorem

  "(!!x. rep(P(x), Q(x))) ==> rep(ALL x. P(x) --> R(x), ALL x. Q(x) --> R(x))" : thm

from a store of theorems which includes

  val rep_imp = "[| rep(?P1, ?Q1); rep(?R, ?S) |] ==> rep(?P1 --> ?R, ?Q1 --> ?S)" : thm

and

  val rep_ex = "(!!x. rep(?P(x), ?Q(x))) ==> rep(EX x. ?P(x), EX x. ?Q(x))" : thm

One would like to have a tactic which would use the above to (e.g.) rewrite any assumption in a subgoal of the form f(a/a) to f(1), where f satisfies (*). I don't think it would be easy.

Regards,
Jeremy Dawson


Larry,

Thanks, will do.

> As I've told him, I suspect the right way to implement LPF is by exploiting
> the fact (I think it's true!) that, for the subset of formulas that are
> defined, LPF is identical to classical logic. The idea is to have ordinary
> classical logic present, and use fast_tac as usual, and also have some rewrite
> rules for mapping general LPF formulas to classical formulas once they are
> shown to be defined. Otherwise you have to check, over and over again, that
> particular subformulas are defined.

Not quite sure what you mean here. Do you mean having two separate sets of operators, eg |, &, --> etc for LPF, and another lot of corresponding ones for classical, with some sort of guarantee that the latter ones would not be applied to undefined arguments? If so, how would this guarantee work (eg in the case of terms entered by the user as goals)?

If you mean using the classical rules with extra assumptions (namely the definedness of the terms involved), which assumptions would be satisfied automatically when the terms had been shown to be defined, then this would be consistent with what I've done so far - I'd have to write an LPF variant of fast_tac with the LPF rules, some of which do include a definedness check among the assumptions. As a matter of fact, a lot of the rules don't actually require any definedness assumption.
For example, we have the following:

  conjI       "[| P; Q |] ==> P&Q"
  conjunct1   "P&Q ==> P"
  conjunct2   "P&Q ==> Q"
  dnI         "P ==> ~~P"
  dnD         "~~P ==> P"
  disjI1      "P ==> P|Q"
  disjI2      "Q ==> P|Q"
  disjE       "[| P|Q; P ==> R; Q ==> R |] ==> R"
  notE        "[| ~P; P |] ==> Q"
  FalseE      "False ==> P"
  dn_def      "~~P == P"
  imp_or_def  "P-->Q == ~P | Q"
  or_imp_def  "P | Q == ~P-->Q"
  or_and_def  "P | Q == ~ (~P & ~Q)"
  and_or_def  "P & Q == ~ (~P | ~Q)"
  True_def    "True == False-->False"
  not_imp_def "~P == P-->False"
  iff_def     "P<->Q == (P-->Q) & (Q-->P)"
  eq_reflection  "(x=y) ==> (x==y)"
  iff_reflection "(P<->Q) ==> (P==Q)"

  val disjD1 = "[| ?P | ?Q; ~ ?P |] ==> ?Q" : thm
  val disjD2 = "[| ?Q | ?P; ~ ?P |] ==> ?Q" : thm
  val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm
  val mp = "[| ?P1 --> ?Q; ?P1 |] ==> ?Q" : thm
  val mt = "[| ?P3 --> ?Q; ~ ?Q |] ==> ~ ?P3" : thm

  qed_goalw "impI" LPF.thy [defl_def, imp_or_def]
    "[| P ==> Q; defl(P) |] ==> P-->Q"

  val imp_refl = "defl(?P) ==> ?P --> ?P" : thm
  val imp_transm = "[| ?P1 --> ?Q2; ?Q2 ==> ?Q1 |] ==> ?P1 --> ?Q1 [logic]" : thm
  val imp_trans = "[| ?P1 --> ?Q2; ?Q2 --> ?Q1 |] ==> ?P1 --> ?Q1 [logic]" : thm
  val contrapos = "[| ?P7 ==> ?P; defl(?P7); ~ ?P |] ==> ~ ?P7 [logic]" : thm
  val excl_middle = "defl(?P) ==> ?P --> ?P" : thm
  val notI = "[| ?P1 ==> False; defl(?P1) |] ==> ~ ?P1" : thm

  qed_goalw "classical" LPF.thy [defl_def]
    "[| ~P ==> P; defl(P) |] ==> P"

so it's a minority of LPF rules that need any definedness checking, and we'd want to be able to use the majority without any requirement that the terms involved be defined. Would that be possible under the approach you've got in mind?

I suppose you could say that the line I'm working on is to reduce the impact of the need to do definedness checking by finding useful rules and tactics that don't require it.

Thanks again for your comments.

Jeremy

(PS BTW, I won't be replying to any email for a fortnight now)


> > The idea is to have ordinary
> > classical logic present, and use fast_tac as usual, and also have some rewrite
> > rules for mapping general LPF formulas to classical formulas once they are
> > shown to be defined.

> Do you mean having two separate sets of operators, eg
> |, &, --> etc for LPF, and another lot of corresponding ones for
> classical, with some sort of guarantee that the latter ones would
> not be applied to undefined arguments? If so, how would this
> guarantee work (eg in the case of terms entered by the user as goals)?

I can't say that I have ever spelt out the details. In my application, the primary logic was classical HOL, and I needed reflection into a type of booleans that included undefined elements. So one normally expressed formulae in HOL. There was no need for separate versions of the connectives at the boolean level.

With LPF one can work with formulae like x=0 | x/x=1. I'm not certain how to handle these.

I hadn't realized that one could formalize LPF directly with so few definedness conditions. That being the case, one might try to use the classical reasoner directly, using it to prove definedness conditions too. It is not so different from the way it already works, e.g. in ZF. In other words, your approach might be better after all.

-- Larry


Jacob,

Yes, thanks, I got your mail. I've managed to get tactics which make use of the 'replaceable' relation I mentioned earlier. Using 'idemp_or', which is rep(P|P, P), I can essentially use that to simplify an expression involving P|P to one involving P, without having to do explicit definedness checks.
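To see why a rule like idemp_or needs no definedness side-condition, one can evaluate the connectives over the three truth values directly. The fragment below is only a small Standard ML sketch of that reading - the names tv, knot, kor and repv are illustrative, not taken from the LPF theory files - and it checks that rep(P|P, P) holds for all three values, and that | and ~ preserve replaceability:

  (* toy three-valued model: T = true, F = false, U = undefined *)
  datatype tv = T | F | U;

  (* Kleene negation and disjunction *)
  fun knot T = F
    | knot F = T
    | knot U = U;

  fun kor (T, _) = T
    | kor (_, T) = T
    | kor (F, F) = F
    | kor (_, _) = U;

  (* rep (x, y): if x denotes (is not U) then x equals y *)
  fun repv (x, y) = x = U orelse x = y;

  val tvs = [T, F, U];

  (* rep(P|P, P) holds for every P, with no definedness assumption *)
  val idemp_or_ok = List.all (fn p => repv (kor (p, p), p)) tvs;

  (* rep(P,Q) implies rep(P|R, Q|R), rep(R|P, R|Q) and rep(~P, ~Q) *)
  val or_preserves_rep =
    List.all (fn p => List.all (fn q => List.all (fn r =>
        not (repv (p, q)) orelse
        (repv (kor (p, r), kor (q, r)) andalso
         repv (kor (r, p), kor (r, q)) andalso
         repv (knot p, knot q))) tvs) tvs) tvs;

Both checks evaluate to true; the U cases go through precisely because rep makes no claim when its first argument is undefined.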
It can only be used in forward proof mode, so for the demo I put in an irrelevant goal "X". Although in fact P|P == P, so in this particular case the result could be achieved using ordinary rewriting, the functions shown below would be equally applicable for rep (a,b) where a may be undefined when b is not - for example, replacing a/a by 1 (where they're not identical when a is 0). Some of the code (which is a bit of a mess, but I've included it in case you want to look at it), and then some examples (labelled "(* demo"), follow.

Regards,
Jeremy

val oT = Type ("o", []) ;
val Trpp = Const ("Trueprop", oT --> propT) ;

(* dest_rep : term -> term * term,
   takes term of the form rep (a,b) and returns (a, b) *)
fun dest_rep (Const ("Trueprop", _) $ (Const ("rep", _) $ l $ r)) = (l, r)
  | dest_rep (Const ("Trueprop", _) $ (Const ("rept", _) $ l $ r)) = (l, r)
  | dest_rep tm = raise TERM ("dest_rep: not rep(_,_) or rept(_,_)", [tm]);

(* rep_to_def : term -> term,
   takes term of the form rep (a,b) and returns term of the form a==b *)
val rep_to_def = Logic.mk_equals o dest_rep

(* rdthm : thm -> thm, as above, from and to theorems *)
fun rdthm rth =
  let val {sign, prop, ...} = rep_thm rth
  in trivial (cterm_of sign (rep_to_def prop)) end ;

fun rt (Const ("Trueprop",_) $ eo) = eo;
val st = Sign.string_of_term (sign_of thy);
val tot = term_of o cprop_of ;

(* mtcht : term * term -> (indexname * typ) list * (indexname * term) list,
   mtcht t1 t2 returns lists of assignments to type and term variables of t1
   which will make it the same as t2 *)
val mtcht = Pattern.match (#tsig (Sign.rep_sg (sign_of thy))) ;

(* fun subst_whole pat rep obj = ren_inst (mtcht (pat, obj), rep, pat, obj); *)
fun subst_whole pat rep obj = subst_vars (mtcht (pat, obj)) rep ;

(* subst_rep : term -> term -> term -> string list -> term,
   subst_rep pat rep obj [] gives obj, with subterms corresponding to pat
   changed to rep *)
fun subst_rep pat rep obj bounds = subst_whole pat rep obj
  handle Pattern.MATCH =>
    case obj of
        t$u => subst_rep pat rep t bounds $ subst_rep pat rep u bounds
      | Abs(a,T,t) =>
          let val b = variant bounds a
              val v = Free("." ^ b,T)
              val t' = subst_rep pat rep (subst_bounds([v],t)) (b::bounds)
          in Abs(a, T, abstract_over(v,t')) end
      | t => t ;

fun mk_rep (Const ("Trueprop", _) $ t1) (Const ("Trueprop", _) $ t2) =
  Trpp $ (Const ("rep", [oT,oT] ---> oT) $ t1 $ t2)

(* rep_sub : thm -> term -> term,
   rule is of the form rep (a,b), returns term rep (obj, sub),
   where sub is obj with instances of a changed to b *)
fun rep_sub rule obj =
  let val (pat, rep) = dest_rep (concl_of rule)
      val sub = subst_rep pat rep obj []
  in mk_rep obj sub end ;

val idemp_or = prove_goal LPF.thy "rep (P|P, P)"
  (fn _ => [rtac repI 1, etac disjE 1, atac 1, atac 1, etac norD1 1]);

(* rep_simp_thm : thm -> term -> thm *)
fun rep_simp_thm reprule prem =
  let val tr = trivial (cterm_of (sign_of thy) (rep_sub reprule prem))
      val u = tacthm (REPEAT (SOMEGOAL rep_tac)) tr
      val v = tacthm (REPEAT (SOMEGOAL (rtac reprule))) u
  in v RS repD end ;

(* rep_simp_tac : thm -> int -> int -> thm -> thm Sequence.seq *)
fun rep_simp_tac reprule sg pr state =
  let val (_,_,sgt,_) = dest_state (state, sg)
      val (prem::_,_) = Logic.strip_prems (pr,[],sgt)
  in dtac (rep_simp_thm reprule prem) sg state end ;

(* rep_simp_rule : thm -> thm -> thm *)
fun rep_simp_rule reprule thm = thm RS (rep_simp_thm reprule (concl_of thm)) ;

EXAMPLES FOLLOW

(* demo
goal thy "X";
by (subgoal_tac "P | P <-> (Q | Q) | (R | R) & (S | S)" 1);
val (_,_,sgt,_) = dest_state (topthm(), 1);
val stsgt = st sgt;
val (prem::_,_) = Logic.strip_prems (1,[],sgt);
val stpr = st prem;
val r = rep_sub idemp_or prem;
val str = st r;
val tr = trivial (cterm_of sign r);
val u = tacthm (REPEAT (SOMEGOAL rep_tac)) tr;
val v = tacthm (REPEAT (SOMEGOAL (rtac idemp_or))) u;
val w = v RS repD;
by (rep_simp_tac idemp_or 1 1);

Level 1
X
 1. P | P <-> (Q | Q) | (R | R) & (S | S) ==> X
 2. P | P <-> (Q | Q) | (R | R) & (S | S)

val stsgt = "P | P <-> (Q | Q) | (R | R) & (S | S) ==> X" : string
val stpr = "P | P <-> (Q | Q) | (R | R) & (S | S)" : string
val str = "rep(P | P <-> (Q | Q) | (R | R) & (S | S), P <-> Q | R & S)"
val tr = "rep(P | P <-> (Q | Q) | (R | R) & (S | S), P <-> Q | R & S) ==>
          rep(P | P <-> (Q | Q) | (R | R) & (S | S), P <-> Q | R & S)" : thm
val u = "[| rep(P | P, P); rep(Q | Q, Q); rep(R | R, R); rep(S | S, S) |] ==>
         rep(P | P <-> (Q | Q) | (R | R) & (S | S), P <-> Q | R & S)" : thm
val v = "rep(P | P <-> (Q | Q) | (R | R) & (S | S), P <-> Q | R & S)" : thm
val w = "P | P <-> (Q | Q) | (R | R) & (S | S) ==> P <-> Q | R & S" : thm

Level 2
X
 1. P <-> Q | R & S ==> X
 2. P | P <-> (Q | Q) | (R | R) & (S | S)
*)

(* demo
val [p] = goal thy "P | P <-> (Q | Q) | (R | R) & (S | S) ==> X";
val rp = rep_simp_rule idemp_or p;

val p = "P | P <-> (Q | Q) | (R | R) & (S | S)
         [P | P <-> (Q | Q) | (R | R) & (S | S)]" : thm
val rp = "P <-> Q | R & S  [P | P <-> (Q | Q) | (R | R) & (S | S)]" : thm
*)


Hallo Jeremy,

some time ago (I have to apologize, it is now nearly 9 months ago), you asked me for some more information on the research I was doing. Well, here it comes:

- we are trying to implement a 3-valued logic, called PPC, as an object-logic for Isabelle
- up to now, I have been able to implement the propositional part of PPC, and I have started working on the classical reasoner
- PPC (the Partial Predicate Calculus) is a 3-valued logic, based on the Kleene connectives (AND and NOT), the universal quantifier FORALL and a non-monotone operator DELTA to express that a formula is defined
- the 3 truth values are True, False and Undefined
- PPC differs from LPF (Barringer, Cheng and Jones) in its definition of VALIDITY:
  1) M is a model for a formula A if the valuation of A is TRUE
  2) A is a consequence of a set of formulas S (S |= A) if A is VALID, which means that the valuation of A is NOT FALSE (so A can be true or undefined)
- the calculus used in PPC is based on the assumption calculus of HERMES; it is a mixture of sequent style and natural deduction style, and the proofs are always done in forward style

So I had to tune all the rules to natural deduction style in order to use Isabelle.

If you need more information, please contact me. Can you also give me some information on the research you are doing?

Cheers,
Dirk


Dirk,

Thanks for your email - it looks interesting. The research I was doing consisted, first, of implementing LPF (though I later discovered that Jacob Frost has done a good deal more in that direction); and secondly, of exploring the properties of the rep(_,_) operator, where rep(P,Q) means that if P is defined, then P = Q.

Many operators, eg |, &, -->, have a rep-preserving property, so, eg, rep(P1,P2) implies rep(P1&Q|R-->S, P2&Q|R-->S); this means that you can do automatic rewriting without checking definedness, eg a rewriting tactic can give you rep((a/a)*b+c, 1*b+c) without any definedness checking along the way.

But I won't be doing much more research - my job is to be abolished.

Regards,
Jeremy Dawson
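As a small footnote to that last point, the same kind of check works for the arithmetic example. The sketch below is illustrative only - pdiv, pmul, padd and repn are made-up names, not from any of the theories discussed - and models non-denoting terms as NONE in Standard ML, confirming that rep(a/a, 1) lifts to rep((a/a)*b+c, 1*b+c) with no definedness checks along the way:

  (* non-denoting arithmetic terms modelled as NONE, denoting ones as SOME n;
     repn is the arithmetic analogue of rep: if the left term denotes, it
     equals the right *)
  fun pdiv (SOME a, SOME b) = if b = 0 then NONE else SOME (a div b)
    | pdiv _ = NONE;
  fun pmul (SOME a, SOME b) = SOME (a * b) | pmul _ = NONE;
  fun padd (SOME a, SOME b) = SOME (a + b) | padd _ = NONE;

  fun repn (x : int option, y) = not (isSome x) orelse x = y;

  (* rep(a/a, 1) holds for every a, including a = 0 ... *)
  val rep_div_self =
    List.all (fn a => repn (pdiv (SOME a, SOME a), SOME 1)) [~2, ~1, 0, 1, 2];

  (* ... and because the strict operators preserve replaceability, it lifts
     to rep((a/a)*b + c, 1*b + c) with no definedness check on a *)
  val rep_compound =
    List.all (fn a => List.all (fn b => List.all (fn c =>
      repn (padd (pmul (pdiv (SOME a, SOME a), SOME b), SOME c),
            padd (pmul (SOME 1, SOME b), SOME c))) [0, 3]) [0, 2]) [~1, 0, 1];

Both values evaluate to true; when a = 0 the left-hand term is NONE and repn makes no claim, exactly as in the logical case.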