TSTP Solution File: ITP398_1 by Princess---230619
%------------------------------------------------------------------------------
% File : Princess---230619
% Problem : ITP001_1 : TPTP v8.1.2. Released v8.1.0.
% Transfm : none
% Format : tptp
% Command : princess -inputFormat=tptp +threads -portfolio=casc +printProof -timeoutSec=%d %s
% Computer : n010.cluster.edu
% Model : x86_64 x86_64
% CPU : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
% Memory : 8042.1875MB
% OS : Linux 3.10.0-693.el7.x86_64
% CPULimit : 300s
% WCLimit : 300s
% DateTime : Thu Aug 31 04:12:00 EDT 2023
% Result : Theorem 48.36s 7.23s
% Output : Proof 80.89s
% Verified :
% SZS Type : -
% Comments :
%------------------------------------------------------------------------------
%----WARNING: Could not form TPTP format derivation
%------------------------------------------------------------------------------
%----ORIGINAL SYSTEM OUTPUT
% 0.00/0.12 % Problem : ITP001_1 : TPTP v8.1.2. Released v8.1.0.
% 0.00/0.13 % Command : princess -inputFormat=tptp +threads -portfolio=casc +printProof -timeoutSec=%d %s
% 0.13/0.34 % Computer : n010.cluster.edu
% 0.13/0.34 % Model : x86_64 x86_64
% 0.13/0.34 % CPU : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
% 0.13/0.34 % Memory : 8042.1875MB
% 0.13/0.34 % OS : Linux 3.10.0-693.el7.x86_64
% 0.13/0.34 % CPULimit : 300
% 0.13/0.34 % WCLimit : 300
% 0.13/0.34 % DateTime : Sun Aug 27 15:27:04 EDT 2023
% 0.13/0.34 % CPUTime :
% 0.20/0.61 ________ _____
% 0.20/0.61 ___ __ \_________(_)________________________________
% 0.20/0.61 __ /_/ /_ ___/_ /__ __ \ ___/ _ \_ ___/_ ___/
% 0.20/0.61 _ ____/_ / _ / _ / / / /__ / __/(__ )_(__ )
% 0.20/0.61 /_/ /_/ /_/ /_/ /_/\___/ \___//____/ /____/
% 0.20/0.61
% 0.20/0.61 A Theorem Prover for First-Order Logic modulo Linear Integer Arithmetic
% 0.20/0.61 (2023-06-19)
% 0.20/0.61
% 0.20/0.61 (c) Philipp Rümmer, 2009-2023
% 0.20/0.61 Contributors: Peter Backeman, Peter Baumgartner, Angelo Brillout, Zafer Esen,
% 0.20/0.61 Amanda Stjerna.
% 0.20/0.61 Free software under BSD-3-Clause.
% 0.20/0.61
% 0.20/0.61 For more information, visit http://www.philipp.ruemmer.org/princess.shtml
% 0.20/0.61
% 0.20/0.61 Loading /export/starexec/sandbox2/benchmark/theBenchmark.p ...
% 0.20/0.62 Running up to 7 provers in parallel.
% 0.20/0.63 Prover 2: Options: +triggersInConjecture +genTotalityAxioms -tightFunctionScopes -clausifier=simple +reverseFunctionalityPropagation +boolFunsAsPreds -triggerStrategy=allMinimalAndEmpty -realRatSaturationRounds=1 -ignoreQuantifiers -constructProofs=never -generateTriggers=all -randomSeed=-1065072994
% 0.20/0.63 Prover 0: Options: +triggersInConjecture +genTotalityAxioms +tightFunctionScopes -clausifier=simple -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=allUni -realRatSaturationRounds=0 -ignoreQuantifiers -constructProofs=never -generateTriggers=all -randomSeed=1042961893
% 0.20/0.64 Prover 1: Options: +triggersInConjecture -genTotalityAxioms -tightFunctionScopes -clausifier=none -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=maximal -realRatSaturationRounds=0 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=-1571432423
% 0.20/0.64 Prover 3: Options: +triggersInConjecture -genTotalityAxioms -tightFunctionScopes -clausifier=none -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=maximal -realRatSaturationRounds=1 +ignoreQuantifiers -constructProofs=never -generateTriggers=all -randomSeed=1922548996
% 0.20/0.64 Prover 5: Options: +triggersInConjecture -genTotalityAxioms +tightFunctionScopes -clausifier=none +reverseFunctionalityPropagation +boolFunsAsPreds -triggerStrategy=allMaximal -realRatSaturationRounds=1 -ignoreQuantifiers -constructProofs=never -generateTriggers=complete -randomSeed=1259561288
% 0.20/0.64 Prover 4: Options: +triggersInConjecture -genTotalityAxioms -tightFunctionScopes -clausifier=simple -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=allUni -realRatSaturationRounds=0 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=1868514696
% 0.20/0.64 Prover 6: Options: -triggersInConjecture -genTotalityAxioms +tightFunctionScopes -clausifier=none +reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=maximalOutermost -realRatSaturationRounds=0 -ignoreQuantifiers -constructProofs=never -generateTriggers=all -randomSeed=-1399714365
% 16.33/3.05 Prover 3: Preprocessing ...
% 16.33/3.06 Prover 0: Preprocessing ...
% 17.73/3.11 Prover 2: Preprocessing ...
% 18.48/3.22 Prover 6: Preprocessing ...
% 18.48/3.23 Prover 1: Preprocessing ...
% 19.10/3.29 Prover 4: Preprocessing ...
% 19.10/3.31 Prover 5: Preprocessing ...
% 40.34/6.14 Prover 1: Warning: ignoring some quantifiers
% 40.34/6.26 Prover 3: Warning: ignoring some quantifiers
% 40.93/6.35 Prover 3: Constructing countermodel ...
% 40.93/6.37 Prover 6: Proving ...
% 40.93/6.37 Prover 1: Constructing countermodel ...
% 46.21/6.99 Prover 4: Warning: ignoring some quantifiers
% 48.36/7.23 Prover 3: proved (6584ms)
% 48.36/7.23
% 48.36/7.23 % SZS status Theorem for /export/starexec/sandbox2/benchmark/theBenchmark.p
% 48.36/7.23
% 48.36/7.23 Prover 7: Options: +triggersInConjecture -genTotalityAxioms +tightFunctionScopes -clausifier=simple +reverseFunctionalityPropagation +boolFunsAsPreds -triggerStrategy=allUni -realRatSaturationRounds=1 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=-236303470
% 48.36/7.25 Prover 6: stopped
% 48.36/7.25 Prover 8: Options: +triggersInConjecture +genTotalityAxioms -tightFunctionScopes -clausifier=none -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=maximal -realRatSaturationRounds=0 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=-200781089
% 49.04/7.32 Prover 4: Constructing countermodel ...
% 52.19/7.78 Prover 5: Proving ...
% 52.19/7.78 Prover 5: stopped
% 52.97/7.84 Prover 10: Options: +triggersInConjecture -genTotalityAxioms +tightFunctionScopes -clausifier=simple -reverseFunctionalityPropagation +boolFunsAsPreds -triggerStrategy=maximal -realRatSaturationRounds=1 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=919308125
% 54.61/8.03 Prover 0: Proving ...
% 54.61/8.03 Prover 0: stopped
% 54.61/8.05 Prover 11: Options: +triggersInConjecture -genTotalityAxioms +tightFunctionScopes -clausifier=simple -reverseFunctionalityPropagation -boolFunsAsPreds -triggerStrategy=allUni -realRatSaturationRounds=1 +ignoreQuantifiers -constructProofs=always -generateTriggers=all -randomSeed=-1509710984
% 57.13/8.50 Prover 7: Preprocessing ...
% 57.13/8.51 Prover 8: Preprocessing ...
% 60.62/8.84 Prover 10: Preprocessing ...
% 62.23/9.09 Prover 11: Preprocessing ...
% 62.23/9.10 Prover 2: Proving ...
% 62.23/9.10 Prover 2: stopped
% 62.23/9.11 Prover 13: Options: +triggersInConjecture -genTotalityAxioms -tightFunctionScopes -clausifier=simple -reverseFunctionalityPropagation +boolFunsAsPreds -triggerStrategy=maximal -realRatSaturationRounds=0 +ignoreQuantifiers -constructProofs=always -generateTriggers=complete -randomSeed=1138197443
% 69.58/9.99 Prover 13: Preprocessing ...
% 73.96/10.56 Prover 1: Found proof (size 123)
% 73.96/10.56 Prover 1: proved (9935ms)
% 73.96/10.57 Prover 4: stopped
% 73.96/10.59 Prover 8: Warning: ignoring some quantifiers
% 74.65/10.68 Prover 13: stopped
% 74.65/10.70 Prover 8: Constructing countermodel ...
% 74.65/10.71 Prover 8: stopped
% 75.56/10.80 Prover 10: Warning: ignoring some quantifiers
% 75.96/10.89 Prover 10: Constructing countermodel ...
% 75.96/10.90 Prover 10: stopped
% 76.98/11.10 Prover 7: Warning: ignoring some quantifiers
% 77.27/11.22 Prover 7: Constructing countermodel ...
% 77.27/11.23 Prover 7: stopped
% 78.80/11.48 Prover 11: Warning: ignoring some quantifiers
% 79.44/11.62 Prover 11: Constructing countermodel ...
% 79.44/11.63 Prover 11: stopped
% 79.44/11.63
% 79.44/11.63 % SZS status Theorem for /export/starexec/sandbox2/benchmark/theBenchmark.p
% 79.44/11.63
% 79.61/11.70 % SZS output start Proof for theBenchmark
% 79.61/11.74 Assumptions after simplification:
% 79.61/11.74 ---------------------------------
% 79.61/11.74
% 79.61/11.74 (axiom23)
% 79.61/11.76 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.76 & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$]
% 79.61/11.76 : ! [v2: Mem_ell2_ccsubspace$] : ! [v3:
% 79.61/11.76 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v4:
% 79.61/11.76 Mem_ell2_mem_ell2_cblinfun$] : ! [v5: Mem_ell2_ccsubspace$] : ( ~
% 79.61/11.76 (fun_app$b(cblinfun_compose$, v0) = v3) | ~ (fun_app$a(v3, v1) = v4) | ~
% 79.61/11.76 (cblinfun_image$(v4, v2) = v5) | ~ Mem_ell2_mem_ell2_cblinfun$(v1) | ~
% 79.61/11.76 Mem_ell2_mem_ell2_cblinfun$(v0) | ~ Mem_ell2_ccsubspace$(v2) | ? [v6:
% 79.61/11.76 Mem_ell2_ccsubspace$] : (cblinfun_image$(v1, v2) = v6 &
% 79.61/11.76 cblinfun_image$(v0, v6) = v5 & Mem_ell2_ccsubspace$(v6) &
% 79.61/11.76 Mem_ell2_ccsubspace$(v5)))
% 79.61/11.76
% 79.61/11.76 (axiom28)
% 79.61/11.77 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.77 & Mem_ell2_mem_ell2_cblinfun$(o7$) & Mem_ell2_mem_ell2_cblinfun$(o5$) &
% 79.61/11.77 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.77 &
% 79.61/11.77 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.77 & Bit_ell2_bit_ell2_cblinfun$(xz$) & ? [v0:
% 79.61/11.77 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.77 Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 79.61/11.77 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 79.61/11.77 (fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v2, o5$) = o7$ &
% 79.61/11.77 comp$(phi$, snd$) = v0 & fun_app$d(v0, xz$) = v1 &
% 79.61/11.77 Mem_ell2_mem_ell2_cblinfun$(v1) &
% 79.61/11.77 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.77 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2))
% 79.61/11.77
% 79.61/11.77 (axiom365)
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(nil$) &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.78 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(cnot$) &
% 79.61/11.78 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.78 &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.78 &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(fst$)
% 79.61/11.78 & Bit$(one$) & Bit_ell2_bit_ell2_cblinfun$(hadamard$) &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun$(pauliZ$) & Bit_ell2_bit_ell2_cblinfun$(pauliX$) &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.78 Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$] : ? [v1:
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 79.61/11.78 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.78 : ? [v3: Mem_ell2_mem_ell2_cblinfun$] : ? [v4: Mem_ell2_mem_ell2_cblinfun$]
% 79.61/11.78 : ? [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 79.61/11.78 [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun$] : ?
% 79.61/11.78 [v8: Mem_ell2_mem_ell2_cblinfun_list$] : ? [v9:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v10: Mem_ell2_mem_ell2_cblinfun$] :
% 79.61/11.78 ? [v11: Mem_ell2_mem_ell2_cblinfun_list$] : ? [v12:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v13:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v14:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : (register_pair$(x$, v1) = v2 &
% 79.61/11.78 cons$(v10, v11) = v14 & cons$(v10, v8) = v13 & cons$(v10, nil$) = v11 &
% 79.61/11.78 cons$(v7, nil$) = v8 & cons$(v6, v11) = v12 & cons$(v6, v8) = v9 &
% 79.61/11.78 teleport$(x$, phi$) = v0 & comp$(phi$, snd$) = v5 & comp$(phi$, fst$) = v1 &
% 79.61/11.78 apply$a(hadamard$, x$) = v4 & apply$a(pauliZ$, v5) = v7 & apply$a(pauliX$,
% 79.61/11.78 v5) = v6 & apply$a(id_cblinfun$, v5) = v10 & apply$(cnot$, v2) = v3 &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun$(v10) & Mem_ell2_mem_ell2_cblinfun$(v7) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun$(v3) & Mem_ell2_mem_ell2_cblinfun_list$(v14) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v13) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v12) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v11) & Mem_ell2_mem_ell2_cblinfun_list$(v9)
% 79.61/11.78 & Mem_ell2_mem_ell2_cblinfun_list$(v8) &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 79.61/11.78 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 79.61/11.78 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 79.61/11.78 & Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$(v0) & ! [v15: Bit$] : !
% 79.61/11.78 [v16: Bit$] : ! [v17: Mem_ell2_mem_ell2_cblinfun$] : ! [v18:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun$] : ! [v19: Mem_ell2_mem_ell2_cblinfun_list$]
% 79.61/11.78 : ! [v20: Mem_ell2_mem_ell2_cblinfun_list$] : (v15 = one$ | ~ (cons$(v18,
% 79.61/11.78 v13) = v19) | ~ (cons$(v17, v19) = v20) | ~ (ifthen$b(v1, v15) =
% 79.61/11.78 v17) | ~ (ifthen$b(x$, v16) = v18) | ~ Bit$(v16) | ~ Bit$(v15) | ?
% 79.61/11.78 [v21: Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ? [v22:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v23:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v24:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v25:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v26:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v27:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v28:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v18, v14) = v25 & cons$(v17,
% 79.61/11.78 v25) = v26 & cons$(v4, v26) = v27 & cons$(v4, v20) = v23 & cons$(v3,
% 79.61/11.78 v27) = v28 & cons$(v3, v23) = v24 & fun_app$k(v0, v15) = v21 &
% 79.61/11.78 fun_app$j(v21, v16) = v22 & Mem_ell2_mem_ell2_cblinfun_list$(v28) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v27) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v26) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v25) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v24) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v23) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v22) &
% 79.61/11.78 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v21) & ( ~ (v16 = one$) | v24 =
% 79.61/11.78 v22) & (v28 = v22 | v16 = one$))) & ! [v15: Bit$] : ! [v16:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun$] : ! [v17: Mem_ell2_mem_ell2_cblinfun$] : !
% 79.61/11.78 [v18: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v19:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ( ~ (cons$(v17, v9) = v18) | ~
% 79.61/11.78 (cons$(v16, v18) = v19) | ~ (ifthen$b(v1, one$) = v16) | ~ (ifthen$b(x$,
% 79.61/11.78 v15) = v17) | ~ Bit$(v15) | ? [v20:
% 79.61/11.78 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ? [v21:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v22:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v23:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v24:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v25:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v26:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : ? [v27:
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v17, v12) = v24 & cons$(v16,
% 79.61/11.78 v24) = v25 & cons$(v4, v25) = v26 & cons$(v4, v19) = v22 & cons$(v3,
% 79.61/11.78 v26) = v27 & cons$(v3, v22) = v23 & fun_app$k(v0, one$) = v20 &
% 79.61/11.78 fun_app$j(v20, v15) = v21 & Mem_ell2_mem_ell2_cblinfun_list$(v27) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v26) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v25) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v24) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v23) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v22) &
% 79.61/11.78 Mem_ell2_mem_ell2_cblinfun_list$(v21) &
% 79.61/11.78 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v20) & ( ~ (v15 = one$) | v23 =
% 79.61/11.78 v21) & (v27 = v21 | v15 = one$))))
% 79.61/11.78
% 79.61/11.78 (axiom366)
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(o6$) & Mem_ell2_mem_ell2_cblinfun$(o7$) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_list$(nil$) & Mem_ell2_ccsubspace$(pre$) &
% 79.61/11.79 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.79 &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.79 & Bit$(one$) & Bit$(b$a) & Bit_ell2_bit_ell2_cblinfun$(pauliZ$) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0: Mem_ell2_ccsubspace$] : ?
% 79.61/11.79 [v1: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v3: Mem_ell2_mem_ell2_cblinfun_list$] :
% 79.61/11.79 ? [v4: Mem_ell2_ccsubspace$] : ? [v5: any] : ? [v6:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun_list$] :
% 79.61/11.79 ? [v8: any] : (cons$(v6, nil$) = v7 & cons$(v2, nil$) = v3 & hoare$(v0, v7,
% 79.61/11.79 v4) = v8 & hoare$(v0, v3, v4) = v5 & comp$(phi$, snd$) = v1 &
% 79.61/11.79 cblinfun_image$(o6$, pre$) = v0 & cblinfun_image$(o7$, pre$) = v4 &
% 79.61/11.79 apply$a(pauliZ$, v1) = v2 & apply$a(id_cblinfun$, v1) = v6 &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v2) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_list$(v7) & Mem_ell2_mem_ell2_cblinfun_list$(v3)
% 79.61/11.79 & Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 79.61/11.79 Mem_ell2_ccsubspace$(v4) & Mem_ell2_ccsubspace$(v0) & ( ~ (one$ = b$a) | v5
% 79.61/11.79 = 0) & (v8 = 0 | one$ = b$a))
% 79.61/11.79
% 79.61/11.79 (axiom367)
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(o6$) & Mem_ell2_mem_ell2_cblinfun$(o5$) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_list$(nil$) & Mem_ell2_ccsubspace$(pre$) &
% 79.61/11.79 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.79 &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.79 & Bit$(one$) & Bit$(a$a) & Bit_ell2_bit_ell2_cblinfun$(pauliX$) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0: Mem_ell2_ccsubspace$] : ?
% 79.61/11.79 [v1: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v3: Mem_ell2_mem_ell2_cblinfun_list$] :
% 79.61/11.79 ? [v4: Mem_ell2_ccsubspace$] : ? [v5: any] : ? [v6:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun_list$] :
% 79.61/11.79 ? [v8: any] : (cons$(v6, nil$) = v7 & cons$(v2, nil$) = v3 & hoare$(v0, v7,
% 79.61/11.79 v4) = v8 & hoare$(v0, v3, v4) = v5 & comp$(phi$, snd$) = v1 &
% 79.61/11.79 cblinfun_image$(o6$, pre$) = v4 & cblinfun_image$(o5$, pre$) = v0 &
% 79.61/11.79 apply$a(pauliX$, v1) = v2 & apply$a(id_cblinfun$, v1) = v6 &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v2) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_list$(v7) & Mem_ell2_mem_ell2_cblinfun_list$(v3)
% 79.61/11.79 & Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 79.61/11.79 Mem_ell2_ccsubspace$(v4) & Mem_ell2_ccsubspace$(v0) & ( ~ (one$ = a$a) | v5
% 79.61/11.79 = 0) & (v8 = 0 | one$ = a$a))
% 79.61/11.79
% 79.61/11.79 (axiom379)
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.79 & Mem_ell2_mem_ell2_cblinfun$(o6$) & Mem_ell2_mem_ell2_cblinfun$(o5$) &
% 79.61/11.79 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.79 &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.79 & Bit$(one$) & Bit$(a$a) & Bit_ell2_bit_ell2_cblinfun$(pauliX$) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v3:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v4: Mem_ell2_mem_ell2_cblinfun$] : ?
% 79.61/11.79 [v5: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v6:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : (fun_app$b(cblinfun_compose$, v4) = v5 &
% 79.61/11.79 fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v5, o5$) = v6 &
% 79.61/11.79 fun_app$a(v2, o5$) = v3 & comp$(phi$, snd$) = v0 & fun_app$d(v0, pauliX$) =
% 79.61/11.79 v1 & fun_app$d(v0, id_cblinfun$) = v4 & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v4) & Mem_ell2_mem_ell2_cblinfun$(v3) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v1) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) & ( ~ (one$ =
% 79.61/11.79 a$a) | v3 = o6$) & (v6 = o6$ | one$ = a$a))
% 79.61/11.79
% 79.61/11.79 (axiom380)
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.79 & Mem_ell2_mem_ell2_cblinfun$(o6$) & Mem_ell2_mem_ell2_cblinfun$(o7$) &
% 79.61/11.79 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.79 &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.79 & Bit$(one$) & Bit$(b$a) & Bit_ell2_bit_ell2_cblinfun$(pauliZ$) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v3:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : ? [v4: Mem_ell2_mem_ell2_cblinfun$] : ?
% 79.61/11.79 [v5: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v6:
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$] : (fun_app$b(cblinfun_compose$, v4) = v5 &
% 79.61/11.79 fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v5, o6$) = v6 &
% 79.61/11.79 fun_app$a(v2, o6$) = v3 & comp$(phi$, snd$) = v0 & fun_app$d(v0, pauliZ$) =
% 79.61/11.79 v1 & fun_app$d(v0, id_cblinfun$) = v4 & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v4) & Mem_ell2_mem_ell2_cblinfun$(v3) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun$(v1) &
% 79.61/11.79 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 79.61/11.79 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) & ( ~ (one$ =
% 79.61/11.80 b$a) | v3 = o7$) & (v6 = o7$ | one$ = b$a))
% 79.61/11.80
% 79.61/11.80 (axiom436)
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.80 &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.80 & Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.80 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : ? [v2:
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.61/11.80 : (register_pair$e(x$, phi$) = v1 & tensor_op$(id_cblinfun$) = v2 &
% 79.61/11.80 comp$(phi$, snd$) = v0 &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(v2)
% 79.61/11.80 &
% 79.61/11.80 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.80 & ! [v3: Bit_ell2_bit_ell2_cblinfun$] : ! [v4:
% 79.61/11.80 Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$d(v0, v3) = v4) | ~
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun$(v3) | ? [v5:
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v6:
% 79.61/11.80 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 79.61/11.80 (fun_app$ao(v1, v6) = v4 & tensor_op$l(id_cblinfun$, v5) = v6 &
% 79.61/11.80 fun_app$e(v2, v3) = v5 & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v5) &
% 79.61/11.80 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v6))))
% 79.61/11.80
% 79.61/11.80 (axiom439)
% 79.61/11.80 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(b$) &
% 79.61/11.80 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(a$) &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.80 & Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$(id_cblinfun$c) &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.80 & ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 79.61/11.80 [v1:
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : ? [v2:
% 79.61/11.80 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : (register_pair$i(a$, b$) = v2 & register_pair$(x$, v0) = v1 & comp$(phi$,
% 79.61/11.80 snd$) = v0 &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.80 &
% 79.61/11.80 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 79.61/11.80 & ! [v3: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v4:
% 79.61/11.80 Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(v1, v3) = v4) | ~
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v3) | ? [v5:
% 79.61/11.80 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.80 : (register_pair$h(v1, v2, v5) = v4 & tensor_op$m(v3, id_cblinfun$c) = v5
% 79.61/11.80 & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 79.61/11.80 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v5))))
% 79.61/11.80
% 79.61/11.80 (axiom445)
% 79.61/11.80 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(b$) &
% 79.61/11.80 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(a$) &
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$(assoc$b)
% 79.61/11.80 & Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.80 &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.80 &
% 79.61/11.80 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$(assoc$a)
% 79.61/11.80 & Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.80 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : ? [v2:
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : ? [v3:
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : ? [v4:
% 79.61/11.80 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.80 : (register_pair$i(a$, b$) = v4 & register_pair$(x$, v0) = v3 & comp$(phi$,
% 79.61/11.80 snd$) = v0 & register_pair$b(v0, a$) = v1 & register_pair$a(v1, b$) = v2 &
% 79.61/11.80 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 79.61/11.80 &
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 79.61/11.80 &
% 79.61/11.80 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.80 &
% 79.61/11.80 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 79.61/11.80 & ! [v5:
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 79.61/11.80 ! [v6: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(v2, v5) = v6) | ~
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v5)
% 79.61/11.80 | ? [v7:
% 79.61/11.80 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.80 : ? [v8:
% 79.61/11.80 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 79.61/11.80 : ? [v9:
% 79.61/11.80 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.80 : (fun_app$ar(assoc$b, v5) = v7 & fun_app$aq(assoc$a, v8) = v9 &
% 79.61/11.80 register_pair$h(v3, v4, v9) = v6 & tensor_op$j(id_cblinfun$, v7) = v8 &
% 79.61/11.80 Mem_ell2_mem_ell2_cblinfun$(v6) &
% 79.61/11.80 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v7)
% 79.61/11.80 &
% 79.61/11.80 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v8)
% 79.61/11.80 &
% 79.61/11.80 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v9))))
% 79.61/11.80
% 79.61/11.80 (axiom471)
% 79.61/11.80 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(b$) &
% 79.61/11.80 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(a$) &
% 79.61/11.80 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$(assoc$b)
% 79.61/11.80 & Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(swap$)
% 79.61/11.80 &
% 79.61/11.80 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.80 &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$(assoc$a)
% 79.61/11.81 &
% 79.61/11.81 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$(id$)
% 79.61/11.81 & Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) & ? [v0:
% 79.61/11.81 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v1:
% 79.61/11.81 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 79.61/11.81 [v3:
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v4:
% 79.61/11.81 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v5:
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.61/11.81 : (register_tensor$(swap$, id$) = v5 & register_pair$i(a$, b$) = v4 &
% 79.61/11.81 register_pair$(x$, v2) = v3 & comp$(phi$, snd$) = v2 & register_pair$b(x$,
% 79.61/11.81 a$) = v0 & register_pair$a(v0, b$) = v1 &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 79.61/11.81 &
% 79.61/11.81 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$(v5)
% 79.61/11.81 &
% 79.61/11.81 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0)
% 79.61/11.81 &
% 79.61/11.81 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 79.61/11.81 & ! [v6:
% 79.61/11.81 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 79.61/11.81 ! [v7: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(v1, v6) = v7) | ~
% 79.61/11.81 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v6)
% 79.61/11.81 | ? [v8:
% 79.61/11.81 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.81 : ? [v9:
% 79.61/11.81 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 79.61/11.81 : ? [v10:
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.81 : ? [v11:
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.61/11.81 : (fun_app$at(v5, v10) = v11 & fun_app$ar(assoc$b, v6) = v8 &
% 79.61/11.81 fun_app$aq(assoc$a, v9) = v10 & register_pair$h(v3, v4, v11) = v7 &
% 79.61/11.81 tensor_op$j(id_cblinfun$, v8) = v9 & Mem_ell2_mem_ell2_cblinfun$(v7) &
% 79.61/11.81 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v8)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v9)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v11)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v10))))
% 79.61/11.81
% 79.61/11.81 (axiom472)
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$(id$a) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(swap$)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.81 &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.81 & Bit_ell2_bit_ell2_cblinfun$(id_cblinfun$) &
% 79.61/11.81 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$(assoc$)
% 79.61/11.81 & ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 79.61/11.81 [v1:
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v2:
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v3:
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.61/11.81 : (register_tensor$a(id$a, swap$) = v3 & register_pair$e(x$, phi$) = v2 &
% 79.61/11.81 register_pair$(x$, v0) = v1 & comp$(phi$, snd$) = v0 &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 79.61/11.81 &
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$(v3)
% 79.61/11.81 & ! [v4: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v5:
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(v1, v4) = v5) | ~
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v4) | ? [v6:
% 79.61/11.81 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ?
% 79.61/11.81 [v7: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ?
% 79.61/11.81 [v8: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 79.61/11.81 (fun_app$au(v3, v7) = v8 & fun_app$ap(assoc$, v6) = v7 & fun_app$ao(v2,
% 79.61/11.81 v8) = v5 & tensor_op$k(v4, id_cblinfun$) = v6 &
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun$(v5) &
% 79.61/11.81 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$(v6) &
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v8) &
% 79.61/11.81 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v7))))
% 79.61/11.81
% 79.61/11.81 (axiom605)
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.81 & Mem_ell2_mem_ell2_cblinfun$(o7$) &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(uswap$) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.81 &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.81 & Num$(one$c) & Bit_bit_prod_ell2$(beta_00$) & Complex$(one$b) & Bit$(a$a) &
% 79.61/11.81 Bit$(b$a) & Num_num_fun$(bit0$) & ? [v0: Num$] : ? [v1: Complex$] : ? [v2:
% 79.61/11.81 Complex$] : ? [v3:
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v4:
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v5:
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.81 : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun$]
% 79.61/11.81 : ? [v8: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 79.61/11.81 [v9: Bit_ell2$] : ? [v10: Bit_ell2$] : ? [v11: Bit_bit_prod_ell2$] : ?
% 79.61/11.81 [v12: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v13:
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun$] : (numeral$(v0) = v1 & divide$(one$b, v1) = v2
% 79.61/11.81 & scaleC$(v2) = v3 & tensor_ell2$(v9, v10) = v11 & fun_app$w(bit0$, one$c) =
% 79.61/11.81 v0 & register_pair$(x$, v4) = v5 & fun_app$b(cblinfun_compose$, v7) = v8 &
% 79.61/11.81 fun_app$a(v8, v13) = o7$ & fun_app$a(v3, v6) = v7 & comp$(phi$, snd$) = v4 &
% 79.61/11.81 ket$b(a$a) = v9 & ket$b(b$a) = v10 & butterfly$a(v11, beta_00$) = v12 &
% 79.61/11.81 fun_app$(v5, uswap$) = v6 & fun_app$(phi$, v12) = v13 &
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun$(v13) & Mem_ell2_mem_ell2_cblinfun$(v7) &
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun$(v6) &
% 79.61/11.81 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v12) & Bit_ell2$(v10) &
% 79.61/11.81 Bit_ell2$(v9) &
% 79.61/11.81 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5)
% 79.61/11.81 & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v8) &
% 79.61/11.81 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3) & Num$(v0) &
% 79.61/11.81 Bit_bit_prod_ell2$(v11) & Complex$(v2) & Complex$(v1))
% 79.61/11.81
% 79.61/11.81 (axiom607)
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.82 & Mem_ell2_mem_ell2_cblinfun$(o5$) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$(adj$) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(uswap$) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.82 &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.82 & Num$(one$c) & Bit_bit_prod_ell2$(beta_00$) & Complex$(one$b) & Bit$(a$a) &
% 79.61/11.82 Bit$(b$a) & Bit_ell2_bit_ell2_cblinfun$(xz$) & Num_num_fun$(bit0$) & ? [v0:
% 79.61/11.82 Num$] : ? [v1: Complex$] : ? [v2: Complex$] : ? [v3:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v4:
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v5:
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun$] : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ?
% 79.61/11.82 [v7: Mem_ell2_mem_ell2_cblinfun$] : ? [v8:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v9:
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.82 : ? [v10: Mem_ell2_mem_ell2_cblinfun$] : ? [v11:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$] : ? [v12:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v13:
% 79.61/11.82 Bit_ell2$] : ? [v14: Bit_ell2$] : ? [v15: Bit_bit_prod_ell2$] : ? [v16:
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v17:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$] : (numeral$(v0) = v1 & divide$(one$b, v1) = v2
% 79.61/11.82 & scaleC$(v2) = v3 & tensor_ell2$(v13, v14) = v15 & fun_app$w(bit0$, one$c)
% 79.61/11.82 = v0 & register_pair$(x$, v4) = v9 & fun_app$f(adj$, xz$) = v5 &
% 79.61/11.82 fun_app$b(cblinfun_compose$, v11) = v12 & fun_app$b(cblinfun_compose$, v7) =
% 79.61/11.82 v8 & fun_app$a(v12, v17) = o5$ & fun_app$a(v8, v10) = v11 & fun_app$a(v3,
% 79.61/11.82 v6) = v7 & comp$(phi$, snd$) = v4 & ket$b(a$a) = v13 & ket$b(b$a) = v14 &
% 79.61/11.82 butterfly$a(v15, beta_00$) = v16 & fun_app$d(v4, v5) = v6 & fun_app$(v9,
% 79.61/11.82 uswap$) = v10 & fun_app$(phi$, v16) = v17 &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$(v17) & Mem_ell2_mem_ell2_cblinfun$(v11) &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$(v10) & Mem_ell2_mem_ell2_cblinfun$(v7) &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$(v6) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v16) & Bit_ell2$(v14) &
% 79.61/11.82 Bit_ell2$(v13) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v9)
% 79.61/11.82 & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v12) &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v8) &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3) & Num$(v0) &
% 79.61/11.82 Bit_bit_prod_ell2$(v15) & Complex$(v2) & Complex$(v1) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun$(v5))
% 79.61/11.82
% 79.61/11.82 (conjecture0)
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$(cblinfun_compose$)
% 79.61/11.82 & Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(b$) &
% 79.61/11.82 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(a$) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(x$) &
% 79.61/11.82 Mem_ell2_ccsubspace$(top$) & Bit_atype_prod_btype_prod_ell2$(psi$) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(uswap$) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(phi$)
% 79.61/11.82 &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(snd$)
% 79.61/11.82 & Bit_bit_prod_ell2$(beta_00$) & Bit$(a$a) & Bit$(b$a) & ? [v0:
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v1:
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.82 : ? [v2: Mem_ell2_mem_ell2_cblinfun$] : ? [v3:
% 79.61/11.82 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.82 : ? [v4:
% 79.61/11.82 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.61/11.82 : ? [v5:
% 79.61/11.82 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 79.61/11.82 ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v8:
% 79.61/11.82 Bit_bit_prod$] : ? [v9: Bit_bit_prod_ell2$] : ? [v10:
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v11:
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$] : ? [v12: Mem_ell2_mem_ell2_cblinfun$] : ?
% 79.61/11.82 [v13: Mem_ell2_ccsubspace$] : ? [v14: Mem_ell2_ccsubspace$] : ? [v15:
% 79.61/11.82 Mem_ell2_ccsubspace$] : ? [v16: Mem_ell2_ccsubspace$] : ? [v17:
% 79.61/11.82 Mem_ell2_ccsubspace$] : ( ~ (v17 = v14) & register_pair$(x$, v0) = v1 &
% 79.61/11.82 fun_app$b(cblinfun_compose$, v6) = v7 & fun_app$a(v7, v11) = v12 &
% 79.61/11.82 pair$(a$a, b$a) = v8 & comp$(phi$, snd$) = v0 & register_pair$b(x$, a$) = v3
% 79.61/11.82 & register_pair$a(v3, b$) = v4 & cblinfun_image$(v12, top$) = v13 &
% 79.61/11.82 cblinfun_image$(v11, top$) = v15 & cblinfun_image$(v6, v15) = v16 &
% 79.61/11.82 cblinfun_image$(v2, v16) = v17 & cblinfun_image$(v2, v13) = v14 & ket$(v8) =
% 79.61/11.82 v9 & butterfly$a(v9, beta_00$) = v10 & butterfly$(psi$, psi$) = v5 &
% 79.61/11.82 fun_app$c(v4, v5) = v6 & fun_app$(v1, uswap$) = v2 & fun_app$(phi$, v10) =
% 79.61/11.82 v11 & Mem_ell2_mem_ell2_cblinfun$(v12) & Mem_ell2_mem_ell2_cblinfun$(v11) &
% 79.61/11.82 Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v2) &
% 79.61/11.82 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 79.61/11.82 Mem_ell2_ccsubspace$(v17) & Mem_ell2_ccsubspace$(v16) &
% 79.61/11.82 Mem_ell2_ccsubspace$(v15) & Mem_ell2_ccsubspace$(v14) &
% 79.61/11.82 Mem_ell2_ccsubspace$(v13) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v10) &
% 79.61/11.82 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 79.61/11.82 &
% 79.61/11.82 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 79.61/11.82 & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v7) &
% 79.61/11.82 Bit_bit_prod_ell2$(v9) &
% 79.61/11.82 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 79.61/11.82 &
% 79.61/11.82 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v5)
% 79.61/11.82 & Bit_bit_prod$(v8))
% 79.61/11.82
% 79.61/11.82 (function-axioms)
% 79.98/11.91 ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (teleport_locale$d(v5, v4, v3, v2) = v1) | ~ (teleport_locale$d(v5,
% 79.98/11.91 v4, v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 79.98/11.91 MultipleValueBool] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (teleport_locale$c(v5, v4, v3, v2) = v1) | ~ (teleport_locale$c(v5,
% 79.98/11.91 v4, v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 79.98/11.91 MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (teleport_locale$b(v5, v4, v3, v2) = v1) | ~ (teleport_locale$b(v5,
% 79.98/11.91 v4, v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 79.98/11.91 MultipleValueBool] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (teleport_locale$a(v5, v4, v3, v2) = v1) | ~ (teleport_locale$a(v5,
% 79.98/11.91 v4, v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 79.98/11.91 MultipleValueBool] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v5: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (teleport_locale$(v5, v4, v3, v2) = v1) | ~ (teleport_locale$(v5,
% 79.98/11.91 v4, v3, v2) = v0)) & ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$x(v4, v3, v2) = v1) | ~ (register_pair$x(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (register_pair$w(v4, v3, v2) = v1) | ~ (register_pair$w(v4, v3, v2)
% 79.98/11.91 = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$e(v4, v3, v2) = v1) | ~
% 79.98/11.91 (register_tensor$e(v4, v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$c(v4, v3, v2) = v1) | ~
% 79.98/11.91 (register_tensor$c(v4, v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$b(v4, v3, v2) = v1) | ~
% 79.98/11.91 (register_tensor$b(v4, v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$]
% 79.98/11.91 : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_atype_prod_ell2_bit_bit_prod_atype_prod_ell2_cblinfun$] : !
% 79.98/11.91 [v3: Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$s(v4, v3, v2) = v1) | ~ (register_pair$s(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_atype_prod_prod_btype_prod_ell2_bit_bit_atype_prod_prod_btype_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3: Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v4:
% 79.98/11.91 Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$p(v4, v3, v2) = v1) | ~ (register_pair$p(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$h(v4, v3, v2) = v1) | ~ (register_pair$h(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$g(v4, v3, v2) = v1) | ~ (register_pair$g(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$f(v4, v3, v2) = v1) | ~ (register_pair$f(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$d(v4, v3, v2) = v1) | ~ (register_pair$d(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v4:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_pair$c(v4, v3, v2) = v1) | ~ (register_pair$c(v4,
% 79.98/11.91 v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool]
% 79.98/11.91 : ! [v2: Mem_ell2_ccsubspace$] : ! [v3: Mem_ell2_mem_ell2_cblinfun_list$] :
% 79.98/11.91 ! [v4: Mem_ell2_ccsubspace$] : (v1 = v0 | ~ (hoare$(v4, v3, v2) = v1) | ~
% 79.98/11.91 (hoare$(v4, v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun_list$] : !
% 79.98/11.91 [v1: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v2:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (append$(v3, v2) = v1) |
% 79.98/11.91 ~ (append$(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fold$(v3, v2) = v1) | ~ (fold$(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Complex$] : ! [v1: Complex$] : ! [v2: Complex$] : ! [v3: Complex$] : (v1
% 79.98/11.91 = v0 | ~ (divide$(v3, v2) = v1) | ~ (divide$(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2$] : ! [v1: Bit_bit_prod_ell2$] : ! [v2: Bit_ell2$] : !
% 79.98/11.91 [v3: Bit_ell2$] : (v1 = v0 | ~ (tensor_ell2$(v3, v2) = v1) | ~
% 79.98/11.91 (tensor_ell2$(v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 79.98/11.91 MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (compatible$(v3, v2) = v1) | ~ (compatible$(v3, v2) = v0)) & !
% 79.98/11.91 [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (compatible$a(v3, v2) = v1) | ~ (compatible$a(v3, v2) = v0))
% 79.98/11.91 & ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 79.98/11.91 (compatible$b(v3, v2) = v1) | ~ (compatible$b(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (compatible$c(v3, v2) = v1) | ~ (compatible$c(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 79.98/11.91 (compatible$d(v3, v2) = v1) | ~ (compatible$d(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (compatible$e(v3, v2) = v1) | ~ (compatible$e(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (compatible$f(v3, v2) = v1) | ~ (compatible$f(v3, v2) = v0))
% 79.98/11.91 & ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 79.98/11.91 (v1 = v0 | ~ (compatible$g(v3, v2) = v1) | ~ (compatible$g(v3, v2) = v0)) &
% 79.98/11.91 ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (compatible$h(v3, v2) = v1) | ~ (compatible$h(v3, v2) = v0))
% 79.98/11.91 & ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 79.98/11.91 (v1 = v0 | ~ (compatible$i(v3, v2) = v1) | ~ (compatible$i(v3, v2) = v0)) &
% 79.98/11.91 ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (compatible$j(v3, v2) = v1) | ~ (compatible$j(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (compatible$k(v3, v2) = v1) | ~ (compatible$k(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$be(v3, v2) = v1) | ~ (fun_app$be(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] : !
% 79.98/11.91 [v2: Mem_ell2_mem_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 79.98/11.91 (fun_app$bd(v3, v2) = v1) | ~ (fun_app$bd(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (register_pair$v(v3, v2) = v1) | ~ (register_pair$v(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun$] : ! [v3: Atype_ell2_atype_ell2_cblinfun$]
% 79.98/11.91 : (v1 = v0 | ~ (tensor_op$ae(v3, v2) = v1) | ~ (tensor_op$ae(v3, v2) = v0))
% 79.98/11.91 & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$]
% 79.98/11.91 : ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$bc(v3, v2) = v1) | ~ (fun_app$bc(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun$] : ! [v3: Bit_ell2_bit_ell2_cblinfun$] :
% 79.98/11.91 (v1 = v0 | ~ (tensor_op$ad(v3, v2) = v1) | ~ (tensor_op$ad(v3, v2) = v0)) &
% 79.98/11.91 ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] :
% 79.98/11.91 ! [v2: Atype_ell2_atype_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (fun_app$bb(v3, v2) = v1) | ~ (fun_app$bb(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 79.98/11.91 [v2: Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$az(v3, v2) = v1) | ~ (fun_app$az(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 79.98/11.91 [v2: Btype_ell2_btype_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (fun_app$ba(v3, v2) = v1) | ~ (fun_app$ba(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$al(v3, v2) = v1) | ~ (comp$al(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$aj(v3, v2) = v1) | ~ (comp$aj(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ak(v3, v2) = v1) | ~ (comp$ak(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$p(v3, v2) = v1) | ~ (register_tensor$p(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$q(v3, v2) = v1) | ~ (register_tensor$q(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ai(v3, v2) = v1) | ~ (comp$ai(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (register_tensor$r(v3, v2) = v1) | ~ (register_tensor$r(v3, v2) = v0)) &
% 79.98/11.91 ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] :
% 79.98/11.91 ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$ay(v3, v2) = v1) | ~ (fun_app$ay(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ag(v3, v2) = v1) | ~ (comp$ag(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ah(v3, v2) = v1) | ~ (comp$ah(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ae(v3, v2) = v1) | ~ (comp$ae(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$af(v3, v2) = v1) | ~ (comp$af(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ac(v3, v2) = v1) | ~ (comp$ac(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ad(v3, v2) = v1) | ~ (comp$ad(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$aa(v3, v2) = v1) | ~ (comp$aa(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$ab(v3, v2) = v1) | ~ (comp$ab(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$x(v3, v2) = v1) | ~ (comp$x(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$y(v3, v2) = v1) | ~ (comp$y(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$z(v3, v2) = v1) | ~ (comp$z(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$u(v3, v2) = v1) | ~ (comp$u(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Atype_ell2_atype_ell2_cblinfun_atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$v(v3, v2) = v1) | ~ (comp$v(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$w(v3, v2) = v1) | ~ (comp$w(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$r(v3, v2) = v1) | ~ (comp$r(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$s(v3, v2) = v1) | ~ (comp$s(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$t(v3, v2) = v1) | ~ (comp$t(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$o(v3, v2) = v1) | ~ (comp$o(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Btype_ell2_btype_ell2_cblinfun_btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$p(v3, v2) = v1) | ~ (comp$p(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$q(v3, v2) = v1) | ~ (comp$q(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (register_tensor$o(v3, v2) = v1) | ~ (register_tensor$o(v3, v2) =
% 79.98/11.91 v0)) & ! [v0:
% 79.98/11.91 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (register_tensor$n(v3, v2) = v1) | ~ (register_tensor$n(v3, v2) = v0)) &
% 79.98/11.91 ! [v0:
% 79.98/11.91 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.91 : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 79.98/11.91 (tensor_op$ac(v3, v2) = v1) | ~ (tensor_op$ac(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$m(v3, v2) = v1) | ~ (register_tensor$m(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun$] :
% 79.98/11.91 ! [v1:
% 79.98/11.91 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun$] :
% 79.98/11.91 ! [v2: Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 79.98/11.91 (tensor_op$ab(v3, v2) = v1) | ~ (tensor_op$ab(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$l(v3, v2) = v1) | ~ (register_tensor$l(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$k(v3, v2) = v1) | ~ (register_tensor$k(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (register_tensor$j(v3, v2) = v1) | ~ (register_tensor$j(v3, v2) = v0)) &
% 79.98/11.91 ! [v0:
% 79.98/11.91 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 79.98/11.91 ! [v1:
% 79.98/11.91 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 79.98/11.91 ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$aa(v3, v2) = v1) |
% 79.98/11.91 ~ (tensor_op$aa(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (register_tensor$i(v3, v2) = v1) | ~ (register_tensor$i(v3, v2) =
% 79.98/11.91 v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (register_tensor$h(v3, v2) = v1) | ~ (register_tensor$h(v3,
% 79.98/11.91 v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 |
% 79.98/11.91 ~ (register_tensor$g(v3, v2) = v1) | ~ (register_tensor$g(v3, v2) = v0)) &
% 79.98/11.91 ! [v0:
% 79.98/11.91 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.91 v0 | ~ (register_tensor$f(v3, v2) = v1) | ~ (register_tensor$f(v3, v2) =
% 79.98/11.91 v0)) & ! [v0: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 79.98/11.91 [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 79.98/11.91 (comp$n(v3, v2) = v1) | ~ (comp$n(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v1:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$m(v3, v2) = v1) | ~ (comp$m(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (comp$l(v3, v2) = v1) | ~ (comp$l(v3, v2) = v0)) & ! [v0:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] : !
% 79.98/11.91 [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$ax(v3, v2) = v1) | ~ (fun_app$ax(v3, v2) = v0)) & !
% 79.98/11.91 [v0: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 79.98/11.91 Bit_ell2_bit_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : (v1 = v0 | ~ (fun_app$aw(v3, v2) = v1) | ~ (fun_app$aw(v3, v2) = v0)) & !
% 79.98/11.91 [v0:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v1:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v2:
% 79.98/11.91 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.91 : ! [v3:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (comp$k(v3, v2) = v1) | ~ (comp$k(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_tensor$d(v3, v2) = v1) | ~ (register_tensor$d(v3,
% 79.98/11.92 v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (fun_app$av(v3, v2) = v1) | ~ (fun_app$av(v3, v2) = v0)) & !
% 79.98/11.92 [v0:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_pair$u(v3, v2) = v1) | ~ (register_pair$u(v3, v2) =
% 79.98/11.92 v0)) & ! [v0:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Btype_ell2_btype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Atype_ell2_atype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_pair$t(v3, v2) = v1) | ~ (register_pair$t(v3, v2) =
% 79.98/11.92 v0)) & ! [v0:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.92 v0 | ~ (register_tensor$a(v3, v2) = v1) | ~ (register_tensor$a(v3, v2) =
% 79.98/11.92 v0)) & ! [v0:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (fun_app$au(v3, v2) = v1) | ~ (fun_app$au(v3, v2) = v0)) & !
% 79.98/11.92 [v0:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_tensor$(v3, v2) = v1) | ~ (register_tensor$(v3, v2)
% 79.98/11.92 = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (fun_app$at(v3, v2) = v1) | ~ (fun_app$at(v3, v2) = v0)) & !
% 79.98/11.92 [v0:
% 79.98/11.92 Bit_bit_bit_atype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_bit_atype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2: Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$z(v3, v2) =
% 79.98/11.92 v1) | ~ (tensor_op$z(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 79.98/11.92 ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$y(v3, v2) = v1)
% 79.98/11.92 | ~ (tensor_op$y(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 79.98/11.92 [v3:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_pair$r(v3, v2) = v1) | ~ (register_pair$r(v3, v2) =
% 79.98/11.92 v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 79.98/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 79.98/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v3:
% 79.98/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (fun_app$as(v3, v2) = v1) | ~ (fun_app$as(v3, v2) = v0)) & !
% 79.98/11.92 [v0:
% 79.98/11.92 Bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$x(v3, v2) =
% 79.98/11.92 v1) | ~ (tensor_op$x(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_bit_atype_prod_btype_prod_prod_ell2_bit_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 79.98/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$w(v3, v2) =
% 79.98/11.92 v1) | ~ (tensor_op$w(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 79.98/11.92 v0 | ~ (register_pair$q(v3, v2) = v1) | ~ (register_pair$q(v3, v2) = v0))
% 79.98/11.92 & ! [v0:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (comp$j(v3, v2) = v1) | ~ (comp$j(v3, v2) = v0)) & ! [v0:
% 79.98/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_pair$o(v3, v2) = v1) | ~ (register_pair$o(v3, v2) =
% 79.98/11.92 v0)) & ! [v0:
% 79.98/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : (v1 = v0 | ~ (register_pair$n(v3, v2) = v1) | ~ (register_pair$n(v3, v2) =
% 79.98/11.92 v0)) & ! [v0:
% 79.98/11.92 Atype_ell2_atype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v1:
% 79.98/11.92 Atype_ell2_atype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v2:
% 79.98/11.92 Atype_ell2_atype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 79.98/11.92 : ! [v3:
% 79.98/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$h(v3, v2) = v1) | ~ (comp$h(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Atype_ell2_atype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register_pair$m(v3, v2) = v1) | ~ (register_pair$m(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$i(v3, v2) = v1) | ~ (comp$i(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register_pair$l(v3, v2) = v1) | ~ (register_pair$l(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register_pair$k(v3, v2) = v1) | ~ (register_pair$k(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$g(v3, v2) = v1) | ~ (comp$g(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register_pair$j(v3, v2) = v1) | ~ (register_pair$j(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v1:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v2:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ar(v3, v2) = v1) | ~ (fun_app$ar(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$aq(v3, v2) = v1) | ~ (fun_app$aq(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v3:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ap(v3, v2) = v1) | ~ (fun_app$ap(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v3:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ao(v3, v2) = v1) | ~ (fun_app$ao(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$an(v3, v2) = v1) | ~ (fun_app$an(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (cblinfun_compose$v(v3, v2) = v1) | ~ (cblinfun_compose$v(v3,
% 80.68/11.92 v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v1:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v2:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (cblinfun_compose$u(v3, v2) = v1) | ~ (cblinfun_compose$u(v3,
% 80.68/11.92 v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (cblinfun_compose$t(v3, v2) = v1) | ~ (cblinfun_compose$t(v3,
% 80.68/11.92 v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (cblinfun_compose$s(v3, v2) = v1) | ~ (cblinfun_compose$s(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$v(v3, v2) = v1) | ~ (tensor_op$v(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (cblinfun_compose$r(v3, v2) = v1) | ~ (cblinfun_compose$r(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (cblinfun_compose$q(v3, v2) = v1) | ~ (cblinfun_compose$q(v3, v2) =
% 80.68/11.92 v0)) & ! [v0:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$u(v3, v2) = v1) | ~
% 80.68/11.92 (tensor_op$u(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (cblinfun_compose$p(v3, v2) = v1) | ~ (cblinfun_compose$p(v3, v2) =
% 80.68/11.92 v0)) & ! [v0: Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 =
% 80.68/11.92 v0 | ~ (tensor_op$t(v3, v2) = v1) | ~ (tensor_op$t(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$o(v3, v2) = v1) | ~ (cblinfun_compose$o(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] : (v1 =
% 80.68/11.92 v0 | ~ (tensor_op$s(v3, v2) = v1) | ~ (tensor_op$s(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$n(v3, v2) = v1) | ~ (cblinfun_compose$n(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] : (v1 =
% 80.68/11.92 v0 | ~ (tensor_op$r(v3, v2) = v1) | ~ (tensor_op$r(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_mem_prod_ell2_mem_mem_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$m(v3, v2) = v1) | ~ (cblinfun_compose$m(v3, v2) = v0)) &
% 80.68/11.92 ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 80.68/11.92 [v3: Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (register_pair$i(v3, v2) = v1) | ~ (register_pair$i(v3, v2) = v0))
% 80.68/11.92 & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$q(v3, v2) = v1) | ~ (tensor_op$q(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$p(v3, v2) = v1) | ~ (tensor_op$p(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (register_pair$e(v3, v2) = v1) | ~ (register_pair$e(v3, v2) = v0))
% 80.68/11.92 & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$o(v3, v2) = v1) | ~ (tensor_op$o(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$n(v3, v2) = v1) | ~ (tensor_op$n(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$am(v3, v2) = v1) | ~ (fun_app$am(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$al(v3, v2) = v1) | ~ (fun_app$al(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ak(v3, v2) = v1) | ~ (fun_app$ak(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$aj(v3, v2) = v1) | ~ (fun_app$aj(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$m(v3, v2) = v1) | ~ (tensor_op$m(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ai(v3, v2) = v1) | ~ (fun_app$ai(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$l(v3, v2) = v1) | ~
% 80.68/11.92 (tensor_op$l(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ah(v3, v2) = v1) | ~ (fun_app$ah(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$k(v3, v2) = v1) | ~ (tensor_op$k(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$j(v3, v2) = v1)
% 80.68/11.92 | ~ (tensor_op$j(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_atype_prod_ell2$] : ! [v3: Bit_atype_prod_ell2$] : (v1 = v0 | ~
% 80.68/11.92 (butterfly$m(v3, v2) = v1) | ~ (butterfly$m(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun$] : ! [v1: Btype_ell2_btype_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Btype_ell2$] : ! [v3: Btype_ell2$] : (v1 = v0 | ~ (butterfly$n(v3,
% 80.68/11.92 v2) = v1) | ~ (butterfly$n(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v1:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Btype_ell2_btype_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (tensor_op$i(v3, v2) = v1) | ~ (tensor_op$i(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_btype_prod$] : ! [v1: Bit_atype_prod_btype_prod$] : ! [v2:
% 80.68/11.92 Btype$] : ! [v3: Bit_atype_prod$] : (v1 = v0 | ~ (pair$q(v3, v2) = v1) |
% 80.68/11.92 ~ (pair$q(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$h(v3, v2) =
% 80.68/11.92 v1) | ~ (tensor_op$h(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2$] : ! [v3: Bit_bit_prod_ell2$] : (v1 =
% 80.68/11.92 v0 | ~ (butterfly$l(v3, v2) = v1) | ~ (butterfly$l(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$g(v3, v2) =
% 80.68/11.92 v1) | ~ (tensor_op$g(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod$] : ! [v1: Bit_bit_prod_bit_bit_prod_prod$]
% 80.68/11.92 : ! [v2: Bit_bit_prod$] : ! [v3: Bit_bit_prod$] : (v1 = v0 | ~ (pair$p(v3,
% 80.68/11.92 v2) = v1) | ~ (pair$p(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2$] : ! [v3: Bit_bit_prod_bit_bit_prod_prod_ell2$] : (v1 =
% 80.68/11.92 v0 | ~ (butterfly$k(v3, v2) = v1) | ~ (butterfly$k(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_prod_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$f(v3, v2) =
% 80.68/11.92 v1) | ~ (tensor_op$f(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2$] : ! [v3: Bit_bit_bit_prod_prod_ell2$] : (v1 =
% 80.68/11.92 v0 | ~ (butterfly$j(v3, v2) = v1) | ~ (butterfly$j(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_bit_prod_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$e(v3, v2) =
% 80.68/11.92 v1) | ~ (tensor_op$e(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2$] : ! [v3: Bit_bit_prod_bit_prod_ell2$] : (v1 =
% 80.68/11.92 v0 | ~ (butterfly$i(v3, v2) = v1) | ~ (butterfly$i(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v3: Bit_ell2_bit_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (tensor_op$d(v3, v2) = v1) | ~ (tensor_op$d(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2$] : ! [v3: Bit_bit_prod_ell2$] : (v1 = v0 | ~
% 80.68/11.92 (butterfly$h(v3, v2) = v1) | ~ (butterfly$h(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3: Bit_ell2_bit_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (tensor_op$c(v3, v2) = v1) | ~ (tensor_op$c(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_bit_prod_prod$] : ! [v1: Bit_bit_bit_prod_prod$] : ! [v2:
% 80.68/11.92 Bit_bit_prod$] : ! [v3: Bit$] : (v1 = v0 | ~ (pair$o(v3, v2) = v1) | ~
% 80.68/11.92 (pair$o(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2$] : ! [v3: Bit_bit_bit_prod_prod_ell2$] : (v1 = v0 | ~
% 80.68/11.92 (butterfly$g(v3, v2) = v1) | ~ (butterfly$g(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v2: Bit_bit_prod_ell2$] : !
% 80.68/11.92 [v3: Bit_ell2$] : (v1 = v0 | ~ (butterfly$e(v3, v2) = v1) | ~
% 80.68/11.92 (butterfly$e(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3: Bit_bit_prod_ell2_bit_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (tensor_op$b(v3, v2) = v1) | ~ (tensor_op$b(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_prod_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2$] : ! [v3: Bit_bit_prod_ell2$] : (v1 = v0 | ~
% 80.68/11.92 (butterfly$f(v3, v2) = v1) | ~ (butterfly$f(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2: Bit_ell2$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2$] : (v1 = v0 | ~ (butterfly$c(v3, v2) = v1) | ~
% 80.68/11.92 (butterfly$c(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3: Bit_ell2_bit_bit_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (tensor_op$a(v3, v2) = v1) | ~ (tensor_op$a(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_prod_bit_prod$] : ! [v1: Bit_bit_prod_bit_prod$] : ! [v2:
% 80.68/11.92 Bit$] : ! [v3: Bit_bit_prod$] : (v1 = v0 | ~ (pair$n(v3, v2) = v1) | ~
% 80.68/11.92 (pair$n(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2$] : ! [v3: Bit_bit_prod_bit_prod_ell2$] : (v1 = v0 | ~
% 80.68/11.92 (butterfly$d(v3, v2) = v1) | ~ (butterfly$d(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_mem_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$k(v3, v2) = v1) | ~ (cblinfun_compose$k(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Mem_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$l(v3, v2) = v1) | ~ (cblinfun_compose$l(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_mem_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$i(v3, v2) = v1) | ~ (cblinfun_compose$i(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Mem_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$j(v3, v2) = v1) | ~ (cblinfun_compose$j(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Bit_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$g(v3, v2) = v1) | ~ (cblinfun_compose$g(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$h(v3, v2) = v1) | ~ (cblinfun_compose$h(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$e(v3, v2) = v1) | ~ (cblinfun_compose$e(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$f(v3, v2) = v1) | ~ (cblinfun_compose$f(v3, v2) = v0)) &
% 80.68/11.92 ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Bit_ell2_mem_ell2_cblinfun$] : ! [v3: Mem_ell2_bit_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (cblinfun_compose$c(v3, v2) = v1) | ~ (cblinfun_compose$c(v3,
% 80.68/11.92 v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v2: Mem_ell2_bit_ell2_cblinfun$] : !
% 80.68/11.92 [v3: Bit_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_compose$d(v3, v2)
% 80.68/11.92 = v1) | ~ (cblinfun_compose$d(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_ccsubspace$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_image$c(v3, v2) = v1) | ~ (cblinfun_image$c(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v3: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun$] :
% 80.68/11.92 (v1 = v0 | ~ (pair$m(v3, v2) = v1) | ~ (pair$m(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v3: Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (pair$l(v3, v2) = v1) |
% 80.68/11.92 ~ (pair$l(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v3: Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (pair$k(v3, v2) =
% 80.68/11.92 v1) | ~ (pair$k(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod$]
% 80.68/11.92 : ! [v3: Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~ (pair$j(v3, v2) = v1) |
% 80.68/11.92 ~ (pair$j(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod_prod$]
% 80.68/11.92 : ! [v3: Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (pair$i(v3, v2) =
% 80.68/11.92 v1) | ~ (pair$i(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_prod$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_list_prod$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (pair$h(v3, v2) = v1) | ~ (pair$h(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_option_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_option_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v3: Mem_ell2_mem_ell2_cblinfun_option$] : (v1 = v0 | ~ (pair$g(v3, v2) = v1)
% 80.68/11.92 | ~ (pair$g(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Nat_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Nat_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v3: Nat$] : (v1 = v0 | ~ (pair$f(v3, v2) = v1) | ~ (pair$f(v3, v2) = v0)) &
% 80.68/11.92 ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_option_nat_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_option_nat_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod_prod$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Nat_mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v3: Mem_ell2_mem_ell2_cblinfun_option$] : (v1 = v0 | ~ (pair$e(v3, v2)
% 80.68/11.92 = v1) | ~ (pair$e(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun_mem_ell2_mem_ell2_cblinfun_list_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun_mem_ell2_mem_ell2_cblinfun_list_prod$]
% 80.68/11.92 : ! [v2: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_bool_fun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (pair$d(v3, v2) = v1) | ~ (pair$d(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (cons$a(v3, v2) = v1) | ~
% 80.68/11.92 (cons$a(v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun_list_bool_fun$]
% 80.68/11.92 : ! [v1: Mem_ell2_mem_ell2_cblinfun_list_bool_fun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_bool_fun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$ag(v3, v2) = v1) | ~ (fun_app$ag(v3, v2) = v0)) & !
% 80.68/11.92 [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_bool_fun$] : (v1 = v0 | ~ (fun_app$af(v3,
% 80.68/11.92 v2) = v1) | ~ (fun_app$af(v3, v2) = v0)) & ! [v0: Num$] : ! [v1:
% 80.68/11.92 Num$] : ! [v2: Mem_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Mem_ell2_ccsubspace_num_fun$] : (v1 = v0 | ~ (fun_app$ae(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$ae(v3, v2) = v0)) & ! [v0: Mem_ell2_ccsubspace$] : ! [v1:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v2: Num$] : ! [v3:
% 80.68/11.92 Num_mem_ell2_ccsubspace_fun$] : (v1 = v0 | ~ (fun_app$ad(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$ad(v3, v2) = v0)) & ! [v0: int] : ! [v1: int] : ! [v2: Enat$] :
% 80.68/11.92 ! [v3: Enat_int_fun$] : (v1 = v0 | ~ (fun_app$ac(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$ac(v3, v2) = v0)) & ! [v0: int] : ! [v1: int] : ! [v2: Num$] :
% 80.68/11.92 ! [v3: Num_int_fun$] : (v1 = v0 | ~ (fun_app$ab(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$ab(v3, v2) = v0)) & ! [v0: Enat$] : ! [v1: Enat$] : ! [v2: int]
% 80.68/11.92 : ! [v3: Int_enat_fun$] : (v1 = v0 | ~ (fun_app$aa(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$aa(v3, v2) = v0)) & ! [v0: Enat$] : ! [v1: Enat$] : ! [v2: Num$]
% 80.68/11.92 : ! [v3: Num_enat_fun$] : (v1 = v0 | ~ (fun_app$z(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$z(v3, v2) = v0)) & ! [v0: Num$] : ! [v1: Num$] : ! [v2: int] :
% 80.68/11.92 ! [v3: Int_num_fun$] : (v1 = v0 | ~ (fun_app$y(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$y(v3, v2) = v0)) & ! [v0: Num$] : ! [v1: Num$] : ! [v2: Enat$] :
% 80.68/11.92 ! [v3: Enat_num_fun$] : (v1 = v0 | ~ (fun_app$x(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$x(v3, v2) = v0)) & ! [v0: Num$] : ! [v1: Num$] : ! [v2: Num$] :
% 80.68/11.92 ! [v3: Num_num_fun$] : (v1 = v0 | ~ (fun_app$w(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$w(v3, v2) = v0)) & ! [v0: Int_bool_fun$] : ! [v1: Int_bool_fun$]
% 80.68/11.92 : ! [v2: int] : ! [v3: Int_int_bool_fun_fun$] : (v1 = v0 | ~ (fun_app$v(v3,
% 80.68/11.92 v2) = v1) | ~ (fun_app$v(v3, v2) = v0)) & ! [v0: MultipleValueBool] :
% 80.68/11.92 ! [v1: MultipleValueBool] : ! [v2: int] : ! [v3: Int_bool_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (fun_app$u(v3, v2) = v1) | ~ (fun_app$u(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Int_int_fun$] : ! [v1: Int_int_fun$] : ! [v2: int] : ! [v3:
% 80.68/11.92 Int_int_int_fun_fun$] : (v1 = v0 | ~ (fun_app$t(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$t(v3, v2) = v0)) & ! [v0: int] : ! [v1: int] : ! [v2: int] : !
% 80.68/11.92 [v3: Int_int_fun$] : (v1 = v0 | ~ (fun_app$s(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$s(v3, v2) = v0)) & ! [v0: Enat_enat_fun$] : ! [v1:
% 80.68/11.92 Enat_enat_fun$] : ! [v2: Enat$] : ! [v3: Enat_enat_enat_fun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (fun_app$r(v3, v2) = v1) | ~ (fun_app$r(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Enat$] : ! [v1: Enat$] : ! [v2: Enat$] : ! [v3: Enat_enat_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (fun_app$q(v3, v2) = v1) | ~ (fun_app$q(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Enat_bool_fun$] : ! [v1: Enat_bool_fun$] : ! [v2: Enat$] : ! [v3:
% 80.68/11.92 Enat_enat_bool_fun_fun$] : (v1 = v0 | ~ (fun_app$p(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$p(v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Enat$] : ! [v3: Enat_bool_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (fun_app$o(v3, v2) = v1) | ~ (fun_app$o(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Num_bool_fun$] : ! [v1: Num_bool_fun$] : ! [v2: Num$] : ! [v3:
% 80.68/11.92 Num_num_bool_fun_fun$] : (v1 = v0 | ~ (fun_app$n(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$n(v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Num$] : ! [v3: Num_bool_fun$] : (v1 = v0 | ~
% 80.68/11.92 (fun_app$m(v3, v2) = v1) | ~ (fun_app$m(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Char_list_char_list_fun_mem_ell2_mem_ell2_cblinfun_list_prod$] : ! [v1:
% 80.68/11.92 Char_list_char_list_fun_mem_ell2_mem_ell2_cblinfun_list_prod$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3: Char_list_char_list_fun$] : (v1
% 80.68/11.92 = v0 | ~ (pair$c(v3, v2) = v1) | ~ (pair$c(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_char_list_char_list_fun_fun_char_list_char_list_fun_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_char_list_char_list_fun_fun_char_list_char_list_fun_mem_ell2_mem_ell2_cblinfun_list_prod_prod$]
% 80.68/11.92 : ! [v2: Char_list_char_list_fun_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v3: Mem_ell2_mem_ell2_cblinfun_char_list_char_list_fun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (pair$b(v3, v2) = v1) | ~ (pair$b(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] : !
% 80.68/11.92 [v1: Mem_ell2_mem_ell2_cblinfun_list_mem_ell2_mem_ell2_cblinfun_list_prod$] :
% 80.68/11.92 ! [v2: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (pair$a(v3, v2) = v1) | ~
% 80.68/11.92 (pair$a(v3, v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Mem_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Mem_ell2_ccsubspace$] : (v1 = v0 | ~ (less_eq$(v3, v2) = v1) | ~
% 80.68/11.92 (less_eq$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 80.68/11.92 [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (register_pair$(v3, v2) = v1) | ~ (register_pair$(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (cons$(v3, v2) = v1) | ~ (cons$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$e(v3, v2) = v1) | ~ (comp$e(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$f(v3, v2) = v1) | ~ (comp$f(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (comp$d(v3, v2) = v1) | ~ (comp$d(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (comp$c(v3, v2) = v1) | ~ (comp$c(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (comp$a(v3, v2) = v1) | ~ (comp$a(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : !
% 80.68/11.92 [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$b(v3, v2) = v1) | ~ (comp$b(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2: Bit_bit_prod$] :
% 80.68/11.92 ! [v3: Bit_bit_prod_bool_fun$] : (v1 = v0 | ~ (fun_app$l(v3, v2) = v1) | ~
% 80.68/11.92 (fun_app$l(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$] : ! [v1:
% 80.68/11.92 Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 =
% 80.68/11.92 v0 | ~ (teleport$(v3, v2) = v1) | ~ (teleport$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ! [v1:
% 80.68/11.92 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ! [v2: Bit$] : ! [v3:
% 80.68/11.92 Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (fun_app$k(v3, v2) = v1) | ~ (fun_app$k(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list$] : ! [v2: Bit$] : ! [v3:
% 80.68/11.92 Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : (v1 = v0 | ~ (fun_app$j(v3, v2)
% 80.68/11.92 = v1) | ~ (fun_app$j(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_ccsubspace_mem_ell2_ccsubspace_fun$] : ! [v1:
% 80.68/11.92 Mem_ell2_ccsubspace_mem_ell2_ccsubspace_fun$] : ! [v2:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Mem_ell2_ccsubspace_mem_ell2_ccsubspace_mem_ell2_ccsubspace_fun_fun$] : (v1
% 80.68/11.92 = v0 | ~ (fun_app$i(v3, v2) = v1) | ~ (fun_app$i(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Mem_ell2_ccsubspace_mem_ell2_ccsubspace_fun$] : (v1 = v0 | ~ (fun_app$h(v3,
% 80.68/11.92 v2) = v1) | ~ (fun_app$h(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_ccsubspace$] : ! [v1: Bit_bit_prod_ell2_ccsubspace$] : !
% 80.68/11.92 [v2: Bit_bit_prod_ell2_ccsubspace$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_image$b(v3, v2) = v1) | ~ (cblinfun_image$b(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$g(v3, v2) = v1) | ~ (fun_app$g(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_ell2_ccsubspace$] : ! [v1: Bit_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Bit_ell2_ccsubspace$] : ! [v3: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_image$a(v3, v2) = v1) | ~ (cblinfun_image$a(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (fun_app$f(v3, v2) = v1) | ~ (fun_app$f(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$b(v3, v2) = v1) | ~ (fun_app$b(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Mem_ell2_mem_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (fun_app$a(v3, v2) = v1) | ~ (fun_app$a(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod$] : ! [v1: Bit_bit_prod$] : ! [v2: Bit$] : ! [v3: Bit$] :
% 80.68/11.92 (v1 = v0 | ~ (pair$(v3, v2) = v1) | ~ (pair$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$e(v3, v2) = v1) | ~ (fun_app$e(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (comp$(v3, v2) = v1) | ~ (comp$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 80.68/11.92 [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (register_pair$b(v3, v2) = v1) | ~ (register_pair$b(v3, v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 80.68/11.92 [v3:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register_pair$a(v3, v2) = v1) | ~ (register_pair$a(v3, v2) =
% 80.68/11.92 v0)) & ! [v0: Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : !
% 80.68/11.92 [v2: Bit_ell2$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (eq$b(v3, v2) = v1) | ~ (eq$b(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (eq$a(v3, v2) = v1) | ~ (eq$a(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2$] : ! [v3:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (eq$(v3, v2) = v1) | ~ (eq$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : ! [v2:
% 80.68/11.92 Mem_ell2_ccsubspace$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_image$(v3, v2) = v1) | ~ (cblinfun_image$(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1: Bit_ell2_bit_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2$] : ! [v3: Bit_ell2$] : (v1 = v0 | ~ (butterfly$b(v3, v2) =
% 80.68/11.92 v1) | ~ (butterfly$b(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (ifthen$b(v3, v2) = v1) | ~ (ifthen$b(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2: Bit_bit_prod_ell2$]
% 80.68/11.92 : ! [v3: Bit_bit_prod_ell2$] : (v1 = v0 | ~ (butterfly$a(v3, v2) = v1) | ~
% 80.68/11.92 (butterfly$a(v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v2: Bit_bit_prod$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (ifthen$a(v3, v2) = v1) | ~ (ifthen$a(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v2: Bit_atype_prod_btype_prod_ell2$] : ! [v3:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2$] : (v1 = v0 | ~ (butterfly$(v3, v2) = v1) |
% 80.68/11.92 ~ (butterfly$(v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$] :
% 80.68/11.92 ! [v3:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$c(v3, v2) = v1) | ~ (fun_app$c(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_atype_prod_btype_prod$] : ! [v3:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (ifthen$(v3, v2) = v1) | ~ (ifthen$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v3:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (apply$a(v3, v2) = v1) | ~
% 80.68/11.92 (apply$a(v3, v2) = v0)) & ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v2: Bit_ell2_bit_ell2_cblinfun$] : !
% 80.68/11.92 [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (fun_app$d(v3, v2) = v1) | ~ (fun_app$d(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v3: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (apply$(v3, v2) = v1) | ~ (apply$(v3, v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (fun_app$(v3, v2) = v1) | ~ (fun_app$(v3, v2) = v0)) & !
% 80.68/11.92 [v0: Nat$] : ! [v1: Nat$] : ! [v2: int] : (v1 = v0 | ~ (nat$(v2) = v1) | ~
% 80.68/11.92 (nat$(v2) = v0)) & ! [v0: Enat$] : ! [v1: Enat$] : ! [v2: Nat$] : (v1 =
% 80.68/11.92 v0 | ~ (enat$(v2) = v1) | ~ (enat$(v2) = v0)) & ! [v0: int] : ! [v1:
% 80.68/11.92 int] : ! [v2: Nat$] : (v1 = v0 | ~ (of_nat$(v2) = v1) | ~ (of_nat$(v2) =
% 80.68/11.92 v0)) & ! [v0: Complex$] : ! [v1: Complex$] : ! [v2: Num$] : (v1 = v0 |
% 80.68/11.92 ~ (numeral$(v2) = v1) | ~ (numeral$(v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Complex$] : (v1 = v0 | ~ (scaleC$(v2) = v1) | ~ (scaleC$(v2) = v0)) & !
% 80.68/11.92 [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Mem_ell2_mem_ell2_cblinfun_list$] : (v1 = v0 | ~ (program$(v2) = v1) |
% 80.68/11.92 ~ (program$(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$s(v2) = v1) | ~ (register$s(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (register$r(v2) = v1) | ~ (register$r(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (register$q(v2) = v1) | ~ (register$q(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$p(v2) = v1) | ~ (register$p(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$o(v2) = v1) | ~ (register$o(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$n(v2) = v1) | ~ (register$n(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$m(v2) = v1) | ~ (register$m(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (register$l(v2) = v1) | ~ (register$l(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_ell2_atype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (register$k(v2) = v1) | ~ (register$k(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 | ~
% 80.68/11.92 (register$j(v2) = v1) | ~ (register$j(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Btype_ell2_btype_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : (v1 = v0 |
% 80.68/11.92 ~ (register$i(v2) = v1) | ~ (register$i(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$h(v2) = v1) | ~ (register$h(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$g(v2) = v1) | ~ (register$g(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$f(v2) = v1) | ~ (register$f(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$e(v2) = v1) | ~ (register$e(v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_list_list$] : (v1 = v0 | ~ (product_lists$(v2) =
% 80.68/11.92 v1) | ~ (product_lists$(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2:
% 80.68/11.92 Btype_atype_prod_ell2_btype_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$d(v2) = v1) | ~ (register$d(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_bit_prod_ell2_atype_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$c(v2) = v1) | ~ (register$c(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$b(v2) = v1) | ~ (register$b(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$a(v2) = v1) | ~ (register$a(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Btype_bit_atype_prod_prod_ell2_btype_bit_atype_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.92 : (v1 = v0 | ~ (register$(v2) = v1) | ~ (register$(v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Mem_bit_bit_prod_prod_ell2_mem_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$o(v2) = v1) | ~ (sandwich$o(v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Mem_bit_prod_ell2_mem_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (sandwich$n(v2) = v1) | ~ (sandwich$n(v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Atype_btype_prod_bit_bit_prod_prod_ell2_atype_btype_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (sandwich$m(v2) = v1) | ~ (sandwich$m(v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Atype_btype_prod_bit_prod_ell2_atype_btype_prod_bit_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$l(v2) = v1) | ~ (sandwich$l(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (unitary$d(v2) = v1) | ~ (unitary$d(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_mem_prod_ell2_bit_bit_prod_mem_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$k(v2) = v1) | ~ (sandwich$k(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_mem_prod_ell2_bit_mem_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (sandwich$j(v2) = v1) | ~ (sandwich$j(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (sandwich$i(v2) = v1) | ~ (sandwich$i(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$h(v2) = v1) | ~ (sandwich$h(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (sandwich$g(v2) = v1) | ~ (sandwich$g(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$f(v2) = v1) | ~ (sandwich$f(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (sandwich$e(v2) = v1) | ~ (sandwich$e(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (sandwich$d(v2) = v1) | ~ (sandwich$d(v2) = v0)) & ! [v0:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~ (sandwich$c(v2) = v1) | ~
% 80.68/11.92 (sandwich$c(v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 |
% 80.68/11.92 ~ (sandwich$b(v2) = v1) | ~ (sandwich$b(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (sandwich$a(v2) = v1) | ~ (sandwich$a(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (sandwich$(v2) = v1) | ~
% 80.68/11.92 (sandwich$(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v1: Bit_bit_atype_prod_prod_ell2_bit_bit_atype_prod_prod_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Bit_bit_prod_atype_prod_ell2_bit_bit_prod_atype_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (assoc$i(v2) = v1) | ~ (assoc$i(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_bit_atype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$h(v2) = v1) | ~ (assoc$h(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$g(v2) = v1) | ~ (assoc$g(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_bit_bit_prod_prod_prod_ell2_bit_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2_bit_bit_prod_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$f(v2) = v1) | ~ (assoc$f(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_bit_bit_prod_bit_bit_bit_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_bit_bit_bit_bit_bit_prod_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$e(v2) = v1) | ~ (assoc$e(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_prod_btype_prod_prod_ell2_bit_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_prod_btype_prod_prod_ell2_bit_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_atype_prod_prod_btype_prod_ell2_bit_bit_atype_prod_prod_btype_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$d(v2) = v1) | ~ (assoc$d(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_bit_bit_prod_bit_atype_prod_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_bit_bit_bit_atype_prod_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (assoc$c(v2) = v1) | ~ (assoc$c(v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_mem_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_inv$j(v2) = v1) | ~ (cblinfun_inv$j(v2) = v0)) & ! [v0:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_inv$i(v2) = v1) | ~ (cblinfun_inv$i(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_mem_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Mem_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_inv$h(v2) =
% 80.68/11.92 v1) | ~ (cblinfun_inv$h(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_inv$g(v2) =
% 80.68/11.92 v1) | ~ (cblinfun_inv$g(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_ell2_atype_btype_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_inv$f(v2) = v1) | ~ (cblinfun_inv$f(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_inv$e(v2) =
% 80.68/11.92 v1) | ~ (cblinfun_inv$e(v2) = v0)) & ! [v0: Mem_ell2_bit_ell2_cblinfun$]
% 80.68/11.92 : ! [v1: Mem_ell2_bit_ell2_cblinfun$] : ! [v2: Bit_ell2_mem_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (cblinfun_inv$d(v2) = v1) | ~ (cblinfun_inv$d(v2) = v0)) & !
% 80.68/11.92 [v0: Bit_ell2_mem_ell2_cblinfun$] : ! [v1: Bit_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.92 [v2: Mem_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_inv$c(v2) = v1) |
% 80.68/11.92 ~ (cblinfun_inv$c(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$]
% 80.68/11.92 : (v1 = v0 | ~ (iso_cblinfun$j(v2) = v1) | ~ (iso_cblinfun$j(v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 |
% 80.68/11.92 ~ (cblinfun_compose$w(v2) = v1) | ~ (cblinfun_compose$w(v2) = v0)) & !
% 80.68/11.92 [v0:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2:
% 80.68/11.92 Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$] :
% 80.68/11.92 (v1 = v0 | ~ (snd$b(v2) = v1) | ~ (snd$b(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.92 : ! [v2: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (fst$a(v2) = v1) | ~
% 80.68/11.92 (fst$a(v2) = v0)) & ! [v0: Bit_atype_prod_ell2$] : ! [v1:
% 80.68/11.92 Bit_atype_prod_ell2$] : ! [v2: Bit_atype_prod$] : (v1 = v0 | ~ (ket$f(v2)
% 80.68/11.92 = v1) | ~ (ket$f(v2) = v0)) & ! [v0: Btype_ell2$] : ! [v1: Btype_ell2$]
% 80.68/11.92 : ! [v2: Btype$] : (v1 = v0 | ~ (ket$g(v2) = v1) | ~ (ket$g(v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_bit_bit_prod_prod_ell2$] : ! [v1:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod_ell2$] : ! [v2:
% 80.68/11.92 Bit_bit_prod_bit_bit_prod_prod$] : (v1 = v0 | ~ (ket$e(v2) = v1) | ~
% 80.68/11.92 (ket$e(v2) = v0)) & ! [v0: Bit_bit_bit_prod_prod_ell2$] : ! [v1:
% 80.68/11.92 Bit_bit_bit_prod_prod_ell2$] : ! [v2: Bit_bit_bit_prod_prod$] : (v1 = v0 |
% 80.68/11.92 ~ (ket$d(v2) = v1) | ~ (ket$d(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_bit_prod_ell2$] : ! [v1: Bit_bit_prod_bit_prod_ell2$] : !
% 80.68/11.92 [v2: Bit_bit_prod_bit_prod$] : (v1 = v0 | ~ (ket$c(v2) = v1) | ~ (ket$c(v2)
% 80.68/11.92 = v0)) & ! [v0: MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (iso_cblinfun$i(v2) = v1) | ~ (iso_cblinfun$i(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (iso_cblinfun$h(v2) = v1) | ~ (iso_cblinfun$h(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~ (iso_cblinfun$g(v2) =
% 80.68/11.92 v1) | ~ (iso_cblinfun$g(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Bit_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 =
% 80.68/11.92 v0 | ~ (iso_cblinfun$f(v2) = v1) | ~ (iso_cblinfun$f(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (iso_cblinfun$e(v2) = v1) | ~ (iso_cblinfun$e(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (iso_cblinfun$d(v2) =
% 80.68/11.92 v1) | ~ (iso_cblinfun$d(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (iso_cblinfun$c(v2) = v1) | ~ (iso_cblinfun$c(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~ (iso_cblinfun$b(v2) = v1) | ~
% 80.68/11.92 (iso_cblinfun$b(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Bit_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (iso_cblinfun$a(v2) = v1) | ~ (iso_cblinfun$a(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Mem_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (iso_cblinfun$(v2) = v1) | ~
% 80.68/11.92 (iso_cblinfun$(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (tensor_op$(v2) = v1) |
% 80.68/11.92 ~ (tensor_op$(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2:
% 80.68/11.92 Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (unitary$c(v2) = v1) | ~ (unitary$c(v2) = v0)) & ! [v0: MultipleValueBool]
% 80.68/11.92 : ! [v1: MultipleValueBool] : ! [v2: Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (isometry$b(v2) = v1) | ~ (isometry$b(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (isometry$a(v2) = v1) | ~ (isometry$a(v2) = v0)) & ! [v0:
% 80.68/11.92 MultipleValueBool] : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (isometry$(v2) = v1) | ~
% 80.68/11.92 (isometry$(v2) = v0)) & ! [v0: MultipleValueBool] : ! [v1:
% 80.68/11.92 MultipleValueBool] : ! [v2: Mem_ell2_mem_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (unitary$b(v2) = v1) | ~ (unitary$b(v2) = v0)) & ! [v0: MultipleValueBool]
% 80.68/11.92 : ! [v1: MultipleValueBool] : ! [v2:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (unitary$a(v2) = v1) | ~ (unitary$a(v2) = v0)) & ! [v0: MultipleValueBool]
% 80.68/11.92 : ! [v1: MultipleValueBool] : ! [v2: Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0
% 80.68/11.92 | ~ (unitary$(v2) = v1) | ~ (unitary$(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v1:
% 80.68/11.92 Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.92 : ! [v2: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : (v1 = v0 | ~
% 80.68/11.92 (cblinfun_compose$b(v2) = v1) | ~ (cblinfun_compose$b(v2) = v0)) & ! [v0:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v1:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun_bit_ell2_bit_ell2_cblinfun_fun$] : ! [v2:
% 80.68/11.92 Bit_ell2_bit_ell2_cblinfun$] : (v1 = v0 | ~ (cblinfun_compose$a(v2) = v1) |
% 80.68/11.92 ~ (cblinfun_compose$a(v2) = v0)) & ! [v0: Bit_ell2$] : ! [v1: Bit_ell2$]
% 80.68/11.92 : ! [v2: Bit$] : (v1 = v0 | ~ (ket$b(v2) = v1) | ~ (ket$b(v2) = v0)) & !
% 80.68/11.92 [v0: Bit_bit_prod_ell2$] : ! [v1: Bit_bit_prod_ell2$] : ! [v2:
% 80.68/11.92 Bit_bit_prod$] : (v1 = v0 | ~ (ket$(v2) = v1) | ~ (ket$(v2) = v0)) & !
% 80.68/11.92 [v0: Bit_atype_prod_btype_prod_ell2$] : ! [v1:
% 80.68/11.92 Bit_atype_prod_btype_prod_ell2$] : ! [v2: Bit_atype_prod_btype_prod$] : (v1
% 80.68/11.92 = v0 | ~ (ket$a(v2) = v1) | ~ (ket$a(v2) = v0))
% 80.68/11.93
% 80.68/11.93 Further assumptions not needed in the proof:
% 80.68/11.93 --------------------------------------------
% 80.68/11.93 axiom1, axiom10, axiom100, axiom101, axiom102, axiom103, axiom104, axiom105,
% 80.68/11.93 axiom106, axiom107, axiom108, axiom109, axiom11, axiom110, axiom111, axiom112,
% 80.68/11.93 axiom113, axiom114, axiom115, axiom116, axiom117, axiom118, axiom119, axiom12,
% 80.68/11.93 axiom120, axiom121, axiom122, axiom123, axiom124, axiom125, axiom126, axiom127,
% 80.68/11.93 axiom128, axiom129, axiom13, axiom130, axiom131, axiom132, axiom133, axiom134,
% 80.68/11.93 axiom135, axiom136, axiom137, axiom138, axiom139, axiom14, axiom140, axiom141,
% 80.68/11.93 axiom142, axiom143, axiom144, axiom145, axiom146, axiom147, axiom148, axiom149,
% 80.68/11.93 axiom15, axiom150, axiom151, axiom152, axiom153, axiom154, axiom155, axiom156,
% 80.68/11.93 axiom157, axiom158, axiom159, axiom16, axiom160, axiom161, axiom162, axiom163,
% 80.68/11.93 axiom164, axiom165, axiom166, axiom167, axiom168, axiom169, axiom17, axiom170,
% 80.68/11.93 axiom171, axiom172, axiom173, axiom174, axiom175, axiom176, axiom177, axiom178,
% 80.68/11.93 axiom179, axiom18, axiom180, axiom181, axiom182, axiom183, axiom184, axiom185,
% 80.68/11.93 axiom186, axiom187, axiom188, axiom189, axiom19, axiom190, axiom191, axiom192,
% 80.68/11.93 axiom193, axiom194, axiom195, axiom196, axiom197, axiom198, axiom199, axiom2,
% 80.68/11.93 axiom20, axiom200, axiom201, axiom202, axiom203, axiom204, axiom205, axiom206,
% 80.68/11.93 axiom207, axiom208, axiom209, axiom21, axiom210, axiom211, axiom212, axiom213,
% 80.68/11.93 axiom214, axiom215, axiom216, axiom217, axiom218, axiom219, axiom22, axiom220,
% 80.68/11.93 axiom221, axiom222, axiom223, axiom224, axiom225, axiom226, axiom227, axiom228,
% 80.68/11.93 axiom229, axiom230, axiom231, axiom232, axiom233, axiom234, axiom235, axiom236,
% 80.68/11.93 axiom237, axiom238, axiom239, axiom24, axiom240, axiom241, axiom242, axiom243,
% 80.68/11.93 axiom244, axiom245, axiom246, axiom247, axiom248, axiom249, axiom25, axiom250,
% 80.68/11.93 axiom251, axiom252, axiom253, axiom254, axiom255, axiom256, axiom257, axiom258,
% 80.68/11.93 axiom259, axiom26, axiom260, axiom261, axiom262, axiom263, axiom264, axiom265,
% 80.68/11.93 axiom266, axiom267, axiom268, axiom269, axiom27, axiom270, axiom271, axiom272,
% 80.68/11.93 axiom273, axiom274, axiom275, axiom276, axiom277, axiom278, axiom279, axiom280,
% 80.68/11.93 axiom281, axiom282, axiom283, axiom284, axiom285, axiom286, axiom287, axiom288,
% 80.68/11.93 axiom289, axiom29, axiom290, axiom291, axiom292, axiom293, axiom294, axiom295,
% 80.68/11.93 axiom296, axiom297, axiom298, axiom299, axiom3, axiom30, axiom300, axiom301,
% 80.68/11.93 axiom302, axiom303, axiom304, axiom305, axiom306, axiom307, axiom308, axiom309,
% 80.68/11.93 axiom31, axiom310, axiom311, axiom312, axiom313, axiom314, axiom315, axiom316,
% 80.68/11.93 axiom317, axiom318, axiom319, axiom32, axiom320, axiom321, axiom322, axiom323,
% 80.68/11.93 axiom324, axiom325, axiom326, axiom327, axiom328, axiom329, axiom33, axiom330,
% 80.68/11.93 axiom331, axiom332, axiom333, axiom334, axiom335, axiom336, axiom337, axiom338,
% 80.68/11.93 axiom339, axiom34, axiom340, axiom341, axiom342, axiom343, axiom344, axiom345,
% 80.68/11.93 axiom346, axiom347, axiom348, axiom349, axiom35, axiom350, axiom351, axiom352,
% 80.68/11.93 axiom353, axiom354, axiom355, axiom356, axiom357, axiom358, axiom359, axiom36,
% 80.68/11.93 axiom360, axiom361, axiom362, axiom363, axiom364, axiom368, axiom369, axiom37,
% 80.68/11.93 axiom370, axiom371, axiom372, axiom373, axiom374, axiom375, axiom376, axiom377,
% 80.68/11.93 axiom378, axiom38, axiom381, axiom382, axiom383, axiom384, axiom385, axiom386,
% 80.68/11.93 axiom387, axiom388, axiom389, axiom39, axiom390, axiom391, axiom392, axiom393,
% 80.68/11.93 axiom394, axiom395, axiom396, axiom397, axiom398, axiom399, axiom4, axiom40,
% 80.68/11.93 axiom400, axiom401, axiom402, axiom403, axiom404, axiom405, axiom406, axiom407,
% 80.68/11.93 axiom408, axiom409, axiom41, axiom410, axiom411, axiom412, axiom413, axiom414,
% 80.68/11.93 axiom415, axiom416, axiom417, axiom418, axiom419, axiom42, axiom420, axiom421,
% 80.68/11.93 axiom422, axiom423, axiom424, axiom425, axiom426, axiom427, axiom428, axiom429,
% 80.68/11.93 axiom43, axiom430, axiom431, axiom432, axiom433, axiom434, axiom435, axiom437,
% 80.68/11.93 axiom438, axiom44, axiom440, axiom441, axiom442, axiom443, axiom444, axiom446,
% 80.68/11.93 axiom447, axiom448, axiom449, axiom45, axiom450, axiom451, axiom452, axiom453,
% 80.68/11.93 axiom454, axiom455, axiom456, axiom457, axiom458, axiom459, axiom46, axiom460,
% 80.68/11.93 axiom461, axiom462, axiom463, axiom464, axiom465, axiom466, axiom467, axiom468,
% 80.68/11.93 axiom469, axiom47, axiom470, axiom473, axiom474, axiom475, axiom476, axiom477,
% 80.68/11.93 axiom478, axiom479, axiom48, axiom480, axiom481, axiom482, axiom483, axiom484,
% 80.68/11.93 axiom485, axiom486, axiom487, axiom488, axiom489, axiom49, axiom490, axiom491,
% 80.68/11.93 axiom492, axiom493, axiom494, axiom495, axiom496, axiom497, axiom498, axiom499,
% 80.68/11.93 axiom5, axiom50, axiom500, axiom501, axiom502, axiom503, axiom504, axiom505,
% 80.68/11.93 axiom506, axiom507, axiom508, axiom509, axiom51, axiom510, axiom511, axiom512,
% 80.68/11.93 axiom513, axiom514, axiom515, axiom516, axiom517, axiom518, axiom519, axiom52,
% 80.68/11.93 axiom520, axiom521, axiom522, axiom523, axiom524, axiom525, axiom526, axiom527,
% 80.68/11.93 axiom528, axiom529, axiom53, axiom530, axiom531, axiom532, axiom533, axiom534,
% 80.68/11.93 axiom535, axiom536, axiom537, axiom538, axiom539, axiom54, axiom540, axiom541,
% 80.68/11.93 axiom542, axiom543, axiom544, axiom545, axiom546, axiom547, axiom548, axiom549,
% 80.68/11.93 axiom55, axiom550, axiom551, axiom552, axiom553, axiom554, axiom555, axiom556,
% 80.68/11.93 axiom557, axiom558, axiom559, axiom56, axiom560, axiom561, axiom562, axiom563,
% 80.68/11.93 axiom564, axiom565, axiom566, axiom567, axiom568, axiom569, axiom57, axiom570,
% 80.68/11.93 axiom571, axiom572, axiom573, axiom574, axiom575, axiom576, axiom577, axiom578,
% 80.68/11.93 axiom579, axiom58, axiom580, axiom581, axiom582, axiom583, axiom584, axiom585,
% 80.68/11.93 axiom586, axiom587, axiom588, axiom589, axiom59, axiom590, axiom591, axiom592,
% 80.68/11.93 axiom593, axiom594, axiom595, axiom596, axiom597, axiom598, axiom599, axiom6,
% 80.68/11.93 axiom60, axiom600, axiom601, axiom602, axiom603, axiom604, axiom606, axiom608,
% 80.68/11.93 axiom609, axiom61, axiom610, axiom611, axiom612, axiom613, axiom614, axiom615,
% 80.68/11.93 axiom616, axiom617, axiom618, axiom619, axiom62, axiom620, axiom621, axiom622,
% 80.68/11.93 axiom623, axiom624, axiom63, axiom64, axiom65, axiom66, axiom67, axiom68,
% 80.68/11.93 axiom69, axiom7, axiom70, axiom71, axiom72, axiom73, axiom74, axiom75, axiom76,
% 80.68/11.93 axiom77, axiom78, axiom79, axiom8, axiom80, axiom81, axiom82, axiom83, axiom84,
% 80.68/11.93 axiom85, axiom86, axiom87, axiom88, axiom89, axiom9, axiom90, axiom91, axiom92,
% 80.68/11.93 axiom93, axiom94, axiom95, axiom96, axiom97, axiom98, axiom99
% 80.68/11.93
% 80.68/11.93 Those formulas are unsatisfiable:
% 80.68/11.93 ---------------------------------
% 80.68/11.93
% 80.68/11.93 Begin of proof
% 80.68/11.93 |
% 80.68/11.93 | ALPHA: (axiom23) implies:
% 80.68/11.93 | (1) ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.93 | Mem_ell2_mem_ell2_cblinfun$] : ! [v2: Mem_ell2_ccsubspace$] : !
% 80.68/11.93 | [v3: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : !
% 80.68/11.93 | [v4: Mem_ell2_mem_ell2_cblinfun$] : ! [v5: Mem_ell2_ccsubspace$] : ( ~
% 80.68/11.93 | (fun_app$b(cblinfun_compose$, v0) = v3) | ~ (fun_app$a(v3, v1) = v4)
% 80.68/11.93 | | ~ (cblinfun_image$(v4, v2) = v5) | ~
% 80.68/11.93 | Mem_ell2_mem_ell2_cblinfun$(v1) | ~ Mem_ell2_mem_ell2_cblinfun$(v0)
% 80.68/11.93 | | ~ Mem_ell2_ccsubspace$(v2) | ? [v6: Mem_ell2_ccsubspace$] :
% 80.68/11.93 | (cblinfun_image$(v1, v2) = v6 & cblinfun_image$(v0, v6) = v5 &
% 80.68/11.93 | Mem_ell2_ccsubspace$(v6) & Mem_ell2_ccsubspace$(v5)))
% 80.68/11.93 |
% 80.68/11.93 | ALPHA: (axiom28) implies:
% 80.68/11.93 | (2) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.93 | ? [v1: Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 80.68/11.93 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.93 | (fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v2, o5$) = o7$ &
% 80.68/11.93 | comp$(phi$, snd$) = v0 & fun_app$d(v0, xz$) = v1 &
% 80.68/11.93 | Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.68/11.93 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.93 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2))
% 80.68/11.93 |
% 80.68/11.93 | ALPHA: (axiom365) implies:
% 80.68/11.94 | (3) ? [v0: Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$] : ? [v1:
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v3: Mem_ell2_mem_ell2_cblinfun$] : ? [v4:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v5:
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v6:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.94 | ? [v8: Mem_ell2_mem_ell2_cblinfun_list$] : ? [v9:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v10:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v11:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v12:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v13:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v14:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : (register_pair$(x$, v1) = v2 &
% 80.68/11.94 | cons$(v10, v11) = v14 & cons$(v10, v8) = v13 & cons$(v10, nil$) = v11
% 80.68/11.94 | & cons$(v7, nil$) = v8 & cons$(v6, v11) = v12 & cons$(v6, v8) = v9 &
% 80.68/11.94 | teleport$(x$, phi$) = v0 & comp$(phi$, snd$) = v5 & comp$(phi$, fst$)
% 80.68/11.94 | = v1 & apply$a(hadamard$, x$) = v4 & apply$a(pauliZ$, v5) = v7 &
% 80.68/11.94 | apply$a(pauliX$, v5) = v6 & apply$a(id_cblinfun$, v5) = v10 &
% 80.68/11.94 | apply$(cnot$, v2) = v3 & Mem_ell2_mem_ell2_cblinfun$(v10) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v7) & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v4) & Mem_ell2_mem_ell2_cblinfun$(v3) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v14) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v13) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v12) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v11) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v9) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v8) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 80.68/11.94 | & Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$(v0) & ! [v15:
% 80.68/11.94 | Bit$] : ! [v16: Bit$] : ! [v17: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.68/11.94 | [v18: Mem_ell2_mem_ell2_cblinfun$] : ! [v19:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ! [v20:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : (v15 = one$ | ~ (cons$(v18,
% 80.68/11.94 | v13) = v19) | ~ (cons$(v17, v19) = v20) | ~ (ifthen$b(v1,
% 80.68/11.94 | v15) = v17) | ~ (ifthen$b(x$, v16) = v18) | ~ Bit$(v16) | ~
% 80.68/11.94 | Bit$(v15) | ? [v21: Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ?
% 80.68/11.94 | [v22: Mem_ell2_mem_ell2_cblinfun_list$] : ? [v23:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v24:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v25:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v26:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v27:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v28:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v18, v14) = v25 &
% 80.68/11.94 | cons$(v17, v25) = v26 & cons$(v4, v26) = v27 & cons$(v4, v20) =
% 80.68/11.94 | v23 & cons$(v3, v27) = v28 & cons$(v3, v23) = v24 & fun_app$k(v0,
% 80.68/11.94 | v15) = v21 & fun_app$j(v21, v16) = v22 &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v28) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v27) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v26) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v25) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v24) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v23) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v22) &
% 80.68/11.94 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v21) & ( ~ (v16 = one$)
% 80.68/11.94 | | v24 = v22) & (v28 = v22 | v16 = one$))) & ! [v15: Bit$] : !
% 80.68/11.94 | [v16: Mem_ell2_mem_ell2_cblinfun$] : ! [v17:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ! [v18:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ! [v19:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ( ~ (cons$(v17, v9) = v18) | ~
% 80.68/11.94 | (cons$(v16, v18) = v19) | ~ (ifthen$b(v1, one$) = v16) | ~
% 80.68/11.94 | (ifthen$b(x$, v15) = v17) | ~ Bit$(v15) | ? [v20:
% 80.68/11.94 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ? [v21:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v22:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v23:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v24:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v25:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v26:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v27:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v17, v12) = v24 &
% 80.68/11.94 | cons$(v16, v24) = v25 & cons$(v4, v25) = v26 & cons$(v4, v19) =
% 80.68/11.94 | v22 & cons$(v3, v26) = v27 & cons$(v3, v22) = v23 & fun_app$k(v0,
% 80.68/11.94 | one$) = v20 & fun_app$j(v20, v15) = v21 &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v27) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v26) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v25) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v24) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v23) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v22) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v21) &
% 80.68/11.94 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v20) & ( ~ (v15 = one$)
% 80.68/11.94 | | v23 = v21) & (v27 = v21 | v15 = one$))))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom366) implies:
% 80.68/11.94 | (4) ? [v0: Mem_ell2_ccsubspace$] : ? [v1:
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v3:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v4: Mem_ell2_ccsubspace$] :
% 80.68/11.94 | ? [v5: any] : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v8: any] : (cons$(v6, nil$) =
% 80.68/11.94 | v7 & cons$(v2, nil$) = v3 & hoare$(v0, v7, v4) = v8 & hoare$(v0, v3,
% 80.68/11.94 | v4) = v5 & comp$(phi$, snd$) = v1 & cblinfun_image$(o6$, pre$) = v0
% 80.68/11.94 | & cblinfun_image$(o7$, pre$) = v4 & apply$a(pauliZ$, v1) = v2 &
% 80.68/11.94 | apply$a(id_cblinfun$, v1) = v6 & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v2) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v7) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v3) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 80.68/11.94 | Mem_ell2_ccsubspace$(v4) & Mem_ell2_ccsubspace$(v0) & ( ~ (one$ =
% 80.68/11.94 | b$a) | v5 = 0) & (v8 = 0 | one$ = b$a))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom367) implies:
% 80.68/11.94 | (5) ? [v0: Mem_ell2_ccsubspace$] : ? [v1:
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v2:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v3:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v4: Mem_ell2_ccsubspace$] :
% 80.68/11.94 | ? [v5: any] : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v8: any] : (cons$(v6, nil$) =
% 80.68/11.94 | v7 & cons$(v2, nil$) = v3 & hoare$(v0, v7, v4) = v8 & hoare$(v0, v3,
% 80.68/11.94 | v4) = v5 & comp$(phi$, snd$) = v1 & cblinfun_image$(o6$, pre$) = v4
% 80.68/11.94 | & cblinfun_image$(o5$, pre$) = v0 & apply$a(pauliX$, v1) = v2 &
% 80.68/11.94 | apply$a(id_cblinfun$, v1) = v6 & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v2) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v7) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_list$(v3) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1) &
% 80.68/11.94 | Mem_ell2_ccsubspace$(v4) & Mem_ell2_ccsubspace$(v0) & ( ~ (one$ =
% 80.68/11.94 | a$a) | v5 = 0) & (v8 = 0 | one$ = a$a))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom379) implies:
% 80.68/11.94 | (6) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v1: Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v3:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v4: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.94 | ? [v5: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v6: Mem_ell2_mem_ell2_cblinfun$] : (fun_app$b(cblinfun_compose$, v4)
% 80.68/11.94 | = v5 & fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v5, o5$) =
% 80.68/11.94 | v6 & fun_app$a(v2, o5$) = v3 & comp$(phi$, snd$) = v0 & fun_app$d(v0,
% 80.68/11.94 | pauliX$) = v1 & fun_app$d(v0, id_cblinfun$) = v4 &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v3) & Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) & ( ~
% 80.68/11.94 | (one$ = a$a) | v3 = o6$) & (v6 = o6$ | one$ = a$a))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom380) implies:
% 80.68/11.94 | (7) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v1: Mem_ell2_mem_ell2_cblinfun$] : ? [v2:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ? [v3:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ? [v4: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.94 | ? [v5: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v6: Mem_ell2_mem_ell2_cblinfun$] : (fun_app$b(cblinfun_compose$, v4)
% 80.68/11.94 | = v5 & fun_app$b(cblinfun_compose$, v1) = v2 & fun_app$a(v5, o6$) =
% 80.68/11.94 | v6 & fun_app$a(v2, o6$) = v3 & comp$(phi$, snd$) = v0 & fun_app$d(v0,
% 80.68/11.94 | pauliZ$) = v1 & fun_app$d(v0, id_cblinfun$) = v4 &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$(v3) & Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5) &
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) & ( ~
% 80.68/11.94 | (one$ = b$a) | v3 = o7$) & (v6 = o7$ | one$ = b$a))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom436) implies:
% 80.68/11.94 | (8) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v1:
% 80.68/11.94 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v2:
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.94 | : (register_pair$e(x$, phi$) = v1 & tensor_op$(id_cblinfun$) = v2 &
% 80.68/11.94 | comp$(phi$, snd$) = v0 &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(v2)
% 80.68/11.94 | &
% 80.68/11.94 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.94 | & ! [v3: Bit_ell2_bit_ell2_cblinfun$] : ! [v4:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$d(v0, v3) = v4) | ~
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun$(v3) | ? [v5:
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v6:
% 80.68/11.94 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.94 | : (fun_app$ao(v1, v6) = v4 & tensor_op$l(id_cblinfun$, v5) = v6 &
% 80.68/11.94 | fun_app$e(v2, v3) = v5 & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v5) &
% 80.68/11.94 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v6))))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom439) implies:
% 80.68/11.94 | (9) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v1:
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v2:
% 80.68/11.94 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : (register_pair$i(a$, b$) = v2 & register_pair$(x$, v0) = v1 &
% 80.68/11.94 | comp$(phi$, snd$) = v0 &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.94 | &
% 80.68/11.94 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 80.68/11.94 | & ! [v3: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v4:
% 80.68/11.94 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(v1, v3) = v4) | ~
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v3) | ? [v5:
% 80.68/11.94 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.94 | : (register_pair$h(v1, v2, v5) = v4 & tensor_op$m(v3,
% 80.68/11.94 | id_cblinfun$c) = v5 & Mem_ell2_mem_ell2_cblinfun$(v4) &
% 80.68/11.94 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v5))))
% 80.68/11.94 |
% 80.68/11.94 | ALPHA: (axiom445) implies:
% 80.68/11.94 | (10) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.94 | ? [v1:
% 80.68/11.94 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v2:
% 80.68/11.94 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v3:
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : ? [v4:
% 80.68/11.94 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.94 | : (register_pair$i(a$, b$) = v4 & register_pair$(x$, v0) = v3 &
% 80.68/11.94 | comp$(phi$, snd$) = v0 & register_pair$b(v0, a$) = v1 &
% 80.68/11.94 | register_pair$a(v1, b$) = v2 &
% 80.68/11.94 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.94 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 80.68/11.94 | &
% 80.68/11.94 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 80.68/11.94 | &
% 80.68/11.94 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.94 | &
% 80.68/11.94 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 80.68/11.94 | & ! [v5:
% 80.68/11.94 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.68/11.94 | : ! [v6: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(v2, v5) =
% 80.68/11.94 | v6) | ~
% 80.68/11.94 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v5)
% 80.68/11.94 | | ? [v7:
% 80.68/11.94 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.94 | : ? [v8:
% 80.68/11.94 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.94 | : ? [v9:
% 80.68/11.94 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.94 | : (fun_app$ar(assoc$b, v5) = v7 & fun_app$aq(assoc$a, v8) = v9 &
% 80.68/11.94 | register_pair$h(v3, v4, v9) = v6 & tensor_op$j(id_cblinfun$, v7)
% 80.68/11.94 | = v8 & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.94 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v7)
% 80.68/11.94 | &
% 80.68/11.95 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v8)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v9))))
% 80.68/11.95 |
% 80.68/11.95 | ALPHA: (axiom471) implies:
% 80.68/11.95 | (11) ? [v0:
% 80.68/11.95 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v1:
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v3:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v4:
% 80.68/11.95 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v5:
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.95 | : (register_tensor$(swap$, id$) = v5 & register_pair$i(a$, b$) = v4 &
% 80.68/11.95 | register_pair$(x$, v2) = v3 & comp$(phi$, snd$) = v2 &
% 80.68/11.95 | register_pair$b(x$, a$) = v0 & register_pair$a(v0, b$) = v1 &
% 80.68/11.95 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$(v5)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0)
% 80.68/11.95 | &
% 80.68/11.95 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 80.68/11.95 | & ! [v6:
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.68/11.95 | : ! [v7: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(v1, v6) =
% 80.68/11.95 | v7) | ~
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v6)
% 80.68/11.95 | | ? [v8:
% 80.68/11.95 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v9:
% 80.68/11.95 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v10:
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v11:
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : (fun_app$at(v5, v10) = v11 & fun_app$ar(assoc$b, v6) = v8 &
% 80.68/11.95 | fun_app$aq(assoc$a, v9) = v10 & register_pair$h(v3, v4, v11) =
% 80.68/11.95 | v7 & tensor_op$j(id_cblinfun$, v8) = v9 &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v7) &
% 80.68/11.95 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v8)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v9)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v11)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v10))))
% 80.68/11.95 |
% 80.68/11.95 | ALPHA: (axiom472) implies:
% 80.68/11.95 | (12) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.95 | ? [v1:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v2:
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v3:
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$]
% 80.68/11.95 | : (register_tensor$a(id$a, swap$) = v3 & register_pair$e(x$, phi$) =
% 80.68/11.95 | v2 & register_pair$(x$, v0) = v1 & comp$(phi$, snd$) = v0 &
% 80.68/11.95 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v2)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$(v3)
% 80.68/11.95 | & ! [v4: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v5:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(v1, v4) = v5) | ~
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v4) | ? [v6:
% 80.68/11.95 | Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v7:
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v8:
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$]
% 80.68/11.95 | : (fun_app$au(v3, v7) = v8 & fun_app$ap(assoc$, v6) = v7 &
% 80.68/11.95 | fun_app$ao(v2, v8) = v5 & tensor_op$k(v4, id_cblinfun$) = v6 &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v5) &
% 80.68/11.95 | Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$(v6)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v8)
% 80.68/11.95 | &
% 80.68/11.95 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v7))))
% 80.68/11.95 |
% 80.68/11.95 | ALPHA: (axiom605) implies:
% 80.68/11.95 | (13) ? [v0: Num$] : ? [v1: Complex$] : ? [v2: Complex$] : ? [v3:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v4: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v5:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : ? [v8:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v9: Bit_ell2$] : ? [v10: Bit_ell2$] : ? [v11: Bit_bit_prod_ell2$] :
% 80.68/11.95 | ? [v12: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v13:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : (numeral$(v0) = v1 & divide$(one$b,
% 80.68/11.95 | v1) = v2 & scaleC$(v2) = v3 & tensor_ell2$(v9, v10) = v11 &
% 80.68/11.95 | fun_app$w(bit0$, one$c) = v0 & register_pair$(x$, v4) = v5 &
% 80.68/11.95 | fun_app$b(cblinfun_compose$, v7) = v8 & fun_app$a(v8, v13) = o7$ &
% 80.68/11.95 | fun_app$a(v3, v6) = v7 & comp$(phi$, snd$) = v4 & ket$b(a$a) = v9 &
% 80.68/11.95 | ket$b(b$a) = v10 & butterfly$a(v11, beta_00$) = v12 & fun_app$(v5,
% 80.68/11.95 | uswap$) = v6 & fun_app$(phi$, v12) = v13 &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v13) & Mem_ell2_mem_ell2_cblinfun$(v7) &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.95 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v12) & Bit_ell2$(v10)
% 80.68/11.95 | & Bit_ell2$(v9) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v5)
% 80.68/11.95 | & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v8) &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3) &
% 80.68/11.95 | Num$(v0) & Bit_bit_prod_ell2$(v11) & Complex$(v2) & Complex$(v1))
% 80.68/11.95 |
% 80.68/11.95 | ALPHA: (axiom607) implies:
% 80.68/11.95 | (14) ? [v0: Num$] : ? [v1: Complex$] : ? [v2: Complex$] : ? [v3:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v4: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v5: Bit_ell2_bit_ell2_cblinfun$] : ? [v6:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : ? [v7: Mem_ell2_mem_ell2_cblinfun$]
% 80.68/11.95 | : ? [v8: Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v9:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v10: Mem_ell2_mem_ell2_cblinfun$] : ? [v11:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : ? [v12:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v13: Bit_ell2$] : ? [v14: Bit_ell2$] : ? [v15: Bit_bit_prod_ell2$]
% 80.68/11.95 | : ? [v16: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v17:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : (numeral$(v0) = v1 & divide$(one$b,
% 80.68/11.95 | v1) = v2 & scaleC$(v2) = v3 & tensor_ell2$(v13, v14) = v15 &
% 80.68/11.95 | fun_app$w(bit0$, one$c) = v0 & register_pair$(x$, v4) = v9 &
% 80.68/11.95 | fun_app$f(adj$, xz$) = v5 & fun_app$b(cblinfun_compose$, v11) = v12
% 80.68/11.95 | & fun_app$b(cblinfun_compose$, v7) = v8 & fun_app$a(v12, v17) = o5$
% 80.68/11.95 | & fun_app$a(v8, v10) = v11 & fun_app$a(v3, v6) = v7 & comp$(phi$,
% 80.68/11.95 | snd$) = v4 & ket$b(a$a) = v13 & ket$b(b$a) = v14 &
% 80.68/11.95 | butterfly$a(v15, beta_00$) = v16 & fun_app$d(v4, v5) = v6 &
% 80.68/11.95 | fun_app$(v9, uswap$) = v10 & fun_app$(phi$, v16) = v17 &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v17) & Mem_ell2_mem_ell2_cblinfun$(v11)
% 80.68/11.95 | & Mem_ell2_mem_ell2_cblinfun$(v10) & Mem_ell2_mem_ell2_cblinfun$(v7)
% 80.68/11.95 | & Mem_ell2_mem_ell2_cblinfun$(v6) &
% 80.68/11.95 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v16) & Bit_ell2$(v14)
% 80.68/11.95 | & Bit_ell2$(v13) &
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v9)
% 80.68/11.95 | & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v12) &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v8) &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3) &
% 80.68/11.95 | Num$(v0) & Bit_bit_prod_ell2$(v15) & Complex$(v2) & Complex$(v1) &
% 80.68/11.95 | Bit_ell2_bit_ell2_cblinfun$(v5))
% 80.68/11.95 |
% 80.68/11.95 | ALPHA: (conjecture0) implies:
% 80.68/11.95 | (15) Mem_ell2_ccsubspace$(top$)
% 80.68/11.95 | (16) ? [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.95 | ? [v1:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v2: Mem_ell2_mem_ell2_cblinfun$] : ? [v3:
% 80.68/11.95 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v4:
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.95 | : ? [v5:
% 80.68/11.95 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.68/11.95 | : ? [v6: Mem_ell2_mem_ell2_cblinfun$] : ? [v7:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] : ?
% 80.68/11.95 | [v8: Bit_bit_prod$] : ? [v9: Bit_bit_prod_ell2$] : ? [v10:
% 80.68/11.95 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v11:
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$] : ? [v12: Mem_ell2_mem_ell2_cblinfun$]
% 80.68/11.95 | : ? [v13: Mem_ell2_ccsubspace$] : ? [v14: Mem_ell2_ccsubspace$] : ?
% 80.68/11.95 | [v15: Mem_ell2_ccsubspace$] : ? [v16: Mem_ell2_ccsubspace$] : ?
% 80.68/11.95 | [v17: Mem_ell2_ccsubspace$] : ( ~ (v17 = v14) & register_pair$(x$, v0)
% 80.68/11.95 | = v1 & fun_app$b(cblinfun_compose$, v6) = v7 & fun_app$a(v7, v11) =
% 80.68/11.95 | v12 & pair$(a$a, b$a) = v8 & comp$(phi$, snd$) = v0 &
% 80.68/11.95 | register_pair$b(x$, a$) = v3 & register_pair$a(v3, b$) = v4 &
% 80.68/11.95 | cblinfun_image$(v12, top$) = v13 & cblinfun_image$(v11, top$) = v15
% 80.68/11.95 | & cblinfun_image$(v6, v15) = v16 & cblinfun_image$(v2, v16) = v17 &
% 80.68/11.95 | cblinfun_image$(v2, v13) = v14 & ket$(v8) = v9 & butterfly$a(v9,
% 80.68/11.95 | beta_00$) = v10 & butterfly$(psi$, psi$) = v5 & fun_app$c(v4, v5)
% 80.68/11.95 | = v6 & fun_app$(v1, uswap$) = v2 & fun_app$(phi$, v10) = v11 &
% 80.68/11.95 | Mem_ell2_mem_ell2_cblinfun$(v12) & Mem_ell2_mem_ell2_cblinfun$(v11)
% 80.68/11.95 | & Mem_ell2_mem_ell2_cblinfun$(v6) & Mem_ell2_mem_ell2_cblinfun$(v2)
% 80.68/11.95 | & Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v0) &
% 80.68/11.95 | Mem_ell2_ccsubspace$(v17) & Mem_ell2_ccsubspace$(v16) &
% 80.68/11.95 | Mem_ell2_ccsubspace$(v15) & Mem_ell2_ccsubspace$(v14) &
% 80.68/11.95 | Mem_ell2_ccsubspace$(v13) &
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v10) &
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v1)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v4)
% 80.68/11.96 | & Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v7) &
% 80.68/11.96 | Bit_bit_prod_ell2$(v9) &
% 80.68/11.96 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(v3)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v5)
% 80.68/11.96 | & Bit_bit_prod$(v8))
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (function-axioms) implies:
% 80.68/11.96 | (17) ! [v0: Mem_ell2_mem_ell2_cblinfun$] : ! [v1:
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$] : ! [v2:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v3:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : (v1 = v0 | ~ (fun_app$(v3, v2) = v1) | ~ (fun_app$(v3, v2) = v0))
% 80.68/11.96 | (18) ! [v0: Mem_ell2_ccsubspace$] : ! [v1: Mem_ell2_ccsubspace$] : !
% 80.68/11.96 | [v2: Mem_ell2_ccsubspace$] : ! [v3: Mem_ell2_mem_ell2_cblinfun$] :
% 80.68/11.96 | (v1 = v0 | ~ (cblinfun_image$(v3, v2) = v1) | ~ (cblinfun_image$(v3,
% 80.68/11.96 | v2) = v0))
% 80.68/11.96 | (19) ! [v0: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.96 | ! [v1: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$] :
% 80.68/11.96 | ! [v2:
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$]
% 80.68/11.96 | : ! [v3:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : (v1 = v0 | ~ (comp$(v3, v2) = v1) | ~ (comp$(v3, v2) = v0))
% 80.68/11.96 | (20) ! [v0:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : ! [v1:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : ! [v2: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : ! [v3: Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$]
% 80.68/11.96 | : (v1 = v0 | ~ (register_pair$(v3, v2) = v1) | ~ (register_pair$(v3,
% 80.68/11.96 | v2) = v0))
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (2) with fresh symbols all_927_0, all_927_1, all_927_2
% 80.68/11.96 | gives:
% 80.68/11.96 | (21) fun_app$b(cblinfun_compose$, all_927_1) = all_927_0 &
% 80.68/11.96 | fun_app$a(all_927_0, o5$) = o7$ & comp$(phi$, snd$) = all_927_2 &
% 80.68/11.96 | fun_app$d(all_927_2, xz$) = all_927_1 &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_927_1) &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_927_2)
% 80.68/11.96 | &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_927_0)
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (21) implies:
% 80.68/11.96 | (22) comp$(phi$, snd$) = all_927_2
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (9) with fresh symbols all_959_0, all_959_1, all_959_2
% 80.68/11.96 | gives:
% 80.68/11.96 | (23) register_pair$i(a$, b$) = all_959_0 & register_pair$(x$, all_959_2) =
% 80.68/11.96 | all_959_1 & comp$(phi$, snd$) = all_959_2 &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_959_2)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_959_1)
% 80.68/11.96 | &
% 80.68/11.96 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_959_0)
% 80.68/11.96 | & ! [v0: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(all_959_1, v0) = v1) |
% 80.68/11.96 | ~ Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v0) | ? [v2:
% 80.68/11.96 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.68/11.96 | : (register_pair$h(all_959_1, all_959_0, v2) = v1 & tensor_op$m(v0,
% 80.68/11.96 | id_cblinfun$c) = v2 & Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.68/11.96 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v2)))
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (23) implies:
% 80.68/11.96 | (24) comp$(phi$, snd$) = all_959_2
% 80.68/11.96 | (25) register_pair$(x$, all_959_2) = all_959_1
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (8) with fresh symbols all_972_0, all_972_1, all_972_2
% 80.68/11.96 | gives:
% 80.68/11.96 | (26) register_pair$e(x$, phi$) = all_972_1 & tensor_op$(id_cblinfun$) =
% 80.68/11.96 | all_972_0 & comp$(phi$, snd$) = all_972_2 &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_972_2)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_fun$(all_972_0)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_972_1)
% 80.68/11.96 | & ! [v0: Bit_ell2_bit_ell2_cblinfun$] : ! [v1:
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$d(all_972_2, v0) = v1) |
% 80.68/11.96 | ~ Bit_ell2_bit_ell2_cblinfun$(v0) | ? [v2:
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ? [v3:
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.96 | (fun_app$ao(all_972_1, v3) = v1 & tensor_op$l(id_cblinfun$, v2) = v3
% 80.68/11.96 | & fun_app$e(all_972_0, v0) = v2 & Mem_ell2_mem_ell2_cblinfun$(v1)
% 80.68/11.96 | & Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v2) &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v3)))
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (26) implies:
% 80.68/11.96 | (27) comp$(phi$, snd$) = all_972_2
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (7) with fresh symbols all_1005_0, all_1005_1,
% 80.68/11.96 | all_1005_2, all_1005_3, all_1005_4, all_1005_5, all_1005_6 gives:
% 80.68/11.96 | (28) fun_app$b(cblinfun_compose$, all_1005_2) = all_1005_1 &
% 80.68/11.96 | fun_app$b(cblinfun_compose$, all_1005_5) = all_1005_4 &
% 80.68/11.96 | fun_app$a(all_1005_1, o6$) = all_1005_0 & fun_app$a(all_1005_4, o6$) =
% 80.68/11.96 | all_1005_3 & comp$(phi$, snd$) = all_1005_6 & fun_app$d(all_1005_6,
% 80.68/11.96 | pauliZ$) = all_1005_5 & fun_app$d(all_1005_6, id_cblinfun$) =
% 80.68/11.96 | all_1005_2 & Mem_ell2_mem_ell2_cblinfun$(all_1005_0) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1005_2) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1005_3) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1005_5) &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1005_6)
% 80.68/11.96 | &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1005_1)
% 80.68/11.96 | &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1005_4)
% 80.68/11.96 | & ( ~ (one$ = b$a) | all_1005_3 = o7$) & (all_1005_0 = o7$ | one$ =
% 80.68/11.96 | b$a)
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (28) implies:
% 80.68/11.96 | (29) comp$(phi$, snd$) = all_1005_6
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (6) with fresh symbols all_1007_0, all_1007_1,
% 80.68/11.96 | all_1007_2, all_1007_3, all_1007_4, all_1007_5, all_1007_6 gives:
% 80.68/11.96 | (30) fun_app$b(cblinfun_compose$, all_1007_2) = all_1007_1 &
% 80.68/11.96 | fun_app$b(cblinfun_compose$, all_1007_5) = all_1007_4 &
% 80.68/11.96 | fun_app$a(all_1007_1, o5$) = all_1007_0 & fun_app$a(all_1007_4, o5$) =
% 80.68/11.96 | all_1007_3 & comp$(phi$, snd$) = all_1007_6 & fun_app$d(all_1007_6,
% 80.68/11.96 | pauliX$) = all_1007_5 & fun_app$d(all_1007_6, id_cblinfun$) =
% 80.68/11.96 | all_1007_2 & Mem_ell2_mem_ell2_cblinfun$(all_1007_0) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1007_2) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1007_3) &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$(all_1007_5) &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1007_6)
% 80.68/11.96 | &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1007_1)
% 80.68/11.96 | &
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1007_4)
% 80.68/11.96 | & ( ~ (one$ = a$a) | all_1007_3 = o6$) & (all_1007_0 = o6$ | one$ =
% 80.68/11.96 | a$a)
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (30) implies:
% 80.68/11.96 | (31) comp$(phi$, snd$) = all_1007_6
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (12) with fresh symbols all_1009_0, all_1009_1,
% 80.68/11.96 | all_1009_2, all_1009_3 gives:
% 80.68/11.96 | (32) register_tensor$a(id$a, swap$) = all_1009_0 & register_pair$e(x$,
% 80.68/11.96 | phi$) = all_1009_1 & register_pair$(x$, all_1009_3) = all_1009_2 &
% 80.68/11.96 | comp$(phi$, snd$) = all_1009_3 &
% 80.68/11.96 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1009_3)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1009_2)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1009_1)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun_fun$(all_1009_0)
% 80.68/11.96 | & ! [v0: Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$] : ! [v1:
% 80.68/11.96 | Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$(all_1009_2, v0) = v1) |
% 80.68/11.96 | ~ Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(v0) | ? [v2:
% 80.68/11.96 | Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$] :
% 80.68/11.96 | ? [v3:
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.96 | ? [v4:
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$] :
% 80.68/11.96 | (fun_app$au(all_1009_0, v3) = v4 & fun_app$ap(assoc$, v2) = v3 &
% 80.68/11.96 | fun_app$ao(all_1009_1, v4) = v1 & tensor_op$k(v0, id_cblinfun$) =
% 80.68/11.96 | v2 & Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.68/11.96 | Bit_bit_prod_bit_prod_ell2_bit_bit_prod_bit_prod_ell2_cblinfun$(v2)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v4)
% 80.68/11.96 | &
% 80.68/11.96 | Bit_bit_bit_prod_prod_ell2_bit_bit_bit_prod_prod_ell2_cblinfun$(v3)))
% 80.68/11.96 |
% 80.68/11.96 | ALPHA: (32) implies:
% 80.68/11.96 | (33) comp$(phi$, snd$) = all_1009_3
% 80.68/11.96 |
% 80.68/11.96 | DELTA: instantiating (5) with fresh symbols all_1015_0, all_1015_1,
% 80.68/11.96 | all_1015_2, all_1015_3, all_1015_4, all_1015_5, all_1015_6, all_1015_7,
% 80.68/11.96 | all_1015_8 gives:
% 80.89/11.97 | (34) cons$(all_1015_2, nil$) = all_1015_1 & cons$(all_1015_6, nil$) =
% 80.89/11.97 | all_1015_5 & hoare$(all_1015_8, all_1015_1, all_1015_4) = all_1015_0 &
% 80.89/11.97 | hoare$(all_1015_8, all_1015_5, all_1015_4) = all_1015_3 & comp$(phi$,
% 80.89/11.97 | snd$) = all_1015_7 & cblinfun_image$(o6$, pre$) = all_1015_4 &
% 80.89/11.97 | cblinfun_image$(o5$, pre$) = all_1015_8 & apply$a(pauliX$, all_1015_7)
% 80.89/11.97 | = all_1015_6 & apply$a(id_cblinfun$, all_1015_7) = all_1015_2 &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1015_2) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1015_6) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_list$(all_1015_1) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_list$(all_1015_5) &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1015_7)
% 80.89/11.97 | & Mem_ell2_ccsubspace$(all_1015_4) & Mem_ell2_ccsubspace$(all_1015_8)
% 80.89/11.97 | & ( ~ (one$ = a$a) | all_1015_3 = 0) & (all_1015_0 = 0 | one$ = a$a)
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (34) implies:
% 80.89/11.97 | (35) comp$(phi$, snd$) = all_1015_7
% 80.89/11.97 |
% 80.89/11.97 | DELTA: instantiating (10) with fresh symbols all_1017_0, all_1017_1,
% 80.89/11.97 | all_1017_2, all_1017_3, all_1017_4 gives:
% 80.89/11.97 | (36) register_pair$i(a$, b$) = all_1017_0 & register_pair$(x$, all_1017_4)
% 80.89/11.97 | = all_1017_1 & comp$(phi$, snd$) = all_1017_4 &
% 80.89/11.97 | register_pair$b(all_1017_4, a$) = all_1017_3 &
% 80.89/11.97 | register_pair$a(all_1017_3, b$) = all_1017_2 &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1017_4)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1017_1)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1017_2)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1017_3)
% 80.89/11.97 | &
% 80.89/11.97 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1017_0)
% 80.89/11.97 | & ! [v0:
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.89/11.97 | : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(all_1017_2,
% 80.89/11.97 | v0) = v1) | ~
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v0)
% 80.89/11.97 | | ? [v2:
% 80.89/11.97 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : ? [v3:
% 80.89/11.97 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : ? [v4:
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : (fun_app$ar(assoc$b, v0) = v2 & fun_app$aq(assoc$a, v3) = v4 &
% 80.89/11.97 | register_pair$h(all_1017_1, all_1017_0, v4) = v1 &
% 80.89/11.97 | tensor_op$j(id_cblinfun$, v2) = v3 &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.89/11.97 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v2)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v3)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v4)))
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (36) implies:
% 80.89/11.97 | (37) comp$(phi$, snd$) = all_1017_4
% 80.89/11.97 |
% 80.89/11.97 | DELTA: instantiating (4) with fresh symbols all_1020_0, all_1020_1,
% 80.89/11.97 | all_1020_2, all_1020_3, all_1020_4, all_1020_5, all_1020_6, all_1020_7,
% 80.89/11.97 | all_1020_8 gives:
% 80.89/11.97 | (38) cons$(all_1020_2, nil$) = all_1020_1 & cons$(all_1020_6, nil$) =
% 80.89/11.97 | all_1020_5 & hoare$(all_1020_8, all_1020_1, all_1020_4) = all_1020_0 &
% 80.89/11.97 | hoare$(all_1020_8, all_1020_5, all_1020_4) = all_1020_3 & comp$(phi$,
% 80.89/11.97 | snd$) = all_1020_7 & cblinfun_image$(o6$, pre$) = all_1020_8 &
% 80.89/11.97 | cblinfun_image$(o7$, pre$) = all_1020_4 & apply$a(pauliZ$, all_1020_7)
% 80.89/11.97 | = all_1020_6 & apply$a(id_cblinfun$, all_1020_7) = all_1020_2 &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1020_2) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1020_6) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_list$(all_1020_1) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_list$(all_1020_5) &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1020_7)
% 80.89/11.97 | & Mem_ell2_ccsubspace$(all_1020_4) & Mem_ell2_ccsubspace$(all_1020_8)
% 80.89/11.97 | & ( ~ (one$ = b$a) | all_1020_3 = 0) & (all_1020_0 = 0 | one$ = b$a)
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (38) implies:
% 80.89/11.97 | (39) comp$(phi$, snd$) = all_1020_7
% 80.89/11.97 |
% 80.89/11.97 | DELTA: instantiating (11) with fresh symbols all_1029_0, all_1029_1,
% 80.89/11.97 | all_1029_2, all_1029_3, all_1029_4, all_1029_5 gives:
% 80.89/11.97 | (40) register_tensor$(swap$, id$) = all_1029_0 & register_pair$i(a$, b$) =
% 80.89/11.97 | all_1029_1 & register_pair$(x$, all_1029_3) = all_1029_2 & comp$(phi$,
% 80.89/11.97 | snd$) = all_1029_3 & register_pair$b(x$, a$) = all_1029_5 &
% 80.89/11.97 | register_pair$a(all_1029_5, b$) = all_1029_4 &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1029_3)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1029_2)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1029_4)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun_fun$(all_1029_0)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1029_5)
% 80.89/11.97 | &
% 80.89/11.97 | Atype_btype_prod_ell2_atype_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1029_1)
% 80.89/11.97 | & ! [v0:
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$]
% 80.89/11.97 | : ! [v1: Mem_ell2_mem_ell2_cblinfun$] : ( ~ (fun_app$c(all_1029_4,
% 80.89/11.97 | v0) = v1) | ~
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(v0)
% 80.89/11.97 | | ? [v2:
% 80.89/11.97 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : ? [v3:
% 80.89/11.97 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : ? [v4:
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : ? [v5:
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$]
% 80.89/11.97 | : (fun_app$at(all_1029_0, v4) = v5 & fun_app$ar(assoc$b, v0) = v2 &
% 80.89/11.97 | fun_app$aq(assoc$a, v3) = v4 & register_pair$h(all_1029_2,
% 80.89/11.97 | all_1029_1, v5) = v1 & tensor_op$j(id_cblinfun$, v2) = v3 &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(v1) &
% 80.89/11.97 | Bit_atype_btype_prod_prod_ell2_bit_atype_btype_prod_prod_ell2_cblinfun$(v2)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_atype_btype_prod_prod_prod_ell2_bit_bit_atype_btype_prod_prod_prod_ell2_cblinfun$(v3)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v5)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_bit_prod_atype_btype_prod_prod_ell2_bit_bit_prod_atype_btype_prod_prod_ell2_cblinfun$(v4)))
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (40) implies:
% 80.89/11.97 | (41) comp$(phi$, snd$) = all_1029_3
% 80.89/11.97 |
% 80.89/11.97 | DELTA: instantiating (13) with fresh symbols all_1038_0, all_1038_1,
% 80.89/11.97 | all_1038_2, all_1038_3, all_1038_4, all_1038_5, all_1038_6, all_1038_7,
% 80.89/11.97 | all_1038_8, all_1038_9, all_1038_10, all_1038_11, all_1038_12,
% 80.89/11.97 | all_1038_13 gives:
% 80.89/11.97 | (42) numeral$(all_1038_13) = all_1038_12 & divide$(one$b, all_1038_12) =
% 80.89/11.97 | all_1038_11 & scaleC$(all_1038_11) = all_1038_10 &
% 80.89/11.97 | tensor_ell2$(all_1038_4, all_1038_3) = all_1038_2 & fun_app$w(bit0$,
% 80.89/11.97 | one$c) = all_1038_13 & register_pair$(x$, all_1038_9) = all_1038_8 &
% 80.89/11.97 | fun_app$b(cblinfun_compose$, all_1038_6) = all_1038_5 &
% 80.89/11.97 | fun_app$a(all_1038_5, all_1038_0) = o7$ & fun_app$a(all_1038_10,
% 80.89/11.97 | all_1038_7) = all_1038_6 & comp$(phi$, snd$) = all_1038_9 &
% 80.89/11.97 | ket$b(a$a) = all_1038_4 & ket$b(b$a) = all_1038_3 &
% 80.89/11.97 | butterfly$a(all_1038_2, beta_00$) = all_1038_1 & fun_app$(all_1038_8,
% 80.89/11.97 | uswap$) = all_1038_7 & fun_app$(phi$, all_1038_1) = all_1038_0 &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1038_0) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1038_6) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1038_7) &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1038_9)
% 80.89/11.97 | & Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(all_1038_1) &
% 80.89/11.97 | Bit_ell2$(all_1038_3) & Bit_ell2$(all_1038_4) &
% 80.89/11.97 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1038_8)
% 80.89/11.97 | &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1038_5)
% 80.89/11.97 | &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1038_10)
% 80.89/11.97 | & Num$(all_1038_13) & Bit_bit_prod_ell2$(all_1038_2) &
% 80.89/11.97 | Complex$(all_1038_11) & Complex$(all_1038_12)
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (42) implies:
% 80.89/11.97 | (43) fun_app$(all_1038_8, uswap$) = all_1038_7
% 80.89/11.97 | (44) comp$(phi$, snd$) = all_1038_9
% 80.89/11.97 | (45) register_pair$(x$, all_1038_9) = all_1038_8
% 80.89/11.97 |
% 80.89/11.97 | DELTA: instantiating (16) with fresh symbols all_1044_0, all_1044_1,
% 80.89/11.97 | all_1044_2, all_1044_3, all_1044_4, all_1044_5, all_1044_6, all_1044_7,
% 80.89/11.97 | all_1044_8, all_1044_9, all_1044_10, all_1044_11, all_1044_12,
% 80.89/11.97 | all_1044_13, all_1044_14, all_1044_15, all_1044_16, all_1044_17 gives:
% 80.89/11.97 | (46) ~ (all_1044_0 = all_1044_3) & register_pair$(x$, all_1044_17) =
% 80.89/11.97 | all_1044_16 & fun_app$b(cblinfun_compose$, all_1044_11) = all_1044_10
% 80.89/11.97 | & fun_app$a(all_1044_10, all_1044_6) = all_1044_5 & pair$(a$a, b$a) =
% 80.89/11.97 | all_1044_9 & comp$(phi$, snd$) = all_1044_17 & register_pair$b(x$, a$)
% 80.89/11.97 | = all_1044_14 & register_pair$a(all_1044_14, b$) = all_1044_13 &
% 80.89/11.97 | cblinfun_image$(all_1044_5, top$) = all_1044_4 &
% 80.89/11.97 | cblinfun_image$(all_1044_6, top$) = all_1044_2 &
% 80.89/11.97 | cblinfun_image$(all_1044_11, all_1044_2) = all_1044_1 &
% 80.89/11.97 | cblinfun_image$(all_1044_15, all_1044_1) = all_1044_0 &
% 80.89/11.97 | cblinfun_image$(all_1044_15, all_1044_4) = all_1044_3 &
% 80.89/11.97 | ket$(all_1044_9) = all_1044_8 & butterfly$a(all_1044_8, beta_00$) =
% 80.89/11.97 | all_1044_7 & butterfly$(psi$, psi$) = all_1044_12 &
% 80.89/11.97 | fun_app$c(all_1044_13, all_1044_12) = all_1044_11 &
% 80.89/11.97 | fun_app$(all_1044_16, uswap$) = all_1044_15 & fun_app$(phi$,
% 80.89/11.97 | all_1044_7) = all_1044_6 & Mem_ell2_mem_ell2_cblinfun$(all_1044_5) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1044_6) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1044_11) &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun$(all_1044_15) &
% 80.89/11.97 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1044_17)
% 80.89/11.97 | & Mem_ell2_ccsubspace$(all_1044_0) & Mem_ell2_ccsubspace$(all_1044_1)
% 80.89/11.97 | & Mem_ell2_ccsubspace$(all_1044_2) & Mem_ell2_ccsubspace$(all_1044_3)
% 80.89/11.97 | & Mem_ell2_ccsubspace$(all_1044_4) &
% 80.89/11.97 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(all_1044_7) &
% 80.89/11.97 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1044_16)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1044_13)
% 80.89/11.97 | &
% 80.89/11.97 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1044_10)
% 80.89/11.97 | & Bit_bit_prod_ell2$(all_1044_8) &
% 80.89/11.97 | Bit_atype_prod_ell2_bit_atype_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1044_14)
% 80.89/11.97 | &
% 80.89/11.97 | Bit_atype_prod_btype_prod_ell2_bit_atype_prod_btype_prod_ell2_cblinfun$(all_1044_12)
% 80.89/11.97 | & Bit_bit_prod$(all_1044_9)
% 80.89/11.97 |
% 80.89/11.97 | ALPHA: (46) implies:
% 80.89/11.97 | (47) ~ (all_1044_0 = all_1044_3)
% 80.89/11.97 | (48) Mem_ell2_mem_ell2_cblinfun$(all_1044_11)
% 80.89/11.97 | (49) Mem_ell2_mem_ell2_cblinfun$(all_1044_6)
% 80.89/11.97 | (50) fun_app$(all_1044_16, uswap$) = all_1044_15
% 80.89/11.97 | (51) cblinfun_image$(all_1044_15, all_1044_4) = all_1044_3
% 80.89/11.97 | (52) cblinfun_image$(all_1044_15, all_1044_1) = all_1044_0
% 80.89/11.97 | (53) cblinfun_image$(all_1044_11, all_1044_2) = all_1044_1
% 80.89/11.97 | (54) cblinfun_image$(all_1044_6, top$) = all_1044_2
% 80.89/11.97 | (55) cblinfun_image$(all_1044_5, top$) = all_1044_4
% 80.89/11.98 | (56) comp$(phi$, snd$) = all_1044_17
% 80.89/11.98 | (57) fun_app$a(all_1044_10, all_1044_6) = all_1044_5
% 80.89/11.98 | (58) fun_app$b(cblinfun_compose$, all_1044_11) = all_1044_10
% 80.89/11.98 | (59) register_pair$(x$, all_1044_17) = all_1044_16
% 80.89/11.98 |
% 80.89/11.98 | DELTA: instantiating (14) with fresh symbols all_1046_0, all_1046_1,
% 80.89/11.98 | all_1046_2, all_1046_3, all_1046_4, all_1046_5, all_1046_6, all_1046_7,
% 80.89/11.98 | all_1046_8, all_1046_9, all_1046_10, all_1046_11, all_1046_12,
% 80.89/11.98 | all_1046_13, all_1046_14, all_1046_15, all_1046_16, all_1046_17 gives:
% 80.89/11.98 | (60) numeral$(all_1046_17) = all_1046_16 & divide$(one$b, all_1046_16) =
% 80.89/11.98 | all_1046_15 & scaleC$(all_1046_15) = all_1046_14 &
% 80.89/11.98 | tensor_ell2$(all_1046_4, all_1046_3) = all_1046_2 & fun_app$w(bit0$,
% 80.89/11.98 | one$c) = all_1046_17 & register_pair$(x$, all_1046_13) = all_1046_8
% 80.89/11.98 | & fun_app$f(adj$, xz$) = all_1046_12 & fun_app$b(cblinfun_compose$,
% 80.89/11.98 | all_1046_6) = all_1046_5 & fun_app$b(cblinfun_compose$, all_1046_10)
% 80.89/11.98 | = all_1046_9 & fun_app$a(all_1046_5, all_1046_0) = o5$ &
% 80.89/11.98 | fun_app$a(all_1046_9, all_1046_7) = all_1046_6 &
% 80.89/11.98 | fun_app$a(all_1046_14, all_1046_11) = all_1046_10 & comp$(phi$, snd$)
% 80.89/11.98 | = all_1046_13 & ket$b(a$a) = all_1046_4 & ket$b(b$a) = all_1046_3 &
% 80.89/11.98 | butterfly$a(all_1046_2, beta_00$) = all_1046_1 &
% 80.89/11.98 | fun_app$d(all_1046_13, all_1046_12) = all_1046_11 &
% 80.89/11.98 | fun_app$(all_1046_8, uswap$) = all_1046_7 & fun_app$(phi$, all_1046_1)
% 80.89/11.98 | = all_1046_0 & Mem_ell2_mem_ell2_cblinfun$(all_1046_0) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1046_6) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1046_7) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1046_10) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1046_11) &
% 80.89/11.98 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1046_13)
% 80.89/11.98 | & Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun$(all_1046_1) &
% 80.89/11.98 | Bit_ell2$(all_1046_3) & Bit_ell2$(all_1046_4) &
% 80.89/11.98 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1046_8)
% 80.89/11.98 | &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1046_5)
% 80.89/11.98 | &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1046_9)
% 80.89/11.98 | &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1046_14)
% 80.89/11.98 | & Num$(all_1046_17) & Bit_bit_prod_ell2$(all_1046_2) &
% 80.89/11.98 | Complex$(all_1046_15) & Complex$(all_1046_16) &
% 80.89/11.98 | Bit_ell2_bit_ell2_cblinfun$(all_1046_12)
% 80.89/11.98 |
% 80.89/11.98 | ALPHA: (60) implies:
% 80.89/11.98 | (61) fun_app$(all_1046_8, uswap$) = all_1046_7
% 80.89/11.98 | (62) comp$(phi$, snd$) = all_1046_13
% 80.89/11.98 | (63) register_pair$(x$, all_1046_13) = all_1046_8
% 80.89/11.98 |
% 80.89/11.98 | DELTA: instantiating (3) with fresh symbols all_1050_0, all_1050_1,
% 80.89/11.98 | all_1050_2, all_1050_3, all_1050_4, all_1050_5, all_1050_6, all_1050_7,
% 80.89/11.98 | all_1050_8, all_1050_9, all_1050_10, all_1050_11, all_1050_12,
% 80.89/11.98 | all_1050_13, all_1050_14 gives:
% 80.89/11.98 | (64) register_pair$(x$, all_1050_13) = all_1050_12 & cons$(all_1050_4,
% 80.89/11.98 | all_1050_3) = all_1050_0 & cons$(all_1050_4, all_1050_6) =
% 80.89/11.98 | all_1050_1 & cons$(all_1050_4, nil$) = all_1050_3 & cons$(all_1050_7,
% 80.89/11.98 | nil$) = all_1050_6 & cons$(all_1050_8, all_1050_3) = all_1050_2 &
% 80.89/11.98 | cons$(all_1050_8, all_1050_6) = all_1050_5 & teleport$(x$, phi$) =
% 80.89/11.98 | all_1050_14 & comp$(phi$, snd$) = all_1050_9 & comp$(phi$, fst$) =
% 80.89/11.98 | all_1050_13 & apply$a(hadamard$, x$) = all_1050_10 & apply$a(pauliZ$,
% 80.89/11.98 | all_1050_9) = all_1050_7 & apply$a(pauliX$, all_1050_9) = all_1050_8
% 80.89/11.98 | & apply$a(id_cblinfun$, all_1050_9) = all_1050_4 & apply$(cnot$,
% 80.89/11.98 | all_1050_12) = all_1050_11 & Mem_ell2_mem_ell2_cblinfun$(all_1050_4)
% 80.89/11.98 | & Mem_ell2_mem_ell2_cblinfun$(all_1050_7) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1050_8) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1050_10) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$(all_1050_11) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_0) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_1) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_2) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_3) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_5) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(all_1050_6) &
% 80.89/11.98 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1050_9)
% 80.89/11.98 | &
% 80.89/11.98 | Bit_ell2_bit_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1050_13)
% 80.89/11.98 | &
% 80.89/11.98 | Bit_bit_prod_ell2_bit_bit_prod_ell2_cblinfun_mem_ell2_mem_ell2_cblinfun_fun$(all_1050_12)
% 80.89/11.98 | & Bit_bit_mem_ell2_mem_ell2_cblinfun_list_fun_fun$(all_1050_14) & !
% 80.89/11.98 | [v0: Bit$] : ! [v1: Bit$] : ! [v2: Mem_ell2_mem_ell2_cblinfun$] : !
% 80.89/11.98 | [v3: Mem_ell2_mem_ell2_cblinfun$] : ! [v4:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ! [v5:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : (v0 = one$ | ~ (cons$(v3,
% 80.89/11.98 | all_1050_1) = v4) | ~ (cons$(v2, v4) = v5) | ~
% 80.89/11.98 | (ifthen$b(all_1050_13, v0) = v2) | ~ (ifthen$b(x$, v1) = v3) | ~
% 80.89/11.98 | Bit$(v1) | ~ Bit$(v0) | ? [v6:
% 80.89/11.98 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ? [v7:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v8:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v9:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v10:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v11:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v12:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v13:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v3, all_1050_0) = v10 &
% 80.89/11.98 | cons$(v2, v10) = v11 & cons$(all_1050_10, v11) = v12 &
% 80.89/11.98 | cons$(all_1050_10, v5) = v8 & cons$(all_1050_11, v12) = v13 &
% 80.89/11.98 | cons$(all_1050_11, v8) = v9 & fun_app$k(all_1050_14, v0) = v6 &
% 80.89/11.98 | fun_app$j(v6, v1) = v7 & Mem_ell2_mem_ell2_cblinfun_list$(v13) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v12) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v11) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v10) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v9) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v8) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v7) &
% 80.89/11.98 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v6) & ( ~ (v1 = one$) |
% 80.89/11.98 | v9 = v7) & (v13 = v7 | v1 = one$))) & ! [v0: Bit$] : ! [v1:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun$] : ! [v2: Mem_ell2_mem_ell2_cblinfun$]
% 80.89/11.98 | : ! [v3: Mem_ell2_mem_ell2_cblinfun_list$] : ! [v4:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ( ~ (cons$(v2, all_1050_5) = v3)
% 80.89/11.98 | | ~ (cons$(v1, v3) = v4) | ~ (ifthen$b(all_1050_13, one$) = v1) |
% 80.89/11.98 | ~ (ifthen$b(x$, v0) = v2) | ~ Bit$(v0) | ? [v5:
% 80.89/11.98 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$] : ? [v6:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v7:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v8:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v9:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v10:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v11:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : ? [v12:
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$] : (cons$(v2, all_1050_2) = v9 &
% 80.89/11.98 | cons$(v1, v9) = v10 & cons$(all_1050_10, v10) = v11 &
% 80.89/11.98 | cons$(all_1050_10, v4) = v7 & cons$(all_1050_11, v11) = v12 &
% 80.89/11.98 | cons$(all_1050_11, v7) = v8 & fun_app$k(all_1050_14, one$) = v5 &
% 80.89/11.98 | fun_app$j(v5, v0) = v6 & Mem_ell2_mem_ell2_cblinfun_list$(v12) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v11) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v10) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v9) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v8) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v7) &
% 80.89/11.98 | Mem_ell2_mem_ell2_cblinfun_list$(v6) &
% 80.89/11.98 | Bit_mem_ell2_mem_ell2_cblinfun_list_fun$(v5) & ( ~ (v0 = one$) |
% 80.89/11.98 | v8 = v6) & (v12 = v6 | v0 = one$)))
% 80.89/11.98 |
% 80.89/11.98 | ALPHA: (64) implies:
% 80.89/11.98 | (65) comp$(phi$, snd$) = all_1050_9
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_1015_7, all_1020_7, snd$, phi$,
% 80.89/11.98 | simplifying with (35), (39) gives:
% 80.89/11.98 | (66) all_1020_7 = all_1015_7
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_959_2, all_1020_7, snd$, phi$,
% 80.89/11.98 | simplifying with (24), (39) gives:
% 80.89/11.98 | (67) all_1020_7 = all_959_2
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_1005_6, all_1029_3, snd$, phi$,
% 80.89/11.98 | simplifying with (29), (41) gives:
% 80.89/11.98 | (68) all_1029_3 = all_1005_6
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_972_2, all_1029_3, snd$, phi$,
% 80.89/11.98 | simplifying with (27), (41) gives:
% 80.89/11.98 | (69) all_1029_3 = all_972_2
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_1015_7, all_1038_9, snd$, phi$,
% 80.89/11.98 | simplifying with (35), (44) gives:
% 80.89/11.98 | (70) all_1038_9 = all_1015_7
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_1009_3, all_1038_9, snd$, phi$,
% 80.89/11.98 | simplifying with (33), (44) gives:
% 80.89/11.98 | (71) all_1038_9 = all_1009_3
% 80.89/11.98 |
% 80.89/11.98 | GROUND_INST: instantiating (19) with all_1029_3, all_1044_17, snd$, phi$,
% 80.89/11.98 | simplifying with (41), (56) gives:
% 80.89/11.98 | (72) all_1044_17 = all_1029_3
% 80.89/11.98 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_1017_4, all_1044_17, snd$, phi$,
% 80.89/11.99 | simplifying with (37), (56) gives:
% 80.89/11.99 | (73) all_1044_17 = all_1017_4
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_1038_9, all_1046_13, snd$, phi$,
% 80.89/11.99 | simplifying with (44), (62) gives:
% 80.89/11.99 | (74) all_1046_13 = all_1038_9
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_1029_3, all_1046_13, snd$, phi$,
% 80.89/11.99 | simplifying with (41), (62) gives:
% 80.89/11.99 | (75) all_1046_13 = all_1029_3
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_1007_6, all_1046_13, snd$, phi$,
% 80.89/11.99 | simplifying with (31), (62) gives:
% 80.89/11.99 | (76) all_1046_13 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_972_2, all_1050_9, snd$, phi$,
% 80.89/11.99 | simplifying with (27), (65) gives:
% 80.89/11.99 | (77) all_1050_9 = all_972_2
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (19) with all_927_2, all_1050_9, snd$, phi$,
% 80.89/11.99 | simplifying with (22), (65) gives:
% 80.89/11.99 | (78) all_1050_9 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (77), (78) imply:
% 80.89/11.99 | (79) all_972_2 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (79) implies:
% 80.89/11.99 | (80) all_972_2 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (75), (76) imply:
% 80.89/11.99 | (81) all_1029_3 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (81) implies:
% 80.89/11.99 | (82) all_1029_3 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (74), (76) imply:
% 80.89/11.99 | (83) all_1038_9 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (83) implies:
% 80.89/11.99 | (84) all_1038_9 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (72), (73) imply:
% 80.89/11.99 | (85) all_1029_3 = all_1017_4
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (85) implies:
% 80.89/11.99 | (86) all_1029_3 = all_1017_4
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (70), (71) imply:
% 80.89/11.99 | (87) all_1015_7 = all_1009_3
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (87) implies:
% 80.89/11.99 | (88) all_1015_7 = all_1009_3
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (71), (84) imply:
% 80.89/11.99 | (89) all_1009_3 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (82), (86) imply:
% 80.89/11.99 | (90) all_1017_4 = all_1007_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (69), (86) imply:
% 80.89/11.99 | (91) all_1017_4 = all_972_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (68), (86) imply:
% 80.89/11.99 | (92) all_1017_4 = all_1005_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (66), (67) imply:
% 80.89/11.99 | (93) all_1015_7 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (93) implies:
% 80.89/11.99 | (94) all_1015_7 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (91), (92) imply:
% 80.89/11.99 | (95) all_1005_6 = all_972_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (90), (92) imply:
% 80.89/11.99 | (96) all_1007_6 = all_1005_6
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (96) implies:
% 80.89/11.99 | (97) all_1007_6 = all_1005_6
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (88), (94) imply:
% 80.89/11.99 | (98) all_1009_3 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (98) implies:
% 80.89/11.99 | (99) all_1009_3 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (89), (99) imply:
% 80.89/11.99 | (100) all_1007_6 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (100) implies:
% 80.89/11.99 | (101) all_1007_6 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (97), (101) imply:
% 80.89/11.99 | (102) all_1005_6 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (102) implies:
% 80.89/11.99 | (103) all_1005_6 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (95), (103) imply:
% 80.89/11.99 | (104) all_972_2 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (104) implies:
% 80.89/11.99 | (105) all_972_2 = all_959_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (80), (105) imply:
% 80.89/11.99 | (106) all_959_2 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (103), (106) imply:
% 80.89/11.99 | (107) all_1005_6 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (101), (106) imply:
% 80.89/11.99 | (108) all_1007_6 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (99), (106) imply:
% 80.89/11.99 | (109) all_1009_3 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (92), (107) imply:
% 80.89/11.99 | (110) all_1017_4 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (71), (109) imply:
% 80.89/11.99 | (111) all_1038_9 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (73), (110) imply:
% 80.89/11.99 | (112) all_1044_17 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (76), (108) imply:
% 80.89/11.99 | (113) all_1046_13 = all_927_2
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (63), (113) imply:
% 80.89/11.99 | (114) register_pair$(x$, all_927_2) = all_1046_8
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (59), (112) imply:
% 80.89/11.99 | (115) register_pair$(x$, all_927_2) = all_1044_16
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (45), (111) imply:
% 80.89/11.99 | (116) register_pair$(x$, all_927_2) = all_1038_8
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (25), (106) imply:
% 80.89/11.99 | (117) register_pair$(x$, all_927_2) = all_959_1
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (20) with all_1044_16, all_1046_8, all_927_2, x$,
% 80.89/11.99 | simplifying with (114), (115) gives:
% 80.89/11.99 | (118) all_1046_8 = all_1044_16
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (20) with all_1038_8, all_1046_8, all_927_2, x$,
% 80.89/11.99 | simplifying with (114), (116) gives:
% 80.89/11.99 | (119) all_1046_8 = all_1038_8
% 80.89/11.99 |
% 80.89/11.99 | GROUND_INST: instantiating (20) with all_959_1, all_1046_8, all_927_2, x$,
% 80.89/11.99 | simplifying with (114), (117) gives:
% 80.89/11.99 | (120) all_1046_8 = all_959_1
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (118), (120) imply:
% 80.89/11.99 | (121) all_1044_16 = all_959_1
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (118), (119) imply:
% 80.89/11.99 | (122) all_1044_16 = all_1038_8
% 80.89/11.99 |
% 80.89/11.99 | COMBINE_EQS: (121), (122) imply:
% 80.89/11.99 | (123) all_1038_8 = all_959_1
% 80.89/11.99 |
% 80.89/11.99 | SIMP: (123) implies:
% 80.89/11.99 | (124) all_1038_8 = all_959_1
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (61), (120) imply:
% 80.89/11.99 | (125) fun_app$(all_959_1, uswap$) = all_1046_7
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (50), (121) imply:
% 80.89/11.99 | (126) fun_app$(all_959_1, uswap$) = all_1044_15
% 80.89/11.99 |
% 80.89/11.99 | REDUCE: (43), (124) imply:
% 80.89/11.99 | (127) fun_app$(all_959_1, uswap$) = all_1038_7
% 80.89/11.99 |
% 80.89/12.00 | GROUND_INST: instantiating (17) with all_1044_15, all_1046_7, uswap$,
% 80.89/12.00 | all_959_1, simplifying with (125), (126) gives:
% 80.89/12.00 | (128) all_1046_7 = all_1044_15
% 80.89/12.00 |
% 80.89/12.00 | GROUND_INST: instantiating (17) with all_1038_7, all_1046_7, uswap$,
% 80.89/12.00 | all_959_1, simplifying with (125), (127) gives:
% 80.89/12.00 | (129) all_1046_7 = all_1038_7
% 80.89/12.00 |
% 80.89/12.00 | COMBINE_EQS: (128), (129) imply:
% 80.89/12.00 | (130) all_1044_15 = all_1038_7
% 80.89/12.00 |
% 80.89/12.00 | REDUCE: (52), (130) imply:
% 80.89/12.00 | (131) cblinfun_image$(all_1038_7, all_1044_1) = all_1044_0
% 80.89/12.00 |
% 80.89/12.00 | REDUCE: (51), (130) imply:
% 80.89/12.00 | (132) cblinfun_image$(all_1038_7, all_1044_4) = all_1044_3
% 80.89/12.00 |
% 80.89/12.00 | GROUND_INST: instantiating (1) with all_1044_11, all_1044_6, top$,
% 80.89/12.00 | all_1044_10, all_1044_5, all_1044_4, simplifying with (15), (48),
% 80.89/12.00 | (49), (55), (57), (58) gives:
% 80.89/12.00 | (133) ? [v0: Mem_ell2_ccsubspace$] : (cblinfun_image$(all_1044_6, top$) =
% 80.89/12.00 | v0 & cblinfun_image$(all_1044_11, v0) = all_1044_4 &
% 80.89/12.00 | Mem_ell2_ccsubspace$(v0) & Mem_ell2_ccsubspace$(all_1044_4))
% 80.89/12.00 |
% 80.89/12.00 | DELTA: instantiating (133) with fresh symbol all_1151_0 gives:
% 80.89/12.00 | (134) cblinfun_image$(all_1044_6, top$) = all_1151_0 &
% 80.89/12.00 | cblinfun_image$(all_1044_11, all_1151_0) = all_1044_4 &
% 80.89/12.00 | Mem_ell2_ccsubspace$(all_1151_0) & Mem_ell2_ccsubspace$(all_1044_4)
% 80.89/12.00 |
% 80.89/12.00 | ALPHA: (134) implies:
% 80.89/12.00 | (135) cblinfun_image$(all_1044_11, all_1151_0) = all_1044_4
% 80.89/12.00 | (136) cblinfun_image$(all_1044_6, top$) = all_1151_0
% 80.89/12.00 |
% 80.89/12.00 | GROUND_INST: instantiating (18) with all_1044_2, all_1151_0, top$, all_1044_6,
% 80.89/12.00 | simplifying with (54), (136) gives:
% 80.89/12.00 | (137) all_1151_0 = all_1044_2
% 80.89/12.00 |
% 80.89/12.00 | REDUCE: (135), (137) imply:
% 80.89/12.00 | (138) cblinfun_image$(all_1044_11, all_1044_2) = all_1044_4
% 80.89/12.00 |
% 80.89/12.00 | GROUND_INST: instantiating (18) with all_1044_1, all_1044_4, all_1044_2,
% 80.89/12.00 | all_1044_11, simplifying with (53), (138) gives:
% 80.89/12.00 | (139) all_1044_1 = all_1044_4
% 80.89/12.00 |
% 80.89/12.00 | REDUCE: (131), (139) imply:
% 80.89/12.00 | (140) cblinfun_image$(all_1038_7, all_1044_4) = all_1044_0
% 80.89/12.00 |
% 80.89/12.00 | GROUND_INST: instantiating (18) with all_1044_3, all_1044_0, all_1044_4,
% 80.89/12.00 | all_1038_7, simplifying with (132), (140) gives:
% 80.89/12.00 | (141) all_1044_0 = all_1044_3
% 80.89/12.00 |
% 80.89/12.00 | REDUCE: (47), (141) imply:
% 80.89/12.00 | (142) $false
% 80.89/12.00 |
% 80.89/12.00 | CLOSE: (142) is inconsistent.
% 80.89/12.00 |
% 80.89/12.00 End of proof
% 80.89/12.00 % SZS output end Proof for theBenchmark
% 80.89/12.00
% 80.89/12.00 11393ms
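The long run of COMBINE_EQS/SIMP steps above (79)–(113) is a chain of equality propagations: whenever two derived equations share a constant (e.g. (77) `all_1050_9 = all_972_2` and (78) `all_1050_9 = all_927_2`), the prover concludes the remaining two constants are equal. A minimal sketch of this behavior — a plain union-find over the proof's constant names, not Princess's actual implementation — looks like this:

```python
# Sketch of the equality chaining behind the COMBINE_EQS steps above.
# This is an illustrative union-find, NOT Princess's internal code;
# the constant names are taken from the proof trace.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Locate the representative of x's class, compressing paths.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Record the equation a = b by merging their classes.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def equal(self, a, b):
        return self.find(a) == self.find(b)

uf = UnionFind()
# Equations (77) and (78) from the trace:
uf.union("all_1050_9", "all_972_2")
uf.union("all_1050_9", "all_927_2")
# COMBINE_EQS (79) then yields all_972_2 = all_927_2:
assert uf.equal("all_972_2", "all_927_2")
```

Once every constant in the chain collapses into one class, substituting representatives into (63) and (61) produces the REDUCE steps (114)–(127), and the final contradiction (142) closes the proof.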
%------------------------------------------------------------------------------