Reports

Below is a selection of papers that report work done as part of the Church Project. Members: To add a new report, consult the maintenance page.

Copyright notice: The documents contained in these pages are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

@techreport{BUCS-TR-2010-011,
  author      = {Andrei Lapets and Assaf Kfoury},
  title       = {A User-friendly Interface for a Lightweight Verification System},
  institution = {CS Dept., Boston University},
  number      = {BUCS-TR-2010-011},
  month       = {May},
  year        = {2010}
}

Abstract: User-friendly interfaces can play an important role in bringing
the benefits of a machine-readable representation of formal
arguments to a wider audience. The ``aartifact'' system is an
easy-to-use lightweight verifier for formal arguments that involve
logical and algebraic manipulations of common mathematical
concepts. The system provides validation capabilities by utilizing
a database of propositions governing common mathematical concepts.
The ``aartifact'' system's multifaceted interactive user interface
combines several approaches to user-friendly interface design: (1)
a familiar and natural syntax based on existing conventions in
mathematical practice, (2) a real-time keyword-based lookup
mechanism for interactive, context-sensitive discovery of the
syntactic idioms and semantic concepts found in the system's
database of propositions, and (3) immediate validation feedback in
the form of reformatted raw input. The system's natural syntax and
database of propositions allow it to meet a user's expectations in
the formal reasoning scenarios for which it is intended. The
real-time keyword-based lookup mechanism and validation feedback
allow the system to teach the user about its capabilities and
limitations in an immediate, interactive, and context-aware
manner.

@techreport{BUCS-TR-2010-010,
  author      = {Andrei Lapets},
  title       = {User-friendly Support for Common Concepts in a Lightweight Verifier},
  institution = {CS Dept., Boston University},
  number      = {BUCS-TR-2010-010},
  month       = {May},
  year        = {2010}
}

Abstract: Machine verification of formal arguments can only increase our
confidence in the correctness of those arguments, but the costs of
employing machine verification still outweigh the benefits for
some common kinds of formal reasoning activities. As a result,
usability is becoming increasingly important in the design of
formal verification tools. We describe the ``aartifact''
lightweight verification system, designed for processing formal
arguments involving basic, ubiquitous mathematical concepts. The
system is a prototype for investigating potential techniques for
improving the usability of formal verification systems. It
leverages techniques drawn both from existing work and from our
own efforts. In addition to a parser for a familiar concrete
syntax and a mechanism for automated syntax lookup, the system
integrates (1) a basic logical inference algorithm, (2) a database
of propositions governing common mathematical concepts, and (3) a
data structure that computes congruence closures of expressions
involving relations found in this database. Together, these
components allow the system to better accommodate the expectations
of users interested in verifying formal arguments involving
algebraic and logical manipulations of numbers, sets, vectors, and
related operators and predicates. We demonstrate the reasonable
performance of this system on typical formal arguments and briefly
discuss how the system's design contributed to its usability in
two case studies.

@techreport{BUCS-TR-2009-030,
  author      = {Andrei Lapets and Assaf Kfoury},
  title       = {Verification with Natural Contexts: Soundness of Safe Compositional Network Sketches},
  institution = {CS Dept., Boston University},
  number      = {BUCS-TR-2009-030},
  month       = {October},
  year        = {2009}
}

Abstract: In research areas involving mathematical rigor, there are numerous
benefits to adopting a formal representation of models and
arguments: reusability, automatic evaluation of examples, and
verification of consistency and correctness. However,
accessibility has not been a priority in the design of formal
verification tools that can provide these benefits. In earlier
work \cite{BUCS-TR-2009-015}, we attempt to address this broad
problem by proposing several specific design criteria organized
around the notion of a \emph{natural context}: the sphere of
awareness a working human user maintains of the relevant
constructs, arguments, experiences, and background materials
necessary to accomplish the task at hand. We evaluate our proposed
design criteria by utilizing, within the context of novel research,
a formal reasoning system that is designed according to these
criteria. In particular, we consider how the design and
capabilities of the formal reasoning system that we employ
influence, aid, or hinder our ability to accomplish a formal
reasoning task: the assembly of a machine-verifiable proof
pertaining to the NetSketch formalism.
NetSketch is a tool for the specification of constrained-flow
applications and the certification of desirable safety properties
imposed thereon. NetSketch is conceived to assist system
integrators in two types of activities: modeling and design. It
provides capabilities for compositional analysis based on a
strongly-typed Domain-Specific Language (DSL) for describing and
reasoning about constrained-flow networks and invariants that need
to be enforced thereupon. In a companion paper
\cite{BUCS-TR-2009-028}, we overview NetSketch, highlight its
salient features, and illustrate how it could be used in actual
applications. In this paper, we define, using a machine-readable
syntax, major parts of the formal system underlying the operation
of NetSketch, along with its semantics and a corresponding notion
of validity. We then provide a proof of soundness for the
formalism that can be partially verified using a lightweight
formal reasoning system that simulates natural contexts. A
traditional version of these definitions and arguments can be
found in a related paper \cite{BUCS-TR-2009-029}.

@unpublished{Car+Wel:AOE-2008,
  itrs-paper    = {yes},
  church-report = {yes},
  author        = {Sébastien Carlier and J. B. Wells},
  title         = {The Algebra of Expansion},
  note          = {Draft corresponding roughly to an ITRS 2008 workshop talk},
  month         = {March},
  date          = {2008-03-25},
  year          = {2008}
}

Abstract: Expansion is an operation on typings (pairs of type
environments and result types) in type systems for
the $\lambda$-calculus. Expansion was originally
introduced for calculating possible typings of a
term in systems with intersection types.
Unfortunately, definitions of expansion (even the
most modern ones) have been difficult for outsiders
to absorb. This paper aims to clarify expansion and
make it more accessible to non-specialists by
isolating the pure notion of expansion on its own,
independent of type systems and types. We show how
expansion can be seen as a simple algebra on terms
with variables, substitutions, composition, and
miscellaneous constructors such that the algebra
satisfies 8 simple axioms and axiom schemas: the 3
standard axioms of a monoid, 4 standard axioms or
axiom schemas of substitutions (including one that
corresponds to the usual ``substitution lemma''),
and 1 axiom schema for expansion itself. This
presentation should help make theory and techniques involving
intersection types and type inference with flexible precision
more accessible to a wider community.

@techreport{Don+Hal+Kfo:BUCS-TR-2005-X,
  author        = {Kevin Donnelly and Joseph J. Hallett and Assaf J. Kfoury},
  title         = {Formal Semantics For Weak References},
  institution   = {Department of Computer Science, Boston University},
  year          = {2005},
  month         = {July},
  church-report = {yes},
  type          = {Technical Report},
  number        = {Don+Hal+Kfo:BUCS-TR-2005-X},
  itrs-paper    = {no}
}

Abstract: Weak references are references that do not prevent the object they
point to from being garbage collected. Many realistic languages,
including Java, SML/NJ, and Haskell to name a few, support weak
references. However, there is no generally accepted formal
semantics for weak references. Without such a formal semantics it
becomes impossible to formally prove properties of such a language
and the programs written in it.
\par
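The behaviour described above can be observed concretely in any language with weak references; the following is a minimal sketch using Python's `weakref` module, offered purely as an illustration (the report itself works in a formal lambda-calculus setting, not in Python):

```python
import gc
import weakref

class Node:
    """A heap object we can refer to weakly."""
    pass

strong = Node()
weak = weakref.ref(strong)   # a weak reference: does not keep the object alive
assert weak() is strong      # dereference succeeds while a strong ref exists

del strong                   # drop the only strong reference
gc.collect()                 # run the collector
print(weak() is None)        # the weakly referenced object has been reclaimed
```

In CPython the final line prints `True`: the collector was free to reclaim the object despite the live weak reference, which is exactly the source of the semantic subtleties the report formalizes.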
We give a formal semantics for a calculus called lambda-weak that
includes weak references and is derived from Morrisett, Felleisen,
and Harper's lambda-gc. The semantics is used to examine several
issues involving weak references. We use the framework to
formalize the semantics for the key/value weak references found in
Haskell. Furthermore, we consider a type system for the language
and show how to extend the earlier result that type inference can
be used to collect reachable garbage. In addition we show how to
allow collection of weakly referenced garbage without incurring
the computational overhead often associated with collecting a weak
reference which may be later used. Lastly, we address the
nondeterminism of the semantics by providing both an effectively
decidable syntactic restriction and a more general semantic
criterion, which guarantee a unique result of evaluation.

@techreport{Don+Kfo:BUCS-TR-2005-X,
  author        = {Kevin Donnelly and Assaf J. Kfoury},
  title         = {Some Considerations on Formal Semantics For Weak References},
  institution   = {Department of Computer Science, Boston University},
  year          = {2005},
  month         = {July},
  church-report = {yes},
  type          = {Technical Report},
  number        = {Don+Kfo:BUCS-TR-2005-X},
  itrs-paper    = {no}
}

Abstract: Weak references are references that do not prevent the object they
point to from being garbage collected. Most realistic languages,
including Java, SML/NJ, and OCaml to name a few, have some
facility for programming with weak references. Weak references
are used in implementing idioms like memoizing functions and
hash-consing in order to avoid potential memory leaks.
However, the semantics of weak references in many languages are
not clearly specified. Without a formal semantics for weak
references it becomes impossible to prove the correctness of
implementations making use of this feature. Previous work by
Hallett and Kfoury \cite{HKweak} extends $\gc$
\cite{Mor+Fel+Har:FPCA1995}, a language for modeling garbage
collection, to $\weak$, a similar language with weak references.
Using this previously formalized semantics for weak references, we
consider two issues related to well-behavedness of programs.
Firstly, we provide a new, simpler proof of the well-behavedness
of the syntactically restricted fragment of $\weak$ defined in
\cite{HKweak}. Secondly, we give a natural semantic criterion
for well-behavedness much broader than the syntactic restriction,
which is useful as a principle for programming with weak references.
Furthermore, we extend the result, proved in
\cite{Mor+Fel+Har:FPCA1995}, which allows one to use
type inference to collect some reachable objects that are never
used. We prove that this result holds for our language, and we
extend this result to allow the collection of weakly-referenced
reachable garbage without incurring the computational overhead
sometimes associated with collecting weak bindings (e.g. the need
to recompute a memoized function).
Lastly, we extend the semantic framework to model the key/value
weak references found in Haskell, and we prove that the Haskell
semantics is equivalent to a simpler semantics due to the lack of
side-effects in our language.

@techreport{Hal+Kfo:BUCS-TR-2005-031,
  author        = {Joseph J. Hallett and Assaf J. Kfoury},
  title         = {A Formal Semantics For Weak References},
  institution   = {Department of Computer Science, Boston University},
  year          = {2005},
  month         = {May},
  church-report = {yes},
  type          = {Technical Report},
  number        = {Hal+Kfo:BUCS-TR-2005-031},
  itrs-paper    = {no}
}

Abstract: A weak reference is a reference to an object that is not
followed by the pointer tracer when garbage collection is called.
That is, a weak reference cannot prevent the object it references
from being garbage collected. Weak references remain a
troublesome programming feature largely because there is not an
accepted, precise semantics that describes their behavior (in
fact, we are not aware of any formalization of their semantics).
The trouble is that weak references allow reachable objects to be
garbage collected, therefore allowing garbage collection to
influence the result of a program. Despite this difficulty, weak
references continue to be used in practice for reasons related to
efficient storage management, and are included in many popular
programming languages (Standard ML, Haskell, OCaml, and Java).
We give a formal semantics for a calculus called $\weak$ that
includes weak references and is derived from Morrisett, Felleisen,
and Harper's $\gc$. $\gc$ formalizes the notion of garbage
collection by means of a rewrite rule. Such a formalization is
required to precisely characterize the semantics of weak
references. However, the inclusion of a garbage-collection
rewrite rule in a language with \emph{weak} references introduces
nondeterministic evaluation, even if the parameter-passing
mechanism is deterministic (call-by-value in our case). This
raises the question of confluence for our rewrite system. We
discuss natural restrictions under which our rewrite system is
confluent, thus guaranteeing uniqueness of program result. We
define conditions that allow other garbage collection algorithms
to coexist with our semantics of weak references. We also
introduce a polymorphic type system to prove the absence of
erroneous program behavior (i.e., the absence of ``stuck
evaluation'') and a corresponding type inference algorithm. We
prove the type system sound and the inference algorithm sound and
complete.

@techreport{Bak+Kfo:TR-2005a,
  author        = {Adam Bakewell and Assaf J. Kfoury},
  title         = {Properties of a Rewrite System for Unification with Expansion Variables},
  institution   = {Church Project, Boston University},
  year          = {2005},
  month         = {April},
  church-report = {yes},
  type          = {Technical Report},
  postscript    = {http://cspeople.bu.edu/bake/uwev/propbun.ps.gz}
}

@techreport{Bak+Car+Kfo+Wel:BUCS-TR-2005a,
  author        = {Adam Bakewell and Sébastien Carlier and A. J. Kfoury and J. B. Wells},
  title         = {Inferring Intersection Typings that are Equivalent to Call-by-Name and Call-by-Value Evaluations},
  institution   = {Church Project, Boston University},
  year          = {2005},
  month         = {April},
  date          = {2005-04-13},
  church-report = {yes},
  type          = {Technical Report},
  postscript    = {http://cspeople.bu.edu/bake/uwev/nameval.ps.gz}
}

Abstract: We present a procedure to infer a typing for an arbitrary
$\lambda$-term $M$ in an intersection-type system that translates
into exactly the call-by-name (resp., call-by-value) evaluation of
$M$. Our framework is the recently developed System~E, which
augments intersection types with \emph{expansion variables}. The
inferred typing for $M$ is obtained by setting up a unification
problem involving both type variables and expansion variables,
which we solve with a confluent rewrite system. The inference
procedure is \emph{compositional} in the sense that typings for
different program components can be inferred in \emph{any} order,
and without knowledge of the definition of other program
components. Using expansion variables lets us achieve a
compositional inference procedure easily. Termination of the
procedure is generally undecidable. The procedure terminates and
returns a typing iff the input $M$ is normalizing according to
call-by-name (resp., call-by-value). The inferred typing is
\emph{exact} in the sense that the exact call-by-name (resp.,
call-by-value) behaviour of $M$ can be obtained by a (polynomial)
transformation of the typing. The inferred typing is also
\emph{principal} in the sense that any other typing that
translates the call-by-name (resp., call-by-value) evaluation of
$M$ can be obtained from the inferred typing for $M$ using a
substitution-based transformation.

@inproceedings{Zhu+Xi:PADL-2005,
  author        = {Dengping Zhu and Hongwei Xi},
  title         = {Safe Programming with Pointers through Stateful Views},
  booktitle     = {Proceedings of the 7th International Symposium on Practical Aspects of Declarative Languages},
  publisher     = {Springer-Verlag LNCS},
  year          = {2005},
  month         = {January},
  pages         = {TBA},
  address       = {Long Beach, CA},
  church-report = {yes},
  postscript    = {http://cspeople.bu.edu/zhudp/pubs/PADL05.ps}
}

Abstract: The need for direct memory manipulation through pointers is essential in
many applications. However, it is also commonly understood that the use (or
probably misuse) of pointers is often a rich source of program errors.
Therefore, approaches that can effectively enforce safe use of pointers in
programming are highly sought after. ATS is a programming language with a
type system rooted in a recently developed framework Applied Type System,
and a novel and desirable feature in ATS lies in its support for safe
programming with pointers through a novel notion of \emph{stateful
views}. In particular, even pointer arithmetic is allowed in ATS and
guaranteed to be safe by the type system of ATS. In this paper, we give an
overview of this feature in ATS, presenting some interesting examples based
on a prototype implementation of ATS to demonstrate the practicality of
safe programming with pointers through stateful views.

@inproceedings{Mak+Wel:ESOP-2005,
  author        = {Henning Makholm and J. B. Wells},
  title         = {Instant Polymorphic Type Systems for Mobile Process Calculi: Just Add Reduction Rules and Close},
  crossref      = {ESOP-2005},
  church-report = {yes},
  dart-report   = {yes},
  pages         = {389--407},
  note          = {A more detailed predecessor is \cite{Mak+Wel:PolyStar-2004}}
}

Abstract: Many different \emph{mobile process calculi} have
been invented, and for each some number of type
systems has been developed. Soundness and other
properties must be proved separately for each
calculus and type system. We present the
\emph{generic} polymorphic type system
Poly$\star$ which works for a wide range of
mobile process calculi, including the $\pi$-calculus
and Mobile Ambients. For any calculus satisfying
some general syntactic conditions, well-formedness
rules for types are derived automatically from the
reduction rules and Poly$\star$ works
otherwise unchanged. The derived type system is
automatically sound (i.e., has subject reduction)
and often more precise than previous type systems
for the calculus, due to Poly$\star$'s
\emph{spatial polymorphism}. We present an
implemented type inference algorithm for
Poly$\star$ which automatically constructs
a typing given a set of reduction rules and a term
to be typed. The generated typings are
\emph{principal} with respect to certain natural
type shape constraints.

@inproceedings{Mak+Wel:ICFP-2005,
  author        = {Henning Makholm and J. B. Wells},
  title         = {Type Inference, Principal Typings, and Let-Polymorphism for First-Class Mixin Modules},
  crossref      = {ICFP-2005},
  church-report = {yes},
  dart-report   = {yes},
  pages         = {156--167}
}

Abstract: A \emph{mixin module} is a programming abstraction
that simultaneously generalizes
$\lambda$-abstractions, records, and mutually
recursive definitions. Although various mixin module
type systems have been developed, no one has
investigated \emph{principal typings} or developed
\emph{type inference} for first-class mixin modules,
nor has anyone added Milner's
\emph{let-polymorphism} to such a system.
\par
This paper proves that typability is NP-complete for
the naive approach followed by previous mixin module
type systems. Because a $\lambda$-calculus extended
with \emph{record concatenation} is a simple
restriction of our mixin module calculus, we also
prove the folk belief that typability is NP-complete
for the naive early type systems for record
concatenation.
\par
To allow feasible type inference, we present
\textsf{Martini}, a new system of \emph{simple
types} for mixin modules with \emph{principal
typings}. \textsf{Martini} is conceptually simple,
with no subtyping and a clean and balanced
separation between unification-based type inference
with type and row variables and constraint solving
for safety of linking and field extraction. We have
implemented a type inference algorithm and we prove
its complexity to be $O(n^2)$, or $O(n)$ given a
fixed bound on the number of field
labels. (Technically, there is also a factor of
$\alpha(n)$, but $\alpha(n)\leq 4$ for
$n<10^{10^{100}}$ (a ``googolplex'').) To prove the
complexity, we need to present an algorithm for
\emph{row unification} that may have been
implemented by others, but which we could not find
written down anywhere. Because \textsf{Martini} has
principal typings, we successfully extend it with
Milner's let-polymorphism.

@techreport{Bak+Kfo:BUCS-TR-2004-0xx,
  author        = {Adam Bakewell and Assaf J. Kfoury},
  title         = {Unification with Expansion Variables},
  institution   = {Department of Computer Science, Boston University},
  year          = {2004},
  month         = {December},
  church-report = {yes},
  type          = {Technical Report},
  postscript    = {http://cspeople.bu.edu/bake/uwev/betau.ps.gz}
}

@techreport{Bak+Car+Kfo+Wel:BUCS-TR-2004-0xy,
  author        = {Adam Bakewell and Sébastien Carlier and A. J. Kfoury and J. B. Wells},
  title         = {Exact Intersection Typing Inference and Call-by-Name Evaluation},
  institution   = {Department of Computer Science, Boston University},
  year          = {2004},
  month         = {December},
  note          = {Superseded by \cite{Bak+Car+Kfo+Wel:BUCS-TR-2005a}},
  church-report = {yes},
  type          = {Technical Report},
  postscript    = {http://cspeople.bu.edu/bake/uwev/seti.ps.gz}
}

@techreport{Mak+Wel:PolyStar-2004,
  author        = {Henning Makholm and J. B. Wells},
  title         = {Instant Polymorphic Type Systems for Mobile Process Calculi: Just Add Reduction Rules and Close},
  year          = {2004},
  month         = {November},
  church-report = {yes},
  dart-report   = {yes},
  institution   = {Heriot-Watt Univ., School of Math.\ \& Comput.\ Sci.},
  number        = {HW-MACS-TR-0022},
  xnote         = {Available at http://www.macs.hw.ac.uk:8080/},
  note          = {A shorter successor is~\cite{Mak+Wel:ESOP-2005}}
}

Abstract: Many different \emph{mobile process calculi} have
been invented, and for each some number of type
systems has been developed. Soundness and other
properties must be proved separately for each
calculus and type system.
\par
We present the \emph{generic} polymorphic type
system Poly$\star$ which works for a wide
range of mobile process calculi. For any calculus
satisfying some general syntactic conditions,
well-formedness rules for types are derived
automatically from the reduction rules and
Poly$\star$ works otherwise unchanged. The
derived type system is automatically sound and often
more precise than previous type systems for the
calculus, due to Poly$\star$'s
\emph{spatial polymorphism}.
\par
We present an implemented type inference algorithm
for Poly$\star$ which automatically
constructs a typing given a set of reduction rules
and a term to be typed. The generated typings are
\emph{principal} with respect to certain natural
type shape constraints.

@inproceedings{Chen+Zhu+Xi:PADL-2004,
  author        = {Chiyan Chen and Dengping Zhu and Hongwei Xi},
  title         = {Implementing Cut Elimination: A Case Study of Simulating Dependent Types in Haskell},
  booktitle     = {Proceedings of the 6th International Symposium on Practical Aspects of Declarative Languages},
  publisher     = {Springer-Verlag LNCS vol.\ 3057},
  year          = {2004},
  month         = {June},
  pages         = {239--254},
  address       = {Dallas, TX},
  church-report = {yes}
}

Abstract: Gentzen's Hauptsatz -- the cut elimination theorem -- in sequent
calculi reveals a fundamental property on logic connectives in
various logics such as classical logic and intuitionistic logic.
In this paper, we implement a procedure in Haskell to perform cut
elimination for intuitionistic sequent calculus, where we use
types to guarantee that the procedure can only return a cut-free
proof of the same sequent when given a proof of a sequent that may
contain cuts. The contribution of the paper is twofold. On the
one hand, we present an interesting (and somewhat unexpected)
application of the current type system of Haskell, illustrating
through a concrete example how some typical use of dependent types
can be simulated in Haskell. On the other hand, we identify
several problematic issues with such a simulation technique and
then suggest some approaches to addressing these issues in
Haskell.

@inproceedings{Chen+Shi+Xi:PADL-2004,
  author        = {Chiyan Chen and Rui Shi and Hongwei Xi},
  title         = {A Typeful Approach to Object-Oriented Programming with Multiple Inheritance},
  booktitle     = {Proceedings of the 6th International Symposium on Practical Aspects of Declarative Languages},
  publisher     = {Springer-Verlag LNCS vol.\ 3057},
  pages         = {23--38},
  year          = {2004},
  month         = {June},
  address       = {Dallas, TX},
  church-report = {yes},
  postscript    = {http://www.cs.bu.edu/~hwxi/academic/papers/OBJwMI/OBJwMI.ps}
}

Abstract: The wide practice of object-oriented programming (OOP) in
current software practice is evident. Despite extensive studies on
typing programming objects, it is still undeniably a challenging
research task to design a type system that can satisfactorily
account for a variety of features (e.g., binary methods and
multiple inheritance) in OOP. In this paper, we present a typeful
approach to implementing objects that makes use of a recently
introduced notion of guarded datatypes. In particular, we
demonstrate how the feature of multiple inheritance can be
supported with this approach, presenting a simple and general
account for multiple inheritance in a typeful manner.

@techreport{Amt+Mak+Wel:PolyA-2004a,
  author        = {Torben Amtoft and Henning Makholm and J. B. Wells},
  title         = {PolyA: True Type Polymorphism for Mobile Ambients},
  year          = {2004},
  month         = {February},
  institution   = {Heriot-Watt Univ., School of Math.\ \& Comput.\ Sci.},
  number        = {HW-MACS-TR-0015},
  church-report = {yes},
  dart-report   = {yes},
  note          = {A shorter successor is \cite{Amt+Mak+Wel:TCS-2004}}
}

Abstract: Previous type systems for mobility calculi (the original Mobile
Ambients, its variants and descendants, e.g., Boxed Ambients and
Safe Ambients, and other related systems) offer little support for
generic mobile agents. Previous systems either do not handle
communication at all or globally assign fixed communication types
to ambient names that do not change as an ambient moves around or
interacts with other ambients. This makes it hard to type
examples such as a ``messenger'' ambient that uses communication
primitives to collect a message of non-predetermined type and
deliver it to a non-predetermined destination.
\par
In contrast, we present our new type system $\mathsf{PolyA}$.
Instead of assigning communication types to ambient names,
$\mathsf{PolyA}$ assigns a type to each process $P$ that gives
upper bounds on (1) the possible ambient nesting shapes of any
process $P'$ to which $P$ can evolve, (2) the values that may be
communicated at each location, and (3) the capabilities that can
be used at each location. Because $\mathsf{PolyA}$ can type
generic mobile agents, we believe $\mathsf{PolyA}$ is the first
type system for a mobility calculus that provides type
polymorphism comparable in power to polymorphic type systems for
the $\lambda$-calculus. $\mathsf{PolyA}$ is easily extended to
ambient calculus variants. A restriction of $\mathsf{PolyA}$ has
principal typings.

@techreport{Hal+Kfo:BUCS-TR-2004-004,
  author        = {Joseph J. Hallett and Assaf J. Kfoury},
  title         = {Programming Examples Needing Polymorphic Recursion},
  institution   = {Department of Computer Science, Boston University},
  year          = {2004},
  month         = {January},
  church-report = {yes},
  type          = {Technical Report},
  number        = {BUCS-TR-2004-004},
  xpostscript   = {http://www.cs.bu.edu/techreports/2004004polymorphicrecursionprogams.ps.Z},
  xpdf          = {http://www.cs.bu.edu/techreports/pdf/2004004polymorphicrecursionprogams.pdf},
  itrs-paper    = {yes}
}

Abstract: Inferring types for polymorphic recursive function definitions (abbreviated
to \emph{polymorphic recursion}) is a recurring topic on the mailing lists
of popular typed programming languages. This is despite the fact that type
inference for polymorphic recursion using $\forall$types has been proved
undecidable. This report presents several programming examples involving
polymorphic recursion and determines their typability under various type
systems, including the Hindley-Milner system, an intersection-type system,
and extensions of these two. The goal of this report is to show that many
of these examples are typable using a system of intersection types as an
alternative form of polymorphism. By accomplishing this, we hope to lay
the foundation for future research into a decidable intersection-type inference
algorithm.
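The flavour of such examples is the classic function over a nested datatype, transcribed below as a Python sketch (Python is untyped, so it simply runs; in a Hindley-Milner language the recursive call is at a different type instance, so the definition cannot be typed without an explicit polymorphic annotation):

```python
# A "nest" is either None or a pair (head, tail), where tail nests PAIRS of
# the current level's element type.  In ML/Haskell terms the function has
# type Nest a -> int, but its recursive call is at type Nest (a, a) --
# exactly the shape that requires polymorphic recursion.
def depth(nest):
    if nest is None:
        return 0
    _head, tail = nest
    return 1 + depth(tail)

# Level 0 holds an int, level 1 a pair of ints, level 2 a pair of pairs.
nested = (1, ((2, 3), (((4, 5), (6, 7)), None)))
print(depth(nested))  # -> 3
```

The value itself is ordinary data; only the typing of `depth` is problematic, which is why such definitions are rejected by plain Hindley-Milner inference yet typable in the intersection-type systems the report surveys.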
\par
We do not provide a comprehensive survey of type systems
appropriate for polymorphic recursion, with or without type
annotations inserted in the source language. Rather, we focus on
examples for which types may be inferred without type annotations,
with an emphasis on systems of intersection types.

@inproceedings{Xi:TYPES-2004,
  author        = {Hongwei Xi},
  title         = {Applied Type System (extended abstract)},
  booktitle     = {Post-workshop Proceedings of TYPES 2003},
  publisher     = {Springer-Verlag LNCS 3085},
  year          = {2004},
  pages         = {394--408},
  church-report = {yes},
  postscript    = {http://www.cs.bu.edu/~hwxi/academic/papers/types03.ps}
}

Abstract: The framework Pure Type System (PTS) offers a simple and general
approach to designing and formalizing type systems. However, in
the presence of dependent types, there often exist some acute
problems that make it difficult for PTS to accommodate many common
realistic programming features such as general recursion,
recursive types, effects (e.g., exceptions, references,
input/output), etc. In this paper, we propose a new framework
Applied Type System (ATS) to allow for designing and formalizing
type systems that can readily support common realistic programming
features. The key salient feature of ATS lies in a complete
separation between statics, in which types are formed and reasoned
about, and dynamics, in which programs are constructed and
evaluated. With this separation, it is no longer possible for a
program to occur in a type as is otherwise allowed in PTS. We
present not only a formal development of ATS but also mention some
examples in support of using ATS as a framework to form type
systems for practical programming.

@article{Andrews+etc:JAR-2004,
  author        = {Peter Andrews and Matthew Bishop and Chad Brown and Sunil Issar and Frank Pfenning and Hongwei Xi},
  title         = {ETPS: A System to Help Students to Write Formal Proofs},
  journal       = {Journal of Automated Reasoning},
  volume        = {32},
  year          = {2004},
  pages         = {75--92},
  church-report = {yes},
  postscript    = {http://www.cs.bu.edu/~hwxi/academic/papers/etpsjar.ps}
}

Abstract: ETPS (Educational Theorem Proving System) is a program which logic
students can use to write formal proofs in first-order logic or
higher-order logic. It helps students to concentrate on the
essential logic problems involved in proving theorems, and
automatically checks the proofs.

@techreport{Mak+Wel:TIP-2004,
  author        = {Henning Makholm and J. B. Wells},
  title         = {Type Inference for PolyA},
  year          = {2004},
  month         = {January},
  church-report = {yes},
  dart-report   = {yes},
  disabled-pdf  = {http://www.macs.hw.ac.uk:8080/techreps/docs/files/HWMACSTR0013.pdf},
  institution   = {Heriot-Watt Univ., School of Math.\ \& Comput.\ Sci.},
  number        = {HW-MACS-TR-0013}
}

Abstract: We present an automatic type inference algorithm for PolyA, a type
system for Mobile Ambients presented in earlier work by us
together with Torben Amtoft. We present not only a basic
inference algorithm, but also several optimizations to it aimed at
reducing the size of the inferred types. The final algorithm has
been implemented and verified to work on small examples. We
discuss some small problems that still exist and methods for
solving them. }inproceedings{Amt+Mak+Wel:TCS2004
churchreport = {yes} dartreport = {yes} author = {Torben Amtoft and Henning Makholm and J. B. Wells} title = {PolyA: True Type Polymorphism for Mobile
Ambients} crossref = {TCS2004} year = {2004} pages = {591--604} note = {A more detailed predecessor is
\cite{Amt+Mak+Wel:PolyA2004a}} abstract = {Previous type systems for mobility calculi (the
original Mobile Ambients, its variants and
descendants, e.g., Boxed Ambients and Safe Ambients,
and other related systems) offer little support for
generic mobile agents. Previous systems either do
not handle communication at all or globally assign
fixed communication types to ambient names that do
not change as an ambient moves around or interacts
with other ambients. This makes it hard to type
examples such as a ``messenger'' ambient that uses
communication primitives to collect a message of
non-predetermined type and deliver it to a
non-predetermined destination.
\par
In contrast, we present our new type system
$\mathsf{PolyA}$. Instead of assigning communication
types to ambient names, $\mathsf{PolyA}$ assigns a
type to each process $P$ that gives upper bounds on
(1) the possible ambient nesting shapes of any
process $P'$ to which $P$ can evolve, (2) the values
that may be communicated at each location, and (3)
the capabilities that can be used at each
location. Because $\mathsf{PolyA}$ can type generic
mobile agents, we believe $\mathsf{PolyA}$ is the
first type system for a mobility calculus that
provides type polymorphism comparable in power to
polymorphic type systems for the
$\lambda$-calculus. $\mathsf{PolyA}$ is easily
extended to ambient calculus variants. A restriction
of $\mathsf{PolyA}$ has principal typings. }article{Haa+Wel:SCP2004v50
itrspaper = {yes} author = {Christian Haack and J. B. Wells} title = {Type Error Slicing in Implicitly Typed Higher-Order
Languages} journal = {Sci.\ Comput.\ Programming} dartreport = {yes} volume = {50} pages = {189--224} year = {2004} pdftodo = {*** rename this file ***} churchreport = {yes} note = {Supersedes \cite{Haa+Wel:ESOP2003}} doihttp = {http://dx.doi.org/10.1016/j.scico.2004.01.004} doi = {doi:10.1016/j.scico.2004.01.004} abstract = {Previous methods have generally identified the
location of a type error as a particular program
point or the program subtree rooted at that
point. We present a new approach that identifies the
location of a type error as a set of program points
(a \emph{slice}) all of which are necessary for the
type error. We identify the criteria of
\emph{completeness} and \emph{minimality} for type
error slices. We discuss the advantages of complete
and minimal type error slices over previous methods
of presenting type errors. We present and prove the
correctness of algorithms for finding complete and
minimal type error slices for implicitly typed
higher-order languages like Standard~ML. }inproceedings{Car+Pol+Wel+Kfo:ESOP2004
itrspaper = {yes} churchreport = {yes} dartreport = {yes} author = {Sébastien Carlier and Jeff Polakow and
J. B. Wells and A. J. Kfoury} title = {System E: Expansion Variables for Flexible Typing
with Linear and Non-linear Types and Intersection
Types} crossref = {ESOP2004} year = {2004} pages = {294--309} altpdf = {http://www.macs.hw.ac.uk/DART/reports/D2.3/CPWK04.pdf} abstract = {Types are often used to control and analyze computer
programs. Intersection types give a type system
great flexibility, but have been difficult to
implement. The $!$ operator, used to distinguish
between linear and non-linear types, has good
potential for improving tracking of resource usage,
but has not been as flexible as one might want and
has been difficult to use in compositional
analysis. We introduce System~E, a type system with
expansion variables, linear intersection types, and
the $!$ type constructor for creating non-linear
types. System~E is designed for maximum flexibility
in automatic type inference and for ease of
automatic manipulation of type
information. Expansion variables allow postponing
the choice of which typing rules to use until later
constraint solving gives enough information to allow
making a good choice. System~E removes many
limitations and technical difficulties that
expansion variables had in the earlier System~I and
extends expansion variables to work with $!$ in
addition to the intersection type constructor. We
present subject reduction results for call-by-need
evaluation and discuss approaches for implementing
program analysis in System~E. }inproceedings{Car+Wel:PPDP2004
itrspaper = {yes} churchreport = {yes} dartreport = {yes} year = {2004} author = {Sébastien Carlier and J. B. Wells} title = {Type Inference with Expansion Variables and
Intersection Types in System~E and an Exact
Correspondence with $\beta$-Reduction} crossref = {PPDP2004} xpages = {132--143} note = {Completely supersedes \cite{Car+Wel:TIEV2004}} abstract = {System~E is a recently designed type system for the
$\lambda$-calculus with intersection types and
\emph{expansion variables}. During automatic type
inference, expansion variables allow postponing
decisions about which non-syntax-driven typing rules
to use until the right information is available and
allow implementing the choices via substitution.
\par
This paper uses expansion variables in a
unification-based automatic type inference algorithm
for System~E that succeeds for every
$\beta$-normalizable $\lambda$-term. We have
implemented and tested our algorithm and released
our implementation publicly. Each step of our
unification algorithm corresponds to exactly one
$\beta$-reduction step, and \emph{vice versa}. This
formally verifies and makes precise a step-for-step
correspondence between type inference and
$\beta$-reduction. This also shows that type
inference with intersection types and expansion
variables can, in effect, carry out an arbitrary
amount of partial evaluation of the program being
analyzed. }phdthesis{neergaardcapld2004
author = {Peter Møller Neergaard} title = {Complexity Aspects of Programming Language Design: From Logspace to Elementary Time via Proof-nets and Intersection Types} school = {Brandeis University} year = {2004} month = {October} abstract = { More than being just a tool for expressing algorithms, a
well-designed programming language allows the user to express her
ideas efficiently. The design choices, however, affect the
efficiency of the algorithms written in the languages. It is
therefore important to understand how such choices affect the
expressibility of programming languages. The dissertation
approaches this problem from two different angles.
\par
The dissertation presents what appears to be the first
resource-independent characterization of the \textsc{logspace}-computable
\emph{functions}. This is done in the form of a \emph{function
algebra}~BC$^{-}_{\varepsilon}$ that is sound and complete
for \textsc{logspace}. BC$^{-}_{\varepsilon}$ provides
insights into restricting recursion that might be useful to
provide compile-time guarantees on resource bounds. Furthermore,
by comparison with work by Bellantoni and Cook, it highlights
linearity as a potential distinction between~\textsc{logspace}\
and \textsc{ptime}.
\par
The dissertation also formalizes how successful type inference
fundamentally depends on the \emph{amnesia} of the type system:
the fact that the type system forgets information about the
program. \emph{Intersection type systems} without
\emph{idempotency} provide an exact analysis where all
information is retained. The dissertation shows that in such
intersection type systems, normalization is equivalent to type
inference in \emph{every} single case. Time bounds on type
inference and normalization are thus identical and doing
compile-time type inference is useless.
\par
The latter part reveals that \emph{expansion variables} proposed
for type inference are closely related to the boxes already known
from the \emph{linear logic} technology of \emph{proof-nets}. This
hitherto unknown connection between the two technologies suggests
fruitful interactions between the two fields. Moreover, it
extends and formalizes a previously known connection between
intersection types and proof-nets. }pmnwww = {dissertation} churchreport = {yes} }article{neergaardjfpta
author = {Peter Møller Neergaard} title = {A Bargain for Intersection Types: A Simple Strong Normalization Proof} journal = {J.~Funct.\ Programming} note = {To appear} pmnwww = {journal} churchreport = {yes} abstract = {This pearl gives a discount proof of the folklore theorem that every
strongly $\beta$-normalizing $\lambda$-term is typable with an
intersection type. (We consider typings that do not use the empty
intersection~$\omega$, which can type any term.) The proof uses the
perpetual reduction strategy, which finds a longest path. This is a
simplification over existing proofs that consider any longest
reduction path. The choice of reduction strategy avoids the need for
weakening or strengthening of type derivations. The proof becomes a
bargain because it works for more intersection type systems, while
being simpler than existing proofs. }techreport{Neergaardbrtcl2004
author = {Peter Møller Neergaard} title = {BC$^{-}_{\epsilon}$: A Recursion-Theoretic Characterization of \textsc{logspace}} institution = {Brandeis University} year = {2004} month = {March} note = {Preliminary version} churchreport = {yes} abstract = { We present BC$^{-}_{\epsilon}$, which is a function algebra that is sound and
complete for \textsc{logspace}. It is based on the novel
recursion-theoretic principle of generalized recursion where the
step length of primitive recursion can be varied at each step. This
allows elegant representations of functions like logarithm and
division. Unlike characterizations found in the literature, there
does not appear to be a way to represent pairs in the
algebra.\par
The soundness proof uses a simulation based on ``computational
amnesia'' where, analogously to tail recursion optimization, a
recursive call replaces its own activation record. Even though the
call is not necessarily a tail call, we recover the full recursion by
repeatedly restarting the computation. }pmnwww = {techreport} }inproceedings{neergaardaplas2004
author = {Peter Møller Neergaard} title = {A Functional Language for Logarithmic Space} crossref = {APLAS2004} year = {2004} pages = {311--326} doi = {http://www.springerlink.com/(qe2jbx55cx2ecx55i0bnjp45)/app/home/contribution.asp?referrer} abstract = { More than being just a tool for expressing algorithms, a
well-designed programming language allows the user to express her
ideas efficiently. The design choices, however, affect the
efficiency of the algorithms written in the languages. It is
therefore important to understand how such choices affect the
expressibility of programming languages.
\par
The paper pursues very low complexity programs by presenting a
first-order function algebra~BC$^{-}_{\varepsilon}$ that captures
exactly \textsc{logspace}, the functions computable in logarithmic
space. This gives insights into the expressiveness of recursion.
\par
The important technical features of
BC$^{-}_{\varepsilon}$ are (1) a separation of
variables into safe and normal variables, where recursion can only
be done over the latter; (2) linearity of the recursive call; and
(3) recursion with a variable step length (course-of-value
recursion). Unlike formulations of \textsc{logspace} via Turing
machines, BC$^{-}_{\varepsilon}$ makes no
references to outside resource measures, e.g., the size of the
memory used. This appears to be the first such characterization of
\textsc{logspace}-computable functions (not just predicates).
\par
The proof that all
BC$^{-}_{\varepsilon}$-programs can be evaluated
in \textsc{logspace} is of separate interest to programmers: it
trades space for time and evaluates recursion with at most one
recursive call without a call stack. }churchreport = {yes} pmnwww = {conference} copyright = {\copyright\ Springer-Verlag; Appears in the
Lecture Notes in Computer Science series} }inproceedings{NeeMaiicfp2004
itrspaper = {yes} author = {Peter Møller Neergaard and Harry G. Mairson} title = {Types, Potency, and Idempotency: Why Nonlinearity and Amnesia Make a Type System Work} year = {2004} abstract = { Useful type inference must be faster than normalization.
Otherwise, you could check safety conditions by running the program.
We analyze the relationship between bounds on normalization and type
inference. We show how the success of type inference is
fundamentally related to the \emph{amnesia} of the type system: the
\emph{nonlinearity} by which all instances of a variable are
constrained to have the same type.
\par
Recent work on \emph{intersection types} has advocated their
usefulness for static analysis and modular compilation. We analyze
system~$\mathbb{I}$ (and some instances of its descendant, system~E), an
intersection type system with a type inference algorithm. Because
system~$\mathbb{I}$ lacks \emph{idempotency}, each occurrence of a variable
requires a distinct type. Consequently, type inference is
equivalent to normalization in \emph{every} single case, and time
bounds on type inference and normalization are identical. Similar
relationships hold for other intersection type systems without
idempotency.
\par
The analysis is founded on an investigation of the relationship
between \emph{linear logic} and intersection types. We show a lockstep
correspondence between normalization and type inference. The latter
shows the promise of intersection types to facilitate static analyses
of varied granularity, but also belies an immense challenge: to add
amnesia to such analysis without losing all of its benefits. }crossref = {ICFP2004} churchreport = {yes} pmnwww = {conference} copyright = {\copyright\ ACM, 2004. This is the author's version of
the work. It is posted here by permission of ACM for
your personal use. Not for redistribution. The
definitive version was published in ACM SIGPLAN
Notices, Volume~39, Issue~9 (September 2004)} doi = {http://portal.acm.org/citation.cfm?doid} }article{Kfo+Wel:TCSB2004v311n13
itrspaper = {yes} churchreport = {yes} darttodo = {Mention this.} author = {Assaf J. Kfoury and J. B. Wells} title = {Principality and Type Inference for Intersection
Types Using Expansion Variables} journal = {Theoret.\ Comput.\ Sci.} volume = {311} number = {1--3} pages = {1--70} xnumbernote = {Elsevier refers to "issues" 1--3, not "number"} year = {2004} doi = {doi:10.1016/j.tcs.2003.10.032} sciencedirect = {http://authors.elsevier.com/sd/article/S0304397503005772} tcsref = {TCS4942} documenturl = {http://www.churchproject.org/reports/Kfo+Wel:TCSB2004v311n13.html} note = {Supersedes \cite{Kfo+Wel:POPL1999}.
For omitted proofs, see the longer
report \cite{Kfo+Wel:PTI2003}} abstract = {Principality of typings is the property that for
each typable term, there is a typing from which all
other typings are obtained via some set of
operations. Type inference is the problem of finding
a typing for a given term, if possible. We define an
intersection type system which has principal typings
and types exactly the strongly normalizable
$\lambda$-terms. More interestingly, every
finite-rank restriction of this system (using
Leivant's first notion of rank) has principal
typings and also has decidable type inference. This
is in contrast to System~F where the finite rank
restriction for every finite rank at 3 and above has
neither principal typings nor decidable type
inference. Furthermore, the notion of principal typings
for our system involves only one operation,
substitution, rather than several operations (not
all substitution-based) as in earlier presentations
of principality for intersection types (without rank
restrictions). In our system the earlier notion of
\emph{expansion} is integrated in the form of
\emph{expansion variables}, which are subject to
substitution as are ordinary variables. A
unification-based type inference algorithm is
presented using a new form of unification,
$\beta$-unification. }inproceedings{Zhu+Xi:APLAS2003
author = {Dengping Zhu and Hongwei Xi} title = {A Typeful and Tagless Representation for XML Documents} booktitle = {Proceedings of the First Asian Symposium on Programming Languages and Systems} year = {2003} month = {November} address = {Beijing, China} pages = {89--104} churchreport = {yes} postscript = {http://cspeople.bu.edu/zhudp/pubs/aplas03.ps} abstract = {When constructing programs to process XML documents, we immediately face
the question as to how XML documents should be represented
internally in the programming language we use. Currently, most
representations for XML documents are typeless in the sense that
the type information of an XML document cannot be reflected in the
type of the representation of the document (if the representation
is assumed to be typed). Though convenient to construct, a
typeless representation for XML documents often makes use of a
large number of representation tags, which not only require some
significant amount of space to store but may also incur numerous
runtime tag checks when the represented documents are processed.
Moreover, with a typeless representation for XML documents, it
becomes difficult or even impossible to statically capture program
invariants that are related to the type information of XML
documents. Building upon our recent work on guarded recursive
datatypes, we present an approach to representing XML documents in
this paper that not only allows the type information of an XML
document to be reflected in the type of the representation of the
document but also significantly reduces the need for
representation tags that are required in typeless representations.
With this approach, we become able to process XML documents in a
typeful manner, thus reaping various well-known software
engineering benefits from the presence of types. }inproceedings{Taha+Ellner+Xi:EMSOFT2003
author = {Walid Taha and Stephan Ellner and Hongwei Xi} title = {Generating Imperative, Heap-Bounded Programs in a Functional Setting} booktitle = {Proceedings of the Third International Conference on Embedded Software} publisher = {Springer-Verlag LNCS 2855} year = {2003} month = {October} address = {Philadelphia, PA} pages = {340--355} churchreport = {yes} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/emsoft03.ps} abstract = {High-level programming languages offer significant expressivity
but provide little or no guarantees about resource utilization.
Resource-bounded languages provide strong guarantees about the
runtime behavior of programs but often lack mechanisms that allow
programmers to write more structured, modular, and reusable
programs. To overcome this basic tension in language design, this
paper advocates taking into account the natural distinction
between the development platform and the deployment platform for
resource-sensitive software. To illustrate this approach, we
develop the metatheory for GeHB, a two-level language in which
first-stage computations can involve arbitrary resource
consumption, but the second stage can only involve functional
programs that do not require new heap allocations. As an example
of such a second-stage language we use the recently proposed
first-order functional language LFPL. LFPL can be compiled
directly to malloc-free, imperative C code. We show that all
generated programs in GeHB can be transformed into well-typed LFPL
programs, thus ensuring that the results established for LFPL are
directly applicable to GeHB. }inproceedings{Xi:SEFM2003
author = {Hongwei Xi} title = {Facilitating Program Verification with Dependent Types} booktitle = {Proceedings of the International Conference on Software Engineering and Formal Methods} year = {2003} month = {September} address = {Brisbane, Australia} pages = {72--81} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/sefm03.ps} churchreport = {yes} abstract = {The use of types in capturing program invariants is overwhelming
in practical programming. The type systems in languages such as
ML and Java scale convincingly to realistic programs but they are
of relatively limited expressive power. In this paper, we show
that the use of a restricted form of dependent types can enable us
to capture many more program invariants such as memory safety
while retaining practical typechecking. The programmer can encode
program invariants with type annotations and then verify these
invariants through static typechecking. Also, the type annotations
can serve as informative program documentation, which is
mechanically verified and can thus be fully trusted. We argue with
realistic examples that this restricted form of dependent types
can significantly facilitate program verification as well as
program documentation. }inproceedings{Chen+Xi:ICFP2003
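As an illustrative aside (not taken from the paper, and not DML itself): the idea of capturing a data-structure invariant in a type, so that violating it is a compile-time error rather than a runtime failure, can be sketched in Rust with type-level naturals. All names here (`Vect`, `cons`, `nil`) are invented for the sketch.

```rust
use std::marker::PhantomData;

// Type-level natural numbers: the list length is encoded in the type.
struct Z;
struct S<N>(PhantomData<N>);

// A vector whose type parameter N tracks its length at compile time.
struct Vect<T, N> {
    items: Vec<T>,
    _len: PhantomData<N>,
}

fn nil<T>() -> Vect<T, Z> {
    Vect { items: Vec::new(), _len: PhantomData }
}

fn cons<T, N>(x: T, mut v: Vect<T, N>) -> Vect<T, S<N>> {
    v.items.insert(0, x);
    Vect { items: v.items, _len: PhantomData }
}

// `head` exists only when the length type is S<N>, i.e. non-empty:
// calling it on an empty vector is rejected by the type checker.
impl<T, N> Vect<T, S<N>> {
    fn head(&self) -> &T {
        &self.items[0]
    }
}

fn main() {
    let v = cons(1, cons(2, nil()));
    println!("{}", v.head()); // prints 1
    // nil::<i32>().head(); // would not type-check: length is Z
}
```

The indexing here is far weaker than DML's dependent types (no arithmetic on indices), but it conveys the same payoff: the invariant is checked statically, so no runtime check is needed.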
author = {Chiyan Chen and Hongwei Xi} title = {MetaProgramming through Typeful Code Representation} booktitle = {Proceedings of the Eighth ACM SIGPLAN International Conference on Functional Programming} year = {2003} month = {August} address = {Uppsala, Sweden} pages = {275286} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/icfp03.ps} churchreport = {yes} abstract = { }By allowing the programmer to write code that can generate code at
runtime, meta-programming offers a powerful approach to program
construction. For instance, meta-programming can often be employed
to enhance program efficiency and facilitate the construction of
generic programs. However, meta-programming, especially in an
untyped setting, is notoriously error-prone. In this paper, we
aim at making meta-programming less error-prone by providing a
type system to facilitate the construction of correct
meta-programs. We first introduce some code constructors for
constructing typeful code representation in which program
variables are replaced with de Bruijn indices, and then formally
demonstrate how such typeful code representation can be used to
support meta-programming. The main contribution of the paper lies
in the recognition and then the formalization of a novel approach to typed
meta-programming that is practical, general, and flexible. }inproceedings{Chen+Xi:PEPM2003
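For readers unfamiliar with typeful code representation, its flavor can be approximated in Rust without GADTs: each object-term constructor is a host type, variables are de Bruijn-style projections from a nested-tuple environment, and the object program's type appears as an associated type of its representation. This is a rough illustrative sketch with invented names (`VZ`, `VS`, `Let`), not the paper's encoding.

```rust
// A typeful object-program representation: an object term is a Rust
// value whose type implements Exp<Env>, and the object term's type
// shows up as the associated type Out.
trait Exp<Env> {
    type Out;
    fn eval(&self, env: &Env) -> Self::Out;
}

// De Bruijn index 0: the innermost binding of the environment.
struct VZ;
impl<Rest, T: Clone> Exp<(Rest, T)> for VZ {
    type Out = T;
    fn eval(&self, env: &(Rest, T)) -> T { env.1.clone() }
}

// Weakening: skip the innermost binding.
struct VS<V>(V);
impl<Rest, S, V: Exp<Rest>> Exp<(Rest, S)> for VS<V> {
    type Out = V::Out;
    fn eval(&self, env: &(Rest, S)) -> V::Out { self.0.eval(&env.0) }
}

// Integer literal.
struct Lit(i32);
impl<Env> Exp<Env> for Lit {
    type Out = i32;
    fn eval(&self, _env: &Env) -> i32 { self.0 }
}

// Addition: both subterms must have object type int.
struct Add<A, B>(A, B);
impl<Env, A: Exp<Env, Out = i32>, B: Exp<Env, Out = i32>> Exp<Env> for Add<A, B> {
    type Out = i32;
    fn eval(&self, env: &Env) -> i32 { self.0.eval(env) + self.1.eval(env) }
}

// A "let" binder (used instead of full Lam/App to keep the sketch small):
// the body is checked in the environment extended with the bound value.
struct Let<E1, E2>(E1, E2);
impl<Env: Clone, E1: Exp<Env>, E2: Exp<(Env, E1::Out)>> Exp<Env> for Let<E1, E2> {
    type Out = E2::Out;
    fn eval(&self, env: &Env) -> E2::Out {
        let v = self.0.eval(env);
        self.1.eval(&(env.clone(), v))
    }
}

fn main() {
    // let x = 3 in let y = 4 in x + y
    let prog = Let(Lit(3), Let(Lit(4), Add(VS(VZ), VZ)));
    println!("{}", prog.eval(&())); // prints 7
}
```

An ill-scoped or ill-typed object term, such as `Add(VZ, Lit(1))` evaluated in an empty environment, simply fails to type-check in the host language, which is the property the paper's representation provides.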
author = {Chiyan Chen and Hongwei Xi} title = {Implementing Typeful Program Transformation} booktitle = {Proceedings of ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation} year = {2003} month = {June} churchreport = {yes} address = {San Diego, CA} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/pepm03.ps} abstract = {The notion of program transformation is ubiquitous in programming
language studies on interpreters, compilers, partial evaluators,
etc. In order to implement a program transformation, we need to
choose a representation in the meta language, that is, the
programming language in which we construct programs, for
representing object programs, that is, the programs in the object
language on which the program transformation is to be performed.
In practice, most representations chosen for typed object programs
are typeless in the sense that the type of an object program
cannot be reflected in the type of its representation. This is
unsatisfactory as such typeless representations make it impossible
to capture in the type system of the meta language various
invariants in a program transformation that are related to the
types of object programs. In this paper, we propose an approach
to implementing program transformations that makes use of a
first-order typeful program representation formed in Dependent ML
(DML), where the type of an object program as well as the types of
the free variables in the object program can be reflected in the
type of the representation of the object program. We introduce
some programming techniques needed to handle this typeful program
representation, and then present an implementation of a CPS
transform function where the relation between the type of an
object program and that of its CPS transform is captured in the
type system of DML. In a broader context, we claim to have taken a
solid step along the line of research on constructing certifying
compilers. }inproceedings{Xi:SBLP2003
author = {Hongwei Xi} title = {Dependently Typed Pattern Matching} booktitle = {Proceedings of Simposio Brasileiro de Linguagens de Programacao} year = {2003} month = {May} address = {Ouro Preto, Brazil} churchreport = {yes} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/sblp03.ps} abstract = {The mechanism for declaring datatypes to model data structures in
functional programming languages such as Standard ML and Haskell
can offer both convenience in programming and clarity in code.
With the introduction of dependent datatypes in DML, the
programmer can model data structures with more accuracy, thus
capturing more program invariants. In this paper, we study some
practical aspects of dependent datatypes that affect both
typechecking and compiling pattern matching. The results, which
have already been tested, demonstrate that dependent datatypes can
not only offer various programming benefits but also lead to
performance gains, yielding a concrete case where safer programs
run faster. }unpublished{Wel+Plu+Kam:DMP2003
author = {J. B. Wells and Detlef Plump and Fairouz Kamareddine} title = {Diagrams for Meaning Preservation} note = {Long version of \cite{Wel+Plu+Kam:RTA2003}} month = {April} year = {2003} churchreport = {yes} abstract = {This paper presents an abstract framework and
multiple diagram-based methods for proving
\emph{meaning preservation}, i.e., that all rewrite
steps of a rewriting system preserve the meaning
given by an operational semantics based on a
rewriting strategy. While previous rewriting-based
methods have generally needed the treated rewriting
system as a whole to have such properties as, e.g.,
\emph{confluence}, \emph{standardization}, and/or
\emph{termination} or \emph{boundedness} of
\emph{developments}, our methods can work when
\emph{all} of these conditions fail, and thus can
handle more rewriting systems. We isolate the new
\emph{lift/project with termination} diagram as the
key proof idea and show that previous
rewriting-based methods (Plotkin's method based on
\emph{confluence} and \emph{standardization} and
Machkasova and Turbak's method based on distinct
\emph{lift} and \emph{project} properties)
implicitly use this diagram. Furthermore, our
framework and proof methods help reduce the proof
burden substantially by, e.g., supporting separate
treatment of partitions of the rewrite steps,
needing only \emph{elementary} diagrams for rewrite
step interactions, excluding many rewrite step
interactions from consideration, needing weaker
termination properties, and providing generic
support for using developments in combination with
any method. }article{Xi:JUCS2003
author = {Hongwei Xi} title = {Dependently Typed Pattern Matching} journal = {Journal of Universal Computer Science} volume = {9} number = {8} year = {2003} pages = {851--872} churchreport = {yes} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/jucs03.ps} abstract = {The mechanism for declaring datatypes to model data structures in
programming languages such as Standard ML and Haskell can offer
both convenience in programming and clarity in code. With the
introduction of dependent datatypes in DML, the programmer can
model data structures with more accuracy, thus capturing more
program invariants. In this paper, we study some practical
aspects of dependent datatypes that affect both typechecking and
compiling pattern matching. The results, which have already been
tested, demonstrate that dependent datatypes can not only offer
various programming benefits but also lead to performance gains,
yielding a concrete case where safer programs run faster. }inproceedings{Xi+Chen+Chen:POPL2003
author = {Hongwei Xi and Chiyan Chen and Gang Chen} title = {Guarded Recursive Datatype Constructors} booktitle = {Proceedings of the 30th ACM SIGPLAN Symposium on Principles of Programming Languages} year = {2003} month = {January} address = {New Orleans} pages = {224--235} churchreport = {yes} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/popl03.ps} abstract = {We introduce a notion of guarded recursive (g.r.) datatype
constructors, generalizing the notion of recursive datatypes in
functional programming languages such as ML and Haskell. We
address both theoretical and pragmatic issues resulting from this
generalization. On one hand, we design a type system to formalize
the notion of g.r. datatype constructors and then prove the
soundness of the type system. On the other hand, we present some
significant applications (e.g., implementing objects, implementing
staged computation, etc.) of g.r. datatype constructors, arguing
that g.r. datatype constructors can have far-reaching consequences
in programming. The main contribution of the paper lies in the
recognition and then the formalization of a programming notion
that is of both theoretical interest and practical use. }inproceedings{Wel+Plu+Kam:RTA2003
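Guarded recursive datatype constructors are an ancestor of what functional languages now call GADTs: each constructor's result type is "guarded" by a specific instantiation of the type parameter, so pattern matching refines types. As a hedged illustration only (not the paper's formalism), the same discipline can be mimicked in Rust with one struct per constructor and a trait whose associated type plays the role of the guarded index:

```rust
// Each object-language expression carries its object type as the
// associated type T; ill-typed object terms cannot be built at all.
trait Expr {
    type T;
    fn eval(&self) -> Self::T;
}

struct IntLit(i32);
impl Expr for IntLit {
    type T = i32;
    fn eval(&self) -> i32 { self.0 }
}

struct BoolLit(bool);
impl Expr for BoolLit {
    type T = bool;
    fn eval(&self) -> bool { self.0 }
}

// Addition is "guarded": both operands must have object type int.
struct Add<A, B>(A, B);
impl<A: Expr<T = i32>, B: Expr<T = i32>> Expr for Add<A, B> {
    type T = i32;
    fn eval(&self) -> i32 { self.0.eval() + self.1.eval() }
}

// The condition must be boolean; both branches must agree on one type.
struct If<C, A, B>(C, A, B);
impl<C: Expr<T = bool>, A: Expr, B: Expr<T = A::T>> Expr for If<C, A, B> {
    type T = A::T;
    fn eval(&self) -> A::T {
        if self.0.eval() { self.1.eval() } else { self.2.eval() }
    }
}

fn main() {
    let e = If(BoolLit(true), Add(IntLit(1), IntLit(2)), IntLit(0));
    println!("{}", e.eval()); // prints 3
    // If(IntLit(1), IntLit(2), IntLit(3)) would be rejected at compile
    // time, since IntLit's object type is i32, not bool.
}
```

As in the paper's evaluator examples, `eval` needs no runtime tags and no failure cases: the guarded constructor types make ill-typed object programs unrepresentable.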
author = {J. B. Wells and Detlef Plump and Fairouz Kamareddine} title = {Diagrams for Meaning Preservation} note = {A long version is \cite{Wel+Plu+Kam:DMP2003}} crossref = {RTA2003} year = {2003} pages = {88--106} churchreport = {yes} abstract = {This paper presents an abstract framework and
multiple diagram-based methods for proving
\emph{meaning preservation}, i.e., that all rewrite
steps of a rewriting system preserve the meaning
given by an operational semantics based on a
rewriting strategy. While previous rewriting-based
methods have generally needed the treated rewriting
system as a whole to have such properties as, e.g.,
\emph{confluence}, \emph{standardization}, and/or
\emph{termination} or \emph{boundedness} of
\emph{developments}, our methods can work when
\emph{all} of these conditions fail, and thus can
handle more rewriting systems. We isolate the new
\emph{lift/project with termination} diagram as the
key proof idea and show that previous
rewriting-based methods (Plotkin's method based on
\emph{confluence} and \emph{standardization} and
Machkasova and Turbak's method based on distinct
\emph{lift} and \emph{project} properties)
implicitly use this diagram. Furthermore, our
framework and proof methods help reduce the proof
burden substantially by, e.g., supporting separate
treatment of partitions of the rewrite steps,
needing only \emph{elementary} diagrams for rewrite
step interactions, excluding many rewrite step
interactions from consideration, needing weaker
termination properties, and providing generic
support for using developments in combination with
any method. }misc{Chen:JLC2003
author = {Gang Chen} title = {Soundness of Coercion in the Calculus of Constructions} churchreport = {yes} year = {2003} keywords = {Subtyping, coercion, Calculus of Constructions} xpostscript = {http://www.churchproject.org/reports/electronic/Chen:JLC2003.ps.gz} xurl = {http://www.churchproject.org/reports/electronic/Chen:JLC2003.pdf.gz} xpdf = {http://www.churchproject.org/reports/electronic/Chen:JLC2003.pdf} abstract = { We present a coercive subtyping system for the calculus of constructions. The proposed system $\lambda C_{\leq}^{co}$ is obtained essentially by adding coercions and $\eta$-conversion to $\lambda C_{\leq}$~\cite{Chen97a}, which is a subtyping extension to the calculus of constructions without coercions. We prove that a coercion $c : A \leq B$ is semantically sound. That is, 1) $c$ is a term of type $A \to B$; 2) $c$ behaves like an identity function. This development follows the spirit of the semantical interpretation of subtyping in \cite{LMS95,LMS98}. }note = {To appear in Journal of Logic and Computation} }inproceedings{Chen:POPL2003
author = {Gang Chen} title = {Coercive Subtyping for the Calculus of Constructions
(extended abstract)} churchreport = {yes} crossref = {POPL2003} keywords = {Subtyping, coercion, Calculus of Constructions} year = {2003} abstract = {We present a coercive subtyping system for the calculus of
constructions. The proposed system $\lambda C_\leq^co$ is obtained
essentially by adding coercions and $\eta$conversion to
$\lambda C_\leq$\citeChen97a, which is a subtyping
extension to the calculus of constructions without
coercions. Following \citeLMS95,LMS98, the coercive
subtyping $c : A \leq B$ is understood as a special case
of typing in arrow type $c : A \to B$ such that the term
$c$ behaves like an identity function. We prove that,
with respect to this semantic interpretation, the
proposed coercive subtyping system is sound and complete,
and that this completeness leads to transitivity
elimination. In addition, we establish the equivalence
between $\lambda C_\leq^{co}$ and $\lambda C_{\beta\eta}$; this fact implies that
$\lambda C_\leq^{co}$ has confluence, subject reduction, and strong
normalization. We propose a formalization of the coercion
inference problem and present a sound and complete
coercion inference algorithm. }unpublished{NeeMairankbndinterta
author = {Peter Møller Neergaard and Harry G. Mairson} title = {Rank Bounded Intersection: Types, Potency, and
Idempotency} abstract = { Intersection type systems realize a \emph{finite}
polymorphism where different types for a term are
itemized explicitly. We analyze System~$\mathbb{I}$,
a rank-bounded intersection type system where
intersection is not \emph{associative}, commutative,
or \emph{idempotent} (ACI), but includes a
substitution mechanism employing \emph{expansion
variables} that facilitates modular program
composition and flow analysis. This type system is
used in a prototype intersection type compiler for
the Church Project. We prove that the problem of
type inference is exactly as hard as the problem of
normalization: the worst-case cost of both is an
elementary function, where the iterated exponential
depends on the rank. The key to these results is
that simply-typed terms must be linear without ACI,
but have the usual non-elementary power with
ACI. Further, type inference is \emph{always}
synonymous with normalization: the cost of computing
the principal typing of any term is exactly the cost
of computing its normal form. These results do not
hold when AC, and particularly I, is added. }note = {This paper is obsolete and has been replaced with \cite{NeeMaiicfp2004}} year = {2003} churchreport = {yes} }inproceedings{Haa+Wel:ESOP2003
itrspaper = {yes} author = {Christian Haack and J. B. Wells} title = {Type Error Slicing in Implicitly Typed Higher-Order
Languages} year = {2003} crossref = {ESOP2003} pages = {284--301} churchreport = {yes} note = {Superseded by \cite{Haa+Wel:SCP2004v50}} dartreport = {yes} xdartreport = {not a DART paper, although its journal version is part of a deliverable} abstract = { Previous methods have generally identified the
location of a type error as a particular program
point or the program subtree rooted at that
point. We present a new approach that identifies the
location of a type error as a set of program
points (a \emph{slice}) all of which are necessary
for the type error. We describe algorithms for
finding minimal type error slices for implicitly typed
higher-order languages like Standard ML. }unpublished{Kfo+Wel:PTI2003
author = {Assaf J. Kfoury and J. B. Wells} title = {Principality and Type Inference for Intersection
Types Using Expansion Variables} churchreport = {yes} darttodo = {Mention this.} note = {Supersedes \cite{Kfo+Wel:POPL1999}} month = {August} year = {2003} abstract = { Principality of typings is the property that for
each typable term, there is a typing from which all
other typings are obtained via some set of
operations. Type inference is the problem of finding
a typing for a given term, if possible. We define an
intersection type system which has principal typings
and types exactly the strongly normalizable
$\lambda$-terms. More interestingly, every
finite-rank restriction of this system (using
Leivant's first notion of rank) has principal
typings and also has decidable type inference. This
is in contrast to System~F, where the finite-rank
restriction at every rank 3 and above has
neither principal typings nor decidable type
inference. Furthermore, the notion of principal typings
for our system involves only one operation,
substitution, rather than several operations (not
all substitution-based) as in earlier presentations
of principality for intersection types (without rank
restrictions). In our system the earlier notion of
\emph{expansion} is integrated in the form of
\emph{expansion variables}, which are subject to
substitution as are ordinary variables. A
unification-based type inference algorithm is
presented using a new form of unification,
$\beta$-unification. }unpublished{NeeMaihowlightissaferecLICS03
author = {Peter Møller Neergaard and Harry G. Mairson} title = {How Light is Safe Recursion? Translations Between Logics of Polynomial Time} note = {Unpublished note} year = {2003} }unpublished{NeeMaihowlightissaferec
crossref = {NeeMaihowlightissaferecLICS03} year = {2003} abstract = { We investigate the simulation of Bellantoni and
Cook's function algebra BC by Girard's Light Linear
Logic, as amended by Asperti (Light Affine Logic, LAL). Both of these
languages characterize exactly the polynomial time
functions. Given a $\textsc{ptime}$ function $f$
described by a BC program $\phi_f$, how can we
translate $\phi_f$ into LAL? \par The best known
translation, due to Murawski and Ong, compiles only
the \emph{linear} fragment $\textsc{bc}^-$, where
\emph{safe} variables cannot be shared. We show that
this fragment can be evaluated in
$\textsc{logspace}$. Next, we prove that extensions
of their approach to the full BC algebra, as well as
any translation that interprets integers and
primitive recursion via their standard, System
F-inspired inductive type coding, cannot provide a
correct translation into LAL. Finally, we provide a
correct translation, based on generalizing Turing
machine simulations into an SECD compiler, realized
in LAL. We further elaborate this translation so
that it is compositional on the BC combinators. }churchreport = {yes} pmnwww = {conference} }inproceedings{Amt+Mul:IpRA2002
author = {Torben Amtoft and Robert Muller} title = {Inferring Annotated Types for Interprocedural
Register Allocation with Constructor Flattening} booktitle = {The 2003 ACM SIGPLAN International Workshop on Types
in Language Design and Implementation (TLDI'03)} location = {New Orleans, Louisiana, USA} pages = {86--97} year = {2003} month = {January} publisher = {ACM Press} churchreport = {yes} abstract = { We introduce an annotated type system for a
compiler intermediate language. The type system is designed to
support interprocedural register allocation and the
representation of tuples and variants directly in the register
file. We present an algorithm for assigning annotations and prove
its soundness with respect to the type system. }article{NeeSorIAC2002v178n1
crossref = {Nee+Sor:IACta} pmnwww = {journal} churchreport = {yes} year = {2002} abstract = { For a notion of reduction in a $\lambda$-calculus
one can ask whether a term satisfies
\emph{conservation} and \emph{uniform
normalization}. Conservation means that single-step
reductions of the term preserve infinite reduction
paths from the term. Uniform normalization means
that either the term will have no reduction paths
leading to a normal form, or all reduction paths
will lead to a normal form. \par In the classical
\emph{conservation theorem} for $\Lambda^I$, the
distinction between the two notions is not clear:
uniform normalization implies conservation, and
conservation also implies uniform normalization. The
reason for this is that $\Lambda^I$ is closed under
reduction, due to the fact that reductions never
erase terms in $\Lambda^I$. More generally, for
non-erasing reductions, the two notions are
equivalent on a set closed under reduction. However,
when turning to erasing reductions the distinction
becomes important as conservation no longer implies
uniform normalization. \par This paper presents a
new technique for finding uniformly normalizing
subsets of a $\lambda$-calculus. This is done by
combining a syntactic and a semantic criterion. The
technique is demonstrated by several
applications. The technique is used to present a new
uniformly normalizing subset of the pure
$\lambda$-calculus; this subset is a superset
of~$\Lambda^I$ and thus contains erasing
$\mathbf{K}$-redexes. The technique is also used to
prove strong normalization from weak normalization
of the simply typed $\lambda$-calculus extended with
pairs; this is an extension of techniques developed
recently by Sørensen and Xi. Before presenting
the technique the paper presents a simple proof of a
slightly weaker form of the characterization of
perpetual redexes by Bergstra and Klop; this is a
stepping stone for the later applications of the
technique. }copyright = {\copyright{} This paper is Copyright Elsevier.} }article{Nee+Sor:IACta
author = {Peter Møller Neergaard and Morten Heine Bjørnlund Sørensen} title = {Conservation and Uniform Normalization in $\lambda$-calculi with Erasing Reductions} journal = {Information and Computation} volume = {178} number = {1} pages = {149--179} month = {October} year = {2002} }inproceedings{NeeMaiLISICC02
author = {Peter Møller Neergaard and Harry G. Mairson} title = {LAL Is Square: Representation and Expressiveness in Light Affine Logic} crossref = {ICC2002} year = {2002} churchreport = {yes} pmnwww = {conference} abstract = { We focus on how the choice of input/output representation has a crucial impact on the expressiveness of so-called ``logics of polynomial
time.'' Our analysis illustrates this dependence in
the context of \emph{Light Affine Logic} (LAL),
which is both a restricted version of Linear Logic,
and a primitive functional programming language with
restricted sharing of arguments. By slightly
relaxing representation conventions, we derive
doubly-exponential expressiveness bounds for this
``logic of polynomial time.'' We emphasize that
\emph{squaring} is the unifying idea that relates
upper bounds on cut elimination for LAL with lower
bounds on representation. Representation issues
arise in the simulation of
\textsc{dtime}$(2^{2^n})$, where we construct a \emph{uniform}
family of proof nets encoding Turing
Machines; specifically, the dependence on $n$ only
affects the number of enclosing \emph{boxes}. A
related technical improvement is the simulation of
\textsc{dtime}$(n^k)$ in depth~$O(\log k)$ LAL
proof nets. The resulting upper bounds on cut
elimination then satisfy the properties of a
\emph{first-class} polynomial Turing Machine
simulation, where there is a fixed polynomial
slowdown in the simulation of any polynomial
computation. }article{Xi:JHOSL2002
author = {Hongwei Xi} title = {Dependent Types for Program Termination Verification} journal = {Journal of Higher-Order and Symbolic Computation} year = {2002} month = {October} volume = {15} pages = {91--131} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/HOSCdmlterm.ps} churchreport = {yes} abstract = { Program termination verification is a challenging research subject
of significant practical importance. While there is already a rich
body of literature on this subject, it is still undeniably a
difficult task to design a termination checker for a realistic
programming language that supports general recursion. In this
paper, we present an approach to program termination verification
that makes use of a form of dependent types developed in Dependent
ML (DML), demonstrating a novel application of such dependent
types to establishing a liveness property. We design a type system
that enables the programmer to supply metrics for verifying
program termination and prove that every well-typed program in
this type system is terminating. We also provide realistic
examples, which are all verified in a prototype implementation, to
support the effectiveness of our approach to program termination
verification as well as its unobtrusiveness to programming. The
main contribution of the paper lies in the design of an approach
to program termination verification that smoothly combines types
with metrics, yielding a type system capable of guaranteeing
program termination that supports a general form of recursion
(including mutual recursion), higherorder functions, algebraic
datatypes, and polymorphism. }inproceedings{Xi:ASIAPEPM2002
author = {Hongwei Xi} title = {Unifying Object-Oriented Programming with Typed Functional Programming} booktitle = {Proceedings of ASIAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (ASIA-PEPM)} year = {2002} month = {September} pages = {117--125} address = {Aizu-Wakamatsu, Japan} churchreport = {yes} postscript = {http://www.cs.bu.edu/~hwxi/academic/papers/asiapepm02.ps} abstract = { The wide practice of object-oriented programming in current
software construction is evident. Despite extensive studies on
typing programming objects, it is still undeniably a challenging
research task to design a type system for object-oriented
programming that is both effective in capturing program errors and
unobtrusive to program construction. In this paper, we present a
novel approach to typing objects that makes use of a recently
invented notion of guarded dependent datatypes. We show that our
approach can address various difficult issues (e.g., handling
``self'' type, typing binary methods, etc.) in a simple and
natural type-theoretical manner, remedying the deficiencies in
many existing approaches to typing objects. }inproceedings{Chen:PPDP2002
author = {Gang Chen} title = {Full Integration of Subtyping and If-expression} crossref = {PPDP2002} year = {2002} keywords = {Subtyping, If-expression, Java, Programming Language} churchreport = {yes} abstract = { The combination of full subsumption and conditional
expression is a challenging problem, because in such a
system, a term might not have a type which is a
representative of all types of the term. Therefore, the
traditional type checking technique fails. Due to such a
difficulty, Java's typing rule for if-expressions uses only
a restricted form of subtyping. In this paper, we propose
a type system which includes both of the above mentioned
features and enjoys decidable type checking. We also show
that the system has the subject reduction property. It is
expected that this technique could be used to improve the
type system of Java with full subsumption. }techreport{Amt+Wel:MPD2002
author = {Torben Amtoft and J. B. Wells} title = {Mobile Processes with Dependent Communication Types
and Singleton Types for Names and Capabilities} month = {December} year = {2002} institution = {Kansas State University, Department of Computing and
Information Sciences} number = {2002-3} xnote = {Torben has replaced the file supposedly linked to as
KSU CIS TR 2002-3 with a later version which was
submitted to SAS '03. The original KSU CIS TR
2002-03 is no longer available!!! It might be
possible to reconstruct something like it via CVS
time travel, but it would be a pain.
Torben, 02/07/04: the link now again points to
the original report!!} dartreport = {yes} churchreport = {yes} abstract = { There are many calculi for reasoning about
concurrent communicating processes which have
locations and are mobile. Examples include the
original Ambient Calculus and its many variants, the
Seal Calculus, the MR-calculus, the M-calculus,
etc. It is desirable to use such calculi to describe
the behavior of mobile agents. It seems reasonable
that mobile agents should be able to follow
non-predetermined paths and to carry
non-predetermined types of data from location to
location, collecting and delivering this data using
communication primitives. Previous type systems for
ambient calculi make this difficult or impossible to
express, because these systems (if they handle
communication at all) have always globally mapped
each ambient name to a type governing the type of
values that can be communicated locally or with
adjacent locations, and this type cannot depend on
where the ambient has traveled. \par We present a
new type system where there are no global
assignments of types to ambient names. Instead, the
type of an ambient process $P$ not only indicates
what can be locally communicated but also gives an
upper bound on the possible ambient nesting shapes
of any process $P'$ to which $P$ can evolve, as well
as the possible capabilities and names that can be
exhibited or communicated at each location. Because
these shapes can depend on which capabilities and
names are actually communicated, the types support
this with explicit dependencies on
communication. This system is thus the first type
system for an ambient calculus which provides type
polymorphism of the kind that is usually present in
polymorphic type systems for the
$\lambda$-calculus. }inproceedings{Carlier:ITRS2002
itrspaper = {yes} author = {Sébastien Carlier} title = {Polar Type Inference with Intersection Types and $\omega$} crossref = {ITRS2002} year = {2002} postscript = {http://www.macs.hw.ac.uk/~sebc/carlierptiio.ps.gz} churchreport = {yes} noteremark = {The note field is blank to suppress the note field from ITRS2002.} note = {} abstract = { We present a type system featuring intersection
types and $\omega$, a type constant which is
assigned to unused terms. We exploit and extend the
technology of expansion variables from the recently
developed System I, with which we believe our system
shares many interesting properties, such as strong
normalization, principal typings, and compositional
analysis. Our presentation emphasizes a polarity
discipline and shows its benefits. We syntactically
distinguish positive and negative types, and give
them different interpretations. We take the point of
view that the interpretation of a type is intrinsic
to it, and should not change implicitly when it
appears at the opposite polarity. Our system is the
result of a process which started with an extension
of Trevor Jim's Polar Type System. }inproceedings{Kfo+Was+Wel:ITRS2002
itrspaper = {yes} author = {Assaf J. Kfoury and Geoff Washburn and J. B. Wells} title = {Implementing Compositional Analysis Using
Intersection Types With Expansion Variables} crossref = {ITRS2002} year = {2002} noteremark = {The note field is blank to suppress the note field from ITRS2002.} note = {} dartreport = {yes} churchreport = {yes} nonexistentdvi = {http://www.churchproject.org/reports/electronic/Kfo+Was+Wel:ITRS2002.dvi} elsevierurlb = {http://www.elsevier.nl/gejng/31/29/23/125/51/show/Products/notes/index.htt} elsevierurla = {http://www.elsevier.nl/locate/entcs/volume70.html} abstract = { A program analysis is \emph{compositional} when the analysis
result for a particular program fragment is obtained solely from
the results for its immediate subfragments via some composition
operator. This means the subfragments can be analyzed
independently in any order. Many commonly used program analysis
techniques (in particular, most abstract interpretations and most
uses of the Hindley/Milner type system) are not compositional and
require the entire text of a program for sound and complete
analysis. \par System I is a recent type system for the pure
$\lambda$-calculus with intersection types and the new technology
of expansion variables. System I supports compositional analysis
because it has the \emph{principal typings} property and an
algorithm based on the new technology of $\beta$-unification has
been developed that finds these principal typings. In addition,
for each natural number $k$, typability in the rank-$k$
restriction of System I is decidable, so a complete and
terminating analysis algorithm exists for the rank-$k$
restriction. \par This paper presents new understanding that has
been gained from working with multiple implementations of System I
and $\beta$-unification-based analysis algorithms. The previous
literature on System I presented the type system in a way that
helped in proving its more important theoretical properties, but
was not as easy for implementers to follow as it could be. This
paper provides a presentation of many aspects of System I that
should be clearer as well as a discussion of important
implementation issues. }article{Amt+Kfo+Per:CL2002
author = {Torben Amtoft and Assaf J. Kfoury and Santiago
M. Pericas-Geertsen} title = {Orderly Communication in the Ambient Calculus} journal = {Computer Languages} publisher = {Elsevier Science} year = {2002} volume = {28} pages = {29--60} churchreport = {yes} abstract = { The Ambient Calculus (henceforth, {\bf AC}) was developed by Cardelli
and Gordon as a formal framework to study issues of mobility and
migrant code. We present a type system for {\bf AC} that allows
the type of exchanged data within the same ambient to vary over
time. Our type system assigns what we call \emph{behaviors} to
processes; a denotational semantics of behaviors is proposed, here
called \emph{trace semantics}, underlying much of the remaining
analysis. We state and prove a Subject Reduction property for our
typed version of {\bf AC}. Based on techniques borrowed from
finite automata theory, type checking of fully type-annotated
processes is shown to be decidable. We show that the typed
version of {\bf AC} originally proposed by Cardelli and Gordon can
be naturally embedded into our typed version of {\bf AC}. }article{Wel+Dim+Mul+Tur:JFP2002v12n3
itrspaper = {yes} author = {J. B. Wells and Allyn Dimock and Robert Muller and
Franklyn Turbak} title = {A Calculus with Polymorphic and Polyvariant Flow
Types} journal = {J.~Funct.\ Programming} year = {2002} volume = {12} number = {3} month = {May} pages = {183--227} note = {Supersedes \cite{Wel+Dim+Mul+Tur:TAPSOFT1997}} churchreport = {yes} documenturl = {http://www.churchproject.org/reports/Wel+Dim+Mul+Tur:JFP2002v12n3.html} abstract = { We present $\lambda^{\mathrm{CIL}}$, a typed
$\lambda$-calculus which serves as the foundation
for a typed intermediate language for optimizing
compilers for higher-order polymorphic programming
languages. The key innovation of
$\lambda^{\mathrm{CIL}}$ is a novel formulation of
intersection and union types and flow labels on both
terms and types. These \emph{flow types} can encode
polyvariant control and data flow information within
a polymorphically typed program representation. Flow
types can guide a compiler in generating customized
data representations in a strongly typed setting.
Since $\lambda^{\mathrm{CIL}}$ enjoys confluence,
standardization, and subject reduction properties,
it is a valuable tool for reasoning about programs
and program transformations. }inproceedings{Wells:ICALP2002
itrspaper = {yes} author = {J. B. Wells} title = {The Essence of Principal Typings} crossref = {ICALP2002} year = {2002} documenturl = {http://www.churchproject.org/reports/Wells:ICALP2002.html} dartreport = {yes} churchreport = {yes} pages = {913--925} springerurl = {http://link.springer.de/link/service/series/0558/bibs/2380/23800913.htm} abstract = { Let $S$ be some type system. A \emph{typing} in $S$
for a typable term $M$ is the collection of all of
the information other than $M$ which appears in the
final judgement of a proof derivation showing that
$M$ is typable. For example, suppose there is a
derivation in $S$ ending with the judgement
$A \vdash M \mathrel{:} \tau$ meaning that $M$ has
result type $\tau$ when assuming the types of free
variables are given by $A$. Then $(A,\tau)$ is a
typing for $M$.
\par
A \emph{principal typing} in $S$ for a term $M$ is a
typing for $M$ which somehow represents all other
possible typings in $S$ for $M$. It is important not
to confuse this notion with the weaker notion of
\emph{principal type} often mentioned in connection
with the Hindley/Milner type system. Previous
definitions of principal typings for specific type
systems have involved various syntactic operations
on typings such as \emph{substitution} of types for
type variables, \emph{expansion}, \emph{lifting},
etc.
\par
This paper presents a new general definition of
principal typings which does not depend on the
details of any particular type system. This paper
shows that the new general definition correctly
generalizes previous system-dependent definitions.
This paper explains why the new definition is the
right one. Furthermore, the new definition is used
to prove that certain polymorphic type systems using
$\forall$-quantifiers, namely System F and the
Hindley/Milner system, do not have principal
typings. }unpublished{Wells:EPT2002
itrspaper = {yes} author = {J. B. Wells} title = {The Essence of Principal Typings} documenturl = {http://www.churchproject.org/reports/Wells:EPT2002.html} pdf = {http://www.macs.hw.ac.uk/~jbw/papers/Wells:TheEssenceofPrincipalTypings:slightlylonger.pdf} dartreport = {yes} churchreport = {yes} year = {2002} note = {A longer version of \cite{Wells:ICALP2002} with a
3-page appendix with extra proofs, a page of extra
discussion of non-related work, and other minor
changes} abstract = { Let $S$ be some type system. A \emph{typing} in $S$
for a typable term $M$ is the collection of all of
the information other than $M$ which appears in the
final judgement of a proof derivation showing that
$M$ is typable. For example, suppose there is a
derivation in $S$ ending with the judgement
$A \vdash M \mathrel{:} \tau$ meaning that $M$ has
result type $\tau$ when assuming the types of free
variables are given by $A$. Then $(A,\tau)$ is a
typing for $M$.
\par
A \emph{principal typing} in $S$ for a term $M$ is a
typing for $M$ which somehow represents all other
possible typings in $S$ for $M$. It is important not
to confuse this notion with the weaker notion of
\emph{principal type} often mentioned in connection
with the Hindley/Milner type system. Previous
definitions of principal typings for specific type
systems have involved various syntactic operations
on typings such as \emph{substitution} of types for
type variables, \emph{expansion}, \emph{lifting},
etc.
\par
This paper presents a new general definition of
principal typings which does not depend on the
details of any particular type system. This paper
shows that the new general definition correctly
generalizes previous system-dependent definitions.
This paper explains why the new definition is the
right one. Furthermore, the new definition is used
to prove that certain polymorphic type systems using
$\forall$-quantifiers, namely System F and the
Hindley/Milner system, do not have principal
typings. }article{Wel+Haa:IACta
itrspaper = {yes} darttodo = {Mention this.} author = {J. B. Wells and Christian Haack} title = {Branching Types} journal = {Inform.\ & Comput.} year = {200X} xvolume = {} xnumber = {} xpages = {} xmonth = {} note = {Completely supersedes \cite{Wel+Haa:ESOP2002}. Accepted
subject to revisions} churchreport = {yes} abstract = { Although systems with intersection types have many
unique capabilities, there has never been a fully
satisfactory explicitly typed system with
intersection types. We introduce and prove the basic
properties of $\lambda^{\mathrm{B}}$, a typed
$\lambda$-calculus with \emph{branching types} and
types with quantification over \emph{type selection
parameters}. The new system $\lambda^{\mathrm{B}}$ is an
explicitly typed system with the same expressiveness
as a system with intersection types. Typing
derivations in $\lambda^{\mathrm{B}}$ use branching types to
squash together what would be separate parallel
derivations in earlier systems with intersection
types. }inproceedings{Wel+Haa:ESOP2002
itrspaper = {yes} author = {J. B. Wells and Christian Haack} title = {Branching Types} year = {2002} crossref = {ESOP2002} pages = {115--132} documenturl = {http://www.churchproject.org/reports/Wel+Haa:ESOP2002.html} springerurl = {http://link.springer.de/link/service/series/0558/bibs/2305/23050115.htm} churchreport = {yes} note = {Completely superseded by \cite{Wel+Haa:IACta}} abstract = { Although systems with intersection types have many
unique capabilities, there has never been a fully
satisfactory explicitly typed system with
intersection types. We introduce $\lambda^{\mathrm{B}}$ with \emph{branching types} and types which are
quantified over \emph{type selectors} to provide an
explicitly typed system with the same expressiveness
as a system with intersection types. Typing
derivations in $\lambda^{\mathrm{B}}$ effectively
squash together what would be separate parallel
derivations in earlier systems with intersection
types. }unpublished{Amt:CauMov2001
author = {Torben Amtoft} title = {Causal Type System for Ambient Movements} note = {Submitted for publication} year = {2002} churchreport = {yes} abstract = { The Ambient Calculus was developed by Cardelli and Gordon as a
formal framework to study issues of mobility and migrant code. We
present a type system for the calculus, parameterized by security
constraints expressing where a given ambient may reside and where
it may be dissolved. A subject reduction property then guarantees
that a welltyped process never violates these constraints;
additionally it ensures that communicating subprocesses agree on
their ``topic of conversation''. Based on techniques borrowed from
finite automata theory, type checking of type-annotated processes
is decidable. The type system employs a notion of causality in
that processes are assigned ``behaviors''. This significantly
increases the precision of the analysis and compensates for the
lack of ``co-capabilities'' (an otherwise increasingly popular
extension to the ambient calculus); also it allows (in contrast to
other approaches) an ambient to hold multiple topics of
conversation. }inproceedings{Tur+Wel:PPDP2001
author = {Franklyn Turbak and J. B. Wells} title = {Cycle Therapy: A Prescription for Fold and Unfold on
Regular Trees} crossref = {PPDP2001} year = {2001} pages = {137--149} churchreport = {yes} documenturl = {http://www.churchproject.org/reports/Tur+Wel:PPDP2001.html} overheadsurl = {http://cs.wellesley.edu/~fturbak/talks/cycletherapy.pdf} overheadscomment = {to make printable version: pdf2ps
cycletherapy.pdf,ps then "psnup -b1cm -c
-w20.99cm -h29.7cm -W29.7cm -H20.99cm -4
cycletherapy.ps,4up"} abstract = { Cyclic data structures can be tricky to create and
manipulate in declarative programming languages. In
a declarative setting, a natural way to view cyclic
structures is as denoting \emph{regular trees},
those trees which may be infinite but have only a
finite number of distinct subtrees. This paper shows
how to implement the \emph{unfold}
(\emph{anamorphism}) operator in both eager and lazy
languages so as to create cyclic structures when the
result is a regular tree as opposed to merely
infinite lazy structures. The usual \emph{fold}
(\emph{catamorphism}) operator when used with a
strict combining function on any infinite tree
yields an undefined result. As an alternative, this
paper defines and shows how to implement a
\emph{cycfold} operator with more useful semantics
when used with a strict function on cyclic
structures representing regular trees. This paper
also introduces an abstract data type
(\emph{cycamores}) to simplify the use of cyclic
structures representing regular trees in both eager
and lazy languages. Implementations of cycamores in
both SML and Haskell are presented. }phdthesis{Per:PhD2001
author = {Santiago M. Pericas-Geertsen} title = {XML-Fluent Mobile Ambients} institution = {Ph.D. Thesis Comp.\ Sci.\ Dept., Boston Univ.} school = {Boston University} year = {2001} postscript = {http://cspeople.bu.edu/santiago/Papers/Per:PhD2001.ps} churchreport = {yes} abstract = { The Internet is considered to be among the greatest inventions of the
twentieth century. Despite its immense importance we, programming language
designers, have yet to propose a programming language that can exploit a
network that spans the entire globe.
In this thesis, we introduce a language for a programming model based on
XML-speaking mobile agents. The thesis is divided into three parts: the
first part defines the mobility aspect of the language, the second
part introduces a typed calculus for manipulating XML documents, and the
third part proposes a way of integrating these two calculi. The final goal
is to design a typed language in which computation can be carried out by
mobile agents exchanging data in the form of XML documents.
The Ambient Calculus was developed by Cardelli and Gordon as a formal
framework to study issues of mobility and migrant code. In Part I, we consider
an extension to the Ambient Calculus where ambients are equipped with a set
of local channels over which processes can communicate. This extension
incorporates certain features from the Pi-Calculus, such as the ability
to communicate over higher-order channels, but without violating design
principles of the Ambient Calculus (e.g., that two processes are unable to
communicate if located in different administrative domains). For the
extended calculus, we introduce an appropriate operational semantics and a
provably sound type system. We conclude this part by showing an encoding
of the Ambient Calculus into our calculus that is both semantics-
preserving and type-preserving.
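The locality principle mentioned above (no communication across administrative domains) can be pictured with a minimal sketch. This is our illustration, not the thesis calculus: the class and method names are invented, and the model omits mobility, typing, and higher-order communication entirely. Each ambient owns its channels, so the name "c" in one ambient denotes a different channel than "c" in another.

```python
from queue import Queue

# Toy model (illustrative only): an ambient owns a set of local
# channels; only processes located inside it may use them.
class Ambient:
    def __init__(self, name):
        self.name = name
        self.channels = {}

    def channel(self, ch):
        # Each ambient has its own namespace of channels.
        return self.channels.setdefault(ch, Queue())

    def send(self, ch, value):
        self.channel(ch).put(value)

    def recv(self, ch):
        return self.channel(ch).get()

home = Ambient("home")
away = Ambient("away")
home.send("c", "hello")   # output on home's local channel "c"
msg = home.recv("c")      # a co-located process receives it
# away.recv("c") would block: "c" inside away is a distinct, empty channel
```

In the extended calculus, by contrast, an ambient must first move (or be moved) before its processes can interact, which is exactly what this flat sketch cannot express.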
XML is a simple and portable way of representing structured data in the
form of trees. It is an ideal language for machine-to-machine (or
peer-to-peer) exchange of data because (i) it is portable, so there is no
need to worry about things like integer representations (big endian
vs.~little endian) or string representations (null terminated vs.~fixed
length) and (ii) structured data is easily represented in the form of
trees. Interestingly, the ability of XML to represent almost any form of
structured data is, at the same time, often an important weakness: for a
data exchange to be successful, a producer and a consumer must agree on
the structure of the exchanged documents so that the latter can
find/extract what it needs.
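The agreement problem just described can be seen in a few lines. This is an illustrative sketch using Python's standard library, not part of the thesis: portability makes the bytes easy to parse, yet extraction still fails silently when the consumer's assumptions about the tree diverge from the producer's.

```python
import xml.etree.ElementTree as ET

# A producer serializes structured data as an XML tree.
doc = "<book><title>XML-Fluent Mobile Ambients</title><year>2001</year></book>"

# The consumer can only find what it needs if it agrees with the
# producer on the document's structure (here: a <title> child).
root = ET.fromstring(doc)
title = root.findtext("title")    # succeeds: structure matches expectation
author = root.findtext("author")  # None: no agreement on an <author> child
```

The lookup for `author` does not raise an error; it simply returns nothing, which is precisely the kind of mismatch a type system for XML documents is meant to rule out statically.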
This problem can be addressed by the use of a type system. A type
system is needed in order to keep track of the metadata (i.e.~data about
the data) which can be used by a consumer as a guideline for data
extraction. A number of type systems for XML have been proposed in recent
years. Perhaps surprisingly, especially if we look at the evolution of
programming languages, the use of a type system in XML (as well as in
languages for manipulating XML documents) is not mandatory.
Domain-specific languages for XML proposed in the last few years are all,
with the exception of XDuce, dynamically typed. Experience has shown us that
statically-typed languages are better suited for real-world applications than
dynamically-typed ones. Despite requiring more from the programmer's
perspective (at least in the absence of a type-inference engine), the
benefits greatly outweigh the requirements. Specifically, the use of a
statically-typed language ensures that the final product is free of an
important class of errors (so-called type errors). In Part II, we
present the dl-calculus, a statically-typed calculus for manipulating XML
documents, and show how it can be integrated with existing XML technology
by means of two algorithms for importing and exporting documents into the
calculus. }techreport{Amt+Con+Dan+Mal:BRICS2001
author = {Torben Amtoft and Charles Consel and Olivier Danvy
and Karoline Malmkj{\ae}r} title = {The Abstraction and Instantiation of String-Matching
Programs} institution = {DAIMI, Department of Computer Science, University of
Aarhus} year = {2001} churchreport = {yes} type = {Technical Report} number = {BRICS RS-01-12} abstract = { We consider a naive, quadratic string matcher testing whether a pattern occurs in a text; we equip it with a cache mediating its access to the
text; and we abstract the traversal policy of the pattern, the
cache, and the text. We then specialize this abstracted program
with respect to a pattern, using the off-the-shelf partial
evaluator Similix. Instantiating the abstracted program with a
left-to-right traversal policy yields the linear-time behavior of
Knuth, Morris and Pratt's string matcher. Instantiating it with a
right-to-left policy yields the linear-time behavior of Boyer and
Moore's string matcher. }month = {April} note = {An extended version of an essay which appears in {\em The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones}, editors T. Mogensen and D. Schmidt and I. H. Sudborough, Springer LNCS 2566, pp. 332--357, 2002.} }inproceedings{Amt+Kfo+Per:ESOP2001
author = {Torben Amtoft and Assaf J. Kfoury and Santiago
M. Pericas-Geertsen} title = {What are Polymorphically-Typed Ambients?} booktitle = {ESOP 2001, Genova} pages = {206--220} year = {2001} churchreport = {yes} editor = {David Sands} volume = {2028} series = {LNCS} month = {April} publisher = {Springer-Verlag} note = {This downloadable extended summary is more detailed
than the article in the ESOP proceedings. A full
report is \cite{Amt+Kfo+Per:PTA2000}} abstract = { The Ambient Calculus was developed by Cardelli and
Gordon as a formal framework to study issues of mobility and
migrant code. We consider an Ambient Calculus where ambients
transport and exchange programs rather than just inert data. We
propose different senses in which such a calculus can be said to
be \emph{polymorphically typed}, and design accordingly a
\emph{polymorphic} type system for it. Our type system assigns
\emph{types} to embedded programs and what we call
\emph{behaviors} to processes; a denotational semantics of
behaviors is then proposed, here called \emph{trace semantics},
underlying much of the remaining analysis. We state and prove a
Subject Reduction property for our polymorphically-typed calculus.
Based on techniques borrowed from finite automata theory,
type-checking of fully type-annotated processes is shown to be
decidable. Our polymorphically-typed calculus is a conservative
extension of the typed Ambient Calculus originally proposed by
Cardelli and Gordon. }inproceedings{Dim+Wes+Mul+Tur+Wel:ICFP2001
itrspaper = {yes} author = {Allyn Dimock and Ian Westmacott and Robert Muller
and Franklyn Turbak and J. B. Wells} xpdli01title = {Flow-Based Function Customization in the Presence of
Representation Pollution} title = {Functioning without Closure: Type-Safe Customized
Function Representations for Standard ML} crossref = {ICFP2001} year = {2001} comment = {change documenturl when this is added as church paper} olddocumenturl = {http://www.macs.hw.ac.uk/~jbw/papers/#Dim+Wes+Mul+Tur+Wel:ICFP2001} documenturl = {http://www.churchproject.org/reports/Dim+Wes+Mul+Tur+Wel:ICFP2001.html} churchreport = {yes} abstract = { The CIL compiler for core Standard ML compiles
whole SML programs using a novel typed intermediate
language that supports the generation of type-safe
customized data representations. In this paper, we
present empirical data comparing the relative
efficacy of several different customization
strategies for function representations. We develop
a cost model to interpret dynamic counts of
operations required for each strategy. One of our
results is data showing that compiling with a
function representation strategy that makes
customization decisions based on the presence or
absence of free variables of a function gives a 26\%
improvement over a uniform closure representation.
We also present data on the relative effectiveness
of various representation pollution removal
strategies. We find that for the types of
representation pollution detected by our compiler,
pollution removal rarely provides any additional
benefit for the representation strategies and
benchmarks considered. }pages = {14--25} isbn = {1-58113-415-0} }techreport{Dim+Wes+Mul+Tur+Wel+Con:TIC2000TR
itrspaper = {yes} author = {Allyn Dimock and Ian Westmacott and Robert Muller
and Franklyn Turbak and J. B. Wells and Jeffrey
Considine} title = {Program Representation Size in an Intermediate
Language with Intersection and Union Types} month = {March} year = {2001} institution = {Comp.\ Sci.\ Dept., Boston Univ.} longinstitution = {Computer Science Department, Boston University} number = {BUCS-TR-2001-02} note = {This is a version of \cite{Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS}
extended with an appendix describing the CIL typed intermediate
language.} churchreport = {yes} pdfremark = {non-scalable fonts!} abstract = { The CIL compiler for core Standard ML compiles whole
programs using a novel typed intermediate language
(TIL) with intersection and union types and flow
labels on both terms and types. The CIL term
representation duplicates portions of the program
where intersection types are introduced and union
types are eliminated. This duplication makes it
easier to represent type information and to
introduce customized data representations. However,
duplication incurs compile-time space costs that are
potentially much greater than are incurred in TILs
employing type-level abstraction or quantification.
In this paper, we present empirical data on the
compile-time space costs of using CIL as an
intermediate language. The data shows that these
costs can be made tractable by using sufficiently
fine-grained flow analyses together with standard
hash-consing techniques. The data also suggests
that non-duplicating formulations of intersection
(and union) types would not achieve significantly
better space complexity. }article{JLC2000v10n3
author = {} title = {} journal = {J.~Logic Comput.} year = {2000} editor = {Fairouz Kamareddine and Jan Willem Klop} xeditor = {"publisher" field is not used by BibTeX style for
"article"!!!} publisher = {Oxford University Press} volume = {10} number = {3} note = {Special issue on Type Theory and Term Rewriting. Kamareddine and Klop (editors)} }techreport{Amt+Kfo+Per:PTA2000
title = {What Are Polymorphically-Typed Ambients?} author = {Torben Amtoft and Assaf J. Kfoury and Santiago M. Pericas-Geertsen} month = {December} year = {2000} number = {BUCS-TR-2000-021} institution = {Comp.\ Sci.\ Dept., Boston Univ.} longinstitution = {Computer Science Department, Boston
University} abstract = { The Ambient Calculus was developed by Cardelli and
Gordon as a formal framework to study issues of mobility and
migrant code. We consider an Ambient Calculus where ambients
transport and exchange programs rather than just inert data. We
propose different senses in which such a calculus can be said to
be \emph{polymorphically typed}, and design accordingly a
\emph{polymorphic} type system for it. Our type system assigns
\emph{types} to embedded programs and what we call
\emph{behaviors} to processes; a denotational semantics of
behaviors is then proposed, here called \emph{trace semantics},
underlying much of the remaining analysis. We state and prove a
Subject Reduction property for our polymorphically-typed calculus.
Based on techniques borrowed from finite automata theory,
type-checking of fully type-annotated processes is shown to be
decidable. Our polymorphically-typed calculus is a conservative
extension of the typed Ambient Calculus originally proposed by
Cardelli and Gordon. }churchreport = {yes} }inproceedings{Dim+Wes+Mul+Tur+Wel+Con:TIC2000
itrspaper = {yes} author = {Allyn Dimock and Ian Westmacott and Robert Muller
and Franklyn Turbak and J. B. Wells and Jeffrey
Considine} title = {Space Issues in Compiling with Intersection and
Union Types} crossref = {TIC2000} year = {2000} churchreport = {yes} documenturl = {http://www.churchproject.org/reports/Dim+Wes+Mul+Tur+Wel+Con:TIC2000.html} abstract = { The CIL compiler for core Standard ML compiles whole
programs using the CIL typed intermediate language
with flow labels and intersection and union
types. Flow labels embed flow information in the
types and intersection and union types support
precise polyvariant type and flow information,
without the use of type-level abstraction or
quantification.
\par
Compile-time representations of CIL types and terms
are potentially large compared to those for similar
types and terms in systems based on quantified
types. The listing-based nature of intersection and
union types, together with flow label annotations on
types, contribute to the size of CIL types. The CIL
term representation duplicates portions of the
program where intersection types are introduced and
union types are eliminated. This duplication makes
it easier to represent type information and to
introduce multiple representation conventions, but
incurs a compile-time space cost.
\par
This paper presents empirical data on the
compile-time space costs of using CIL. These costs
can be made tractable using standard hash-consing
techniques. Surprisingly, the duplicating nature of
CIL has acceptable compile-time space performance in
practice on the benchmarks and flow analyses that we
have investigated. Increasing the precision of flow
analysis can significantly reduce compile-time space
costs. There is also preliminary evidence that the
flow information encoded in CIL's type system can
effectively guide data customization. }note = {Superseded by~\cite{Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS}} }inproceedings{Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS
itrspaper = {yes} author = {Allyn Dimock and Ian Westmacott and Robert Muller
and Franklyn Turbak and J. B. Wells and Jeffrey
Considine} title = {Program Representation Size in an Intermediate
Language with Intersection and Union Types} crossref = {TIC2000LNCS} year = {2000} pages = {27--52} churchreport = {yes} documenturl = {http://www.churchproject.org/reports/Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS.html} xpdf = {http://www.churchproject.org/reports/electronic/Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS.pdf.gz} xpostscript = {http://www.churchproject.org/reports/electronic/Dim+Wes+Mul+Tur+Wel+Con:TIC2000LNCS.ps.gz} xnote = {Supersedes~\cite{Dim+Wes+Mul+Tur+Wel+Con:TIC2000}} abstract = { The CIL compiler for core Standard ML compiles whole
programs using the CIL typed intermediate language
with flow labels and intersection and union
types. Flow labels embed flow information in the
types and intersection and union types support
precise polyvariant type and flow information,
without the use of type-level abstraction or
quantification.
\par
Compile-time representations of CIL types and terms
are potentially large compared to those for similar
types and terms in systems based on quantified
types. The listing-based nature of intersection and
union types, together with flow label annotations on
types, contribute to the size of CIL types. The CIL
term representation duplicates portions of the
program where intersection types are introduced and
union types are eliminated. This duplication makes
it easier to represent type information and to
introduce multiple representation conventions, but
incurs a compile-time space cost.
\par
This paper presents empirical data on the
compile-time space costs of using CIL. These costs
can be made tractable using standard hash-consing
techniques. Surprisingly, the duplicating nature of
CIL has acceptable compile-time space performance in
practice on the benchmarks and flow analyses that we
have investigated. Increasing the precision of flow
analysis can significantly reduce compile-time space
costs. There is also preliminary evidence that the
flow information encoded in CIL's type system can
effectively guide data customization. }inproceedings{Bawden:POPL2000
author = {Alan Bawden} title = {First-class Macros Have Types} crossref = {POPL2000} year = {2000} pages = {133--141} churchreport = {yes} documenturl = {http://www.linearity.org/bawden/mtt/} postscript = {http://www.linearity.org/bawden/mtt/popl00.ps.gz} abstract = { In modern Scheme, a macro captures the lexical environment
where it is defined. This creates an opportunity for extending
Scheme so that macros are first-class values. The key to
achieving this goal, while preserving the ability to compile
programs into reasonable code, is the addition of a type
system. Many interesting things can be done with first-class
macros, including the construction of a useful module system in
which modules are also first-class. }inproceedings{Law+Mai:ESOP2000
author = {Julia L. Lawall and Harry G. Mairson} title = {Sharing continuations: proof-nets for languages with
explicit control} crossref = {ESOP2000} year = {2000} pages = {245--259} abstract = { We introduce graph reduction technology that
implements functional languages with control, such
as Scheme with {\tt call/cc}, where continuations
can be manipulated explicitly as values, and can be
optimally reduced in the sense of Lévy. The
technology is founded on {\em proof-nets} for
multiplicative-exponential linear logic, extending
the techniques originally proposed by Lamping, where
we adapt the continuation-passing style
transformation to yield a new understanding of
sharable values. Confluence is maintained by
returning multiple answers to a (shared)
continuation. \par {\em Proof-nets} provide a {\em
concurrent} version of linear logic proofs,
eliminating structurally irrelevant
sequentialization, and ignoring asymmetric
distinctions between inputs and outputs---dually,
expressions and continuations. While Lamping's
graphs and their variants encode an embedding of
intuitionistic logic into linear logic, our
construction implicitly contains an embedding of
classical logic into linear logic.\par We propose a
family of translations, produced uniformly by
beginning with a continuation-passing style
semantics for the languages, employing standard
codings into proof-nets using call-by-value,
call-by-name---or hybrids of the two---to locate
proof-net {\em boxes}, and converting the proof-nets
to direct style. The resulting graphs can be reduced
simply (cut elimination for linear logic), have a
consistent semantics that is preserved by reduction
(geometry of interaction, via the so-called {\em
context semantics}), and allow shared, incremental
evaluation of continuations (optimal reduction). }churchreport = {yes} }phdthesis{Stewart:fortcf
author = {C. A. Stewart} title = {On the formulae-as-types correspondence for
classical logic} school = {Oxford University} year = {2000} churchreport = {yes} postscript_a4 = {http://www.linearity.org/cas/thesis/thesisA4.ps} documenturl = {http://www.linearity.org/cas/thesis} abstract = { The Curry-Howard correspondence states the equivalence between the
constructions implicit in intuitionistic logic and those described in
the simply-typed lambda-calculus. It is an insight of great importance
in theoretical computer science, and is fundamental in modern
approaches to constructive type theory. The possibility of a similar
formulae-as-types correspondence for classical logic looks to be a
seminal development in this area, but whilst promising results have
been achieved, there does not appear to be much agreement on what is
at stake in claiming that such a correspondence exists. Consequently,
much work in this area suffers from several weaknesses; in particular
the status of the new rules needed to describe the distinctively
classical inferences is unclear.
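The correspondence at issue can be illustrated with a small sketch of our own (not drawn from the thesis): under Curry-Howard, a well-typed program is a proof of its type read as a formula, with pairs playing conjunction and functions playing implication. Distinctively classical principles such as double-negation elimination have no such program in the intuitionistic fragment, which is exactly the gap the thesis addresses.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Conjunction commutes: a term of type (A and B) -> (B and A)
# is a proof of that implication, reading pairs as conjunctions.
def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return (b, a)

# Hypothetical syllogism: from proofs of A -> B and B -> C we
# construct a proof of A -> C by ordinary function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

Each definition is simultaneously a program and a derivation of its type; no analogous definition exists for, say, ((A -> bottom) -> bottom) -> A, reflecting the extra strength of classical logic.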
We show how to situate the formulae-as-types correspondence within the
proof-theoretic account of logical semantics arising from the work of
Michael Dummett and Dag Prawitz, and demonstrate that the
admissibility of Prawitz's inversion principle, which we argue should
be strengthened, is essential to the good behaviour of intuitionistic
logic.
By regarding the rules which determine the deductive strength of
classical logic as structural rules, as opposed to the logical rules
associated with specific logical connectives, we extend Prawitz's
inversion principle to classical propositional logic, formulated in a
theory of Parigot's lambda-mu calculus with eta expansions.
We then provide a classical analogue of a subsystem of Martin-L\"of's
type theory corresponding to Peano Arithmetic and show its soundness,
appealing to an extension of Tait's reducibility method. Our treatment
is the first treatment of induction in classical arithmetic that truly
falls under the aegis of the formulae-as-types correspondence, as it
is the first that is consistent with the intensional reading of
propositional equality. }article{Kfoury:JLC2000v10n3
itrspaper = {yes} author = {Assaf J. Kfoury} title = {A Linearization of the Lambda-Calculus} crossref = {JLC2000v10n3} year = {2000} pages = {411--436} documenturl = {http://www.churchproject.org/reports/Kfoury:JLC2000v10n3.html} churchreport = {yes} abstract = { We embed the standard $\lambda$-calculus, denoted $\Lambda$,
into two larger $\lambda$-calculi,
denoted $\Lambda^\land$ and $\&\Lambda^\land$.
The standard notion of $\beta$reduction for $\Lambda$
corresponds to two new notions of reduction,
$\beta^\land$ for $\Lambda^\land$ and
$\&\beta^\land$ for $\&\Lambda^\land$. A distinctive
feature of our new calculus $\Lambda^\land$
(resp., $\&\Lambda^\land$) is that, in every function
application, an argument is used at most once (resp.,
exactly once) in the body of the function. We establish
various connections between the three notions of reduction,
$\beta$, $\beta^\land$ and $\&\beta^\land$. As a
consequence, we provide an alternative framework to study
the relationship between $\beta$-weak normalization and
$\beta$-strong normalization, and give a new proof of the
oft-mentioned equivalence between $\beta$-strong
normalization of standard $\lambda$-terms and typability in
a system of ``intersection types''. }article{Bug+Per:IAC2000
author = {Michele Bugliesi and Santiago M. Pericas-Geertsen} title = {Type Inference for Variant Object Types} journal = {Inform.\ \& Comput.} year = {2000} postscript = {http://cspeople.bu.edu/santiago/Papers/Bug+Per:TR2000.ps} xurl = {http://cspeople.bu.edu/santiago/Papers/Bug+Per:TR2000.pdf} churchreport = {yes} abstract = { Existing type systems for object calculi are based
on invariant subtyping. Subtyping invariance is
required for soundness of static typing in the
presence of method overrides, but it is often in the
way of the expressive power of the type
system. Flexibility of static typing can be
recovered in different ways: in first-order systems,
by the adoption of object types with
\textit{variance annotations}, in second-order
systems by resorting to \textit{Self} types. Type
inference is known to be P-complete for first-order
systems of finite and recursive object types, and
NP-complete for a restricted version of Self
types. The complexity of type inference for systems
with variance annotations is yet unknown. This paper
presents a new object type system based on the
notion of \textem{Split} types, a form of object
types where every method is assigned two types,
namely, an update type and a select type. The
subtyping relation that arises for Split types is
\textem{variant} and, as a result, subtyping can be
performed both in \textem{width} and in
\textem{depth}. The new type system generalizes all
the existing first-order type systems for objects,
including systems based on variance
annotations. Interestingly, the additional
expressive power does not affect the complexity of
the type inference problem, as we show by presenting
an $O(n^3)$ inference algorithm. }inproceedings{Bug+Per:FOOL2000
author = {Michele Bugliesi and Santiago M. Pericas-Geertsen} title = {Depth Subtyping and Type Inference for Object
Calculi} crossref = {FOOL2000} year = {2000} xurl = {http://cspeople.bu.edu/santiago/Papers/Bug+Per:EAFOOL2000.pdf} abstract = { We present a new type system based on the notion of
Split types. In this system, every method is
assigned two types, namely, an update type and a
select type. We show that the system of Split types
is strictly more powerful than the system of
recursive types and that type inference remains
decidable and feasible. We include a number of
interesting examples not typable with recursive
types that are typable with Split types. In
particular, we demonstrate that the subtyping
relation needed to encode the $\lambda$-calculus into
Abadi and Cardelli's $\varsigma$-calculus
holds. We also present a polynomial-time algorithm
that infers Split types and show that it is sound
and complete with respect to the type system. We
conclude our presentation by relating the typing
power of Split types to the typing power of other
systems, such as, the system of recursive types with
variance annotations and the system of Self types. }churchreport = {yes} }inproceedings{Amt+Tur:FaithfulESOP2000
itrspaper = {yes} title = {Faithful Translations between Polyvariant Flows and Polymorphic Types} author = {Torben Amtoft and Franklyn Turbak} crossref = {ESOP2000} year = {2000} pages = {26--40} documenturl = {http://www.churchproject.org/reports/Amt+Tur:FaithfulESOP2000.html} abstract = { Recent work has shown equivalences between
various type systems and flow logics.
Ideally, the translations upon which such
equivalences are based should be {\em faithful}
in the sense that information is not lost in
roundtrip translations from flows to types and back
or from types to flows and back. Building on the work of
Nielson & Nielson and of Palsberg & Pavlopoulou,
we present the first faithful translations between
a class of finitary polyvariant flow analyses
and a type system supporting polymorphism
in the form of intersection and union types.
Additionally,
our flow/type correspondence solves several
open problems posed by Palsberg & Pavlopoulou:
(1) it expresses call-string-based polyvariance
(such as k-CFA) as well as argument-based polyvariance;
(2) it enjoys a subject reduction property
for flows as well as for types; and
(3) it supports a flow-oriented perspective
rather than a type-oriented one. }churchreport = {yes} }techreport{Amt+Tur:FaithfulTR2000
itrspaper = {yes} title = {Faithful Translations between Polyvariant Flows and Polymorphic Types} author = {Torben Amtoft and Franklyn Turbak} year = {2000} institution = {Comp.\ Sci.\ Dept., Boston Univ.} longinstitution = {Computer Science Department, Boston University} number = {BUCS-TR-2000-01} abstract = { Recent work has shown equivalences between
various type systems and flow logics.
Ideally, the translations upon which such
equivalences are based should be {\em faithful}
in the sense that information is not lost in
roundtrip translations from flows to types and back
or from types to flows and back. Building on the work of
Nielson & Nielson and of Palsberg & Pavlopoulou,
we present the first faithful translations between
a class of finitary polyvariant flow analyses
and a type system supporting polymorphism
in the form of intersection and union types.
Additionally,
our flow/type correspondence solves several
open problems posed by Palsberg & Pavlopoulou:
(1) it expresses call-string-based polyvariance
(such as k-CFA) as well as argument-based polyvariance;
(2) it enjoys a subject reduction property
for flows as well as for types; and
(3) it supports a flow-oriented perspective
rather than a type-oriented one. }note = {Expanded full report of \cite{Amt+Tur:FaithfulESOP2000}. To appear Fall 2000, but a \emph{DRAFT} version is available.} churchreport = {yes} }inproceedings{Mac+Tur:LinktimeESOP2000
title = {A Calculus for Link-time Compilation} author = {Elena Machkasova and Franklyn A. Turbak} crossref = {ESOP2000} pages = {260--274} churchreport = {yes} year = {2000} }techreport{Mac+Tur:LinktimeTR2000
title = {A Calculus for Link-time Compilation} author = {Elena Machkasova and Franklyn Turbak} year = {2000} institution = {Comp.\ Sci.\ Dept., Boston Univ.} longinstitution = {Computer Science Department, Boston University} note = {Expanded full report of
\cite{Mac+Tur:LinktimeESOP2000}} churchreport = {yes} }incollection{Kfoury:Rasiowamemorial
itrspaper = {yes} author = {Assaf J. Kfoury} title = {Beta-Reduction as Unification} year = {1999} booktitle = {Logic, Algebra, and Computer Science (H. Rasiowa
Memorial Conference, December 1996), Banach Center
Publication, Volume 46} pages = {137--158} publisher = {Springer-Verlag} editor = {D. Niwinski} documenturl = {http://www.churchproject.org/reports/Kfoury:Rasiowamemorial.html} note = {Supersedes~\cite{Kfoury:TTTR1996a} but omits a few proofs included in the
latter} abstract = { We define a new unification problem, which we call
{\em $\beta$-unification} and which can be used to characterize
the $\beta$-strong normalization of terms in the
$\lambda$-calculus. We prove the undecidability
of $\beta$-unification, its connection with the system
of intersection types, and several of its basic
properties. }churchreport = {yes} }inproceedings{Kfo+Per:LICS1999
title = {Type Inference for Recursive Definitions} author = {Assaf J. Kfoury and Santiago M. Pericas-Geertsen} pages = {119--128} year = {1999} crossref = {LICS1999} xurl = {http://cspeople.bu.edu/santiago/Papers/Kfo+Per:EALICS1999.pdf} abstract = { We consider type systems that combine universal types,
recursive types, and object types. We study type inference
in these systems under a rank restriction, following
Leivant's notion of rank.
To motivate our work, we present several examples showing
how our systems can be used to type programs encountered
in practice.
We show that type inference in the rank-k system is decidable
for k < }churchreport = {yes} }techreport{Kfo+Per:TIRD1999
title = {Type Inference for Recursive Definitions} author = {Assaf J. Kfoury and Santiago M. Pericas-Geertsen} postscript = {http://cspeople.bu.edu/santiago/Papers/Kfo+Per:TR1999.ps} xurl = {http://cspeople.bu.edu/santiago/Papers/Kfo+Per:TR1999.pdf} year = {1999} institution = {Comp.\ Sci.\ Dept., Boston Univ.} longinstitution = {Computer Science Department, Boston University} note = {Expanded full report of \cite{Kfo+Per:LICS1999}} abstract = { We consider type systems that combine universal types,
recursive types, and object types. We study type inference
in these systems under a rank restriction, following
Leivant's notion of rank.
To motivate our work, we present several examples showing
how our systems can be used to type programs encountered
in practice.
We show that type inference in the rank-k system is decidable
for k < }churchreport = {yes} }techreport{Amt:BRICS1999
author = {Torben Amtoft} title = {Partial Evaluation for Constraint-Based Program
Analyses} institution = {BRICS, Dept. of Computer Science, University of
Aarhus} year = {1999} churchreport = {yes} number = {BRICS RS-99-45} note = {} abstract = { We report on a case study in the application of partial evaluation,
initiated by the desire to speed up a constraint-based algorithm
for control-flow analysis. We designed and implemented a dedicated
partial evaluator, able to specialize the analysis wrt. a given
constraint graph and thus remove the interpretive overhead, and
measured it with Feeley's Scheme benchmarks. Even though the gain
turned out to be rather limited, our investigation yielded
valuable feedback in that it provided a better understanding of
the analysis, leading us to (re)invent an incremental version. We
believe this phenomenon to be a quite frequent spin-off from using
partial evaluation, since the removal of interpretive overhead
makes the flow of control more explicit and hence pinpoints
sources of inefficiency. Finally, we observed that partial
evaluation in our case yields such regular, low-level specialized
programs that it begs for run-time code generation. }inproceedings{Kfo+Mai+Tur+Wel:ICFP1999
itrspaper = {yes} author = {Assaf J. Kfoury and Harry G. Mairson and Franklyn
A. Turbak and J. B. Wells} title = {Relating Typability and Expressibility in
Finite-Rank Intersection Type Systems} crossref = {ICFP1999} pages = {90--101} year = {1999} abstract = { We investigate finite-rank intersection type
systems, analyzing the complexity of their type
inference problems and their relation to the problem
of recognizing semantically equivalent
terms. Intersection types allow something of type
$\tau_1\land\tau_2$ to be used in some places at
type $\tau_1$ and in other places at type
$\tau_2$. A \emph{finite-rank} intersection type
system bounds how deeply the $\land$ can appear in
type expressions. Such type systems enjoy strong
normalization, subject reduction, and computable
type inference, and they support a pragmatics for
implementing parametric polymorphism. As a
consequence, they provide a conceptually simple and
tractable alternative to the impredicative
polymorphism of System F and its extensions, while
typing many more programs than the Hindley-Milner
type system found in ML and Haskell. \par While type
inference is computable at every rank, we show that
its complexity grows exponentially as rank
increases. Let $\mathbf{K}(0,n) }documenturl = {http://www.churchproject.org/reports/Kfo+Mai+Tur+Wel:ICFP1999.html} churchreport = {yes} }inproceedings{Kfo+Wel:POPL1999
itrspaper = {yes} author = {Assaf J. Kfoury and J. B. Wells} title = {Principality and Decidable Type Inference for
Finite-Rank Intersection Types} crossref = {POPL1999} year = {1999} pages = {161--174} documenturl = {http://www.churchproject.org/reports/Kfo+Wel:POPL1999.html} acmurla = {http://portal.acm.org/citation.cfm?id} acmurlb = {http://doi.acm.org/10.1145/292540.292556} churchreport = {yes} note = {Superseded by \cite{Kfo+Wel:TCSB2004v311n13}} abstract = { Principality of typings is the property that for
each typable term, there is a typing from which all
other typings are obtained via some set of
operations. Type inference is the problem of finding
a typing for a given term, if possible. We define an
intersection type system which has principal typings
and types exactly the strongly normalizable
$\lambda$-terms. More interestingly, every
finite-rank restriction of this system (using
Leivant's first notion of rank) has principal
typings and also has decidable type inference. This
is in contrast to System~F where the finite rank
restriction for every finite rank at 3 and above has
neither principal typings nor decidable type
inference. This is also in contrast to earlier
presentations of intersection types where the status
(decidable or undecidable) of these properties is
unknown for the finite-rank restrictions at 3 and
above. Furthermore, the notion of principal typings
for our system involves only one operation,
substitution, rather than several operations (not
all substitution-based) as in earlier presentations
of principality for intersection types (without rank
restrictions). In our system the earlier notion of
\emph{expansion} is integrated in the form of
\emph{expansion variables}, which are subject to
substitution as are ordinary variables. A
unificationbased type inference algorithm is
presented using a new form of unification,
$\beta$-unification. }article{Wells:APAL1999v98nonote
author = {J. B. Wells} title = {Typability and Type Checking in System F Are Equivalent and Undecidable} journal = {Ann.\ Pure Appl.\ Logic} volume = {98} number = {1--3} year = {1999} pages = {111--156} churchreport = {yes} documenturl = {http://www.churchproject.org/reports/Wells:APAL1999v98nonote.html} xnote = {Supersedes \cite{Wells:LICS1994}} abstract = { Girard and Reynolds independently invented System~F
(a.k.a.\ the second-order polymorphically typed
lambda calculus) to handle problems in logic and
computer programming language design, respectively.
Viewing F in the \emph{Curry} style, which
associates types with untyped lambda terms, raises
the questions of \emph{typability} and \emph{type
checking}. Typability asks for a term whether there
exists some type it can be given. Type checking
asks, for a particular term and type, whether the
term can be given that type. The decidability of
these problems has been settled for restrictions and
extensions of F and related systems and complexity
lower bounds have been determined for typability in
F, but this report is the first to resolve whether
these problems are decidable for System~F.
\par
This report proves that type checking in F is
undecidable, by a reduction from
\emph{semi-unification}, and that typability in F is
undecidable, by a reduction from type checking.
Because there is an easy reduction from typability
to type checking, the two problems are equivalent.
The reduction from type checking to typability uses
a novel method of constructing lambda terms that
simulate arbitrarily chosen type environments. All
of the results also hold for the $\lambda
I$-calculus. }inproceedings{Ong+Ste:curhff
author = {C.-H. L. Ong and C. A. Stewart} title = {A Curry-Howard foundation for functional
computation with control} pages = {215--217} year = {1997} crossref = {POPL1997} churchreport = {yes} xpostscript = {ftp://achilles.linearity.org/pub/cas/popl97.ps} documenturl = {http://www.linearity.org/cas/abstracts/popl97.html} abstract = { We introduce the type theory lambdamuv, a call-by-value variant of
Parigot's lambdamu calculus, as a Curry-Howard representation theory
of classical propositional proofs. The associated rewrite system is
Church-Rosser and strongly normalising, and definitional equality of
the type theory is consistent, compatible with cut, congruent and
decidable. The attendant call-by-value programming language muPCFv is
obtained from lambdamuv by augmenting it by basic arithmetic,
conditionals and fixpoints. We study the behavioural properties of
muPCFv and show that, though simple, it is a very general language
for functional computation with control: it can express all the main
control constructs such as exceptions and first-class continuations.
Proof-theoretically, the dual lambdamuv constructs of naming and
mu-abstraction witness the introduction and elimination rules of
absurdity respectively. Computationally they give succinct expression
to a kind of generic (forward) `jump' operator, which may be regarded
as a unifying control construct for functional computation. Our goal
is that lambdamuv and muPCFv respectively should be to functional
computation with first-class access to the flow of control what
lambda-calculus and PCF respectively are to pure functional
programming: lambdamuv gives the logical basis via the Curry-Howard
correspondence, and muPCFv is a prototypical language albeit in a
purified form. }inproceedings{Banerjee:ICFP1997
itrspaper = {yes} author = {Anindya Banerjee} title = {A Modular, Polyvariant, and Type-Based Closure
Analysis} crossref = {ICFP1997} year = {1997} pages = {1--10} abstract = { We observe that the \textit{principal typing}
property of a type system is the enabling
technology for \textit{modularity} and
\textit{separate compilation}
\cite{Jim:MITLCS1995532}. We use this technology
to formulate a modular and polyvariant closure
analysis, based on the rank 2 intersection types
annotated with control-flow information.\par
Modularity manifests itself in a syntax-directed,
annotated-type inference algorithm that can analyse
\textit{program fragments} containing free
variables: a \textit{principal typing} property is
used to formalise it. Polyvariance manifests itself
in the separation of different behaviours of the
same function at its different uses: this is
formalised via the rank 2 intersection types. As the
rank 2 intersection type discipline types at least
all (core) ML programs, our analysis can be used in
the separate compilation of such programs. }documenturl = {http://www.churchproject.org/reports/Banerjee:ICFP1997.html} churchreport = {yes} }inproceedings{Wel+Dim+Mul+Tur:TAPSOFT1997
itrspaper = {yes} author = {J. B. Wells and Allyn Dimock and Robert Muller and Franklyn Turbak} title = {A Typed Intermediate Language for Flow-Directed Compilation} crossref = {TAPSOFT1997} year = {1997} pages = {757--771} abstract = { We present a typed intermediate language
$\lambda^{\mathrm{CIL}}$ for optimizing compilers
for function-oriented and polymorphically typed
programming languages (e.g., ML). The language
$\lambda^{\mathrm{CIL}}$ is a typed lambda calculus
with product, sum, intersection, and union types as
well as function types annotated with flow labels. A
novel formulation of intersection and union types
supports encoding flow information in the typed
program representation. This flow information can
direct optimization. }note = {Superseded by \cite{Wel+Dim+Mul+Tur:JFP2002v12n3}} comment = {The disabling of the PDF links is disabled, as the
fonts do not look that bad when printed and PDF
provides the best way to ensure that the files are
printable under Windows. Please do not disable this
disabling without discussing it with me (Peter). The
PDF link is disabled because the PDF file has
absolutely horrible and unreadable fonts. Please do
not enable it until the fonts are fixed.} documenturl = {http://www.churchproject.org/reports/Wel+Dim+Mul+Tur:TAPSOFT1997.html} churchreport = {yes} }inproceedings{Dim+Mul+Tur+Wel:ICFP1997
itrspaper = {yes} author = {Allyn Dimock and Robert Muller and Franklyn Turbak
and J. B. Wells} title = {Strongly Typed Flow-Directed Representation
Transformations} crossref = {ICFP1997} year = {1997} pages = {11--24} abstract = { We present a new framework for transforming data
representations in a strongly typed intermediate
language. Our method allows both value producers
(sources) and value consumers (sinks) to support
multiple representations, automatically inserting
any required code. Specialized representations can
be easily chosen for particular source/sink
pairs. The framework is based on these techniques:
\begin{enumerate} \item \emph{Flow annotated types}
encode the ``flows-from'' (source) and ``flows-to''
(sink) information of a flow graph. \item
\emph{Intersection and union types} support (a)
encoding precise flow information, (b) separating
flow information so that transformations can be well
typed, (c) automatically reorganizing flow paths to
enable multiple representations. \end{enumerate} As
an instance of our framework, we provide a function
representation transformation that encompasses both
closure conversion and inlining. Our framework is
adaptable to data other than functions. }documenturl = {http://www.churchproject.org/reports/Dim+Mul+Tur+Wel:ICFP1997.html} churchreport = {yes} }inproceedings{Tur+Dim+Mul+Wel:CPPFT1997
itrspaper = {yes} author = {Franklyn Turbak and Allyn Dimock and Robert Muller
and J. B. Wells} title = {Compiling with Polymorphic and Polyvariant Flow
Types} crossref = {TIC1997} year = {1997} abstract = { Optimizing compilers for function-oriented and
object-oriented languages exploit type and flow
information for efficient implementation. Although
type and flow information (both control and data
flow) are inseparably intertwined, compilers usually
compute and represent them separately. Partially,
this has been a result of the usual polymorphic type
systems using $\forall$ and $\exists$ quantifiers,
which are difficult to use in combination with flow
annotations. \par In the Church Project, we are
experimenting with intermediate languages that
integrate type and flow information into a single
\emph{flow type} framework. This integration
facilitates the preservation of flow and type
information through program transformations. In this
paper we describe $\lambda^{\mathrm{CIL}}$, an
intermediate language supporting polymorphic types
and polyvariant flow information and describe its
application in program optimization. We are
experimenting with this intermediate language in a
flow and typedirected compiler for a functional
language. The types of $\lambda^{\mathrm{CIL}}$
encode flow information (1) via labels that
approximate sources and sinks of values and (2) via
intersection and union types, finitary versions of
universal and existential types that support type
polymorphism and (in combination with the labels)
polyvariant flow analysis. Accurate flow types
expose opportunities for a wide range of optimizing
program transformations. }documenturl = {http://www.churchproject.org/reports/Tur+Dim+Mul+Wel:CPPFT1997.html} postscript = {http://www.cs.bc.edu/~muller/TIC97/Turbak.ps.gz} churchreport = {yes} note = {} xnote = {note field must be blank to prevent getting
crossref's note!} }unpublished{Kfoury:TTTR1996a
itrspaper = {yes} author = {Assaf J. Kfoury} title = {Beta-Reduction as Unification} note = {A refereed, extensively edited version
is~\cite{Kfoury:Rasiowamemorial}. This preliminary version was
presented at the Helena Rasiowa Memorial Conference} year = {1996} month = {July} churchreport = {yes} abstract = { We define a new unification problem, which we call
\emph{$\beta$-unification} and which can be used to characterize
the $\beta$-strong normalization of terms in the
$\lambda$-calculus. We prove the undecidability
of $\beta$-unification, its connection with the system
of intersection types, and several of its basic
properties. }unpublished{Kfoury:TTTR1996
author = {Assaf J. Kfoury} title = {A Linearization of the Lambda-Calculus} note = {A refereed version
is~\cite{Kfoury:JLC2000v10n3}. This version was
presented at the Glasgow Int'l School on Type Theory
\& Term Rewriting} year = {1996} month = {September} churchreport = {yes} abstract = { We embed the standard $\lambda$-calculus, denoted $\Lambda$,
into two larger $\lambda$-calculi,
denoted $\Lambda^\land$ and $\&\Lambda^\land$.
The standard notion of $\beta$reduction for $\Lambda$
corresponds to two new notions of reduction,
$\beta^\land$ for $\Lambda^\land$ and
$\&\beta^\land$ for $\&\Lambda^\land$. A distinctive
feature of our new calculus $\Lambda^\land$
(resp., $\&\Lambda^\land$) is that, in every function
application, an argument is used at most once (resp.,
exactly once) in the body of the function. We establish
various connections between the three notions of reduction,
$\beta$, $\beta^\land$ and $\&\beta^\land$. As a
consequence, we provide an alternative framework to study
the relationship between $\beta$-weak normalization and
$\beta$-strong normalization, and give a new proof of the
oft-mentioned equivalence between $\beta$-strong
normalization of standard $\lambda$-terms and typability in
a system of ``intersection types''. }comment = {The funny order of things in the note is to work
around the way BibTeX would really screw up the
meaning by putting ", 1996" after the note, implying
falsely that the year has something to do with the
note.} }techreport{Wells:TUFE1996
author = {J. B. Wells} title = {Typability is Undecidable for F+Eta} institution = {Comp.\ Sci.\ Dept., Boston Univ.} year = {1996} number = {96-022} type = {Tech.\ Rep.} month = {March} documenturl = {http://www.churchproject.org/reports/Wells:TUFE1996.html} churchreport = {yes} abstract = { System F is the well-known polymorphically-typed
$\lambda$-calculus with
universal quantifiers (``$\forall$'').
F+$\eta$ is System F extended with the eta rule,
which says that if term $M$ can be given type $\tau$
and $M$ $\eta$reduces to $N$, then $N$ can also be
given the type $\tau$.
Adding the eta rule to System F is equivalent to
adding the subsumption rule using the subtyping
(``containment'') relation that Mitchell defined and
axiomatized.
The subsumption rule says that if $M$ can be given
type $\tau$ and $\tau$ is a subtype of type
$\sigma$, then $M$ can be given type $\sigma$.
Mitchell's subtyping relation involves no extensions
to the syntax of types, i.e., no bounded
polymorphism and no supertype of all types, and is
thus unrelated to the system $\mathrm{F}_\leq$
(``Fsub'').
\par
Typability for F+$\eta$ is the problem of
determining for any term $M$ whether there is any
type $\tau$ that can be given to it using the type
inference rules of F+$\eta$.
Typability has been proven undecidable for System
F (without the eta rule), but the decidability of
typability has been an open problem for F+$\eta$.
Mitchell's subtyping relation has recently been
proven undecidable, implying the undecidability of
``type checking'' for F+$\eta$.
This paper reduces the problem of subtyping to the
problem of typability for F+$\eta$, thus proving the
undecidability of typability.
The proof methods are similar in outline to those
used to prove the undecidability of typability for
System F, but the fine details differ greatly. }techreport{Jim:RTTS1995
itrspaper = {yes} author = {Trevor Jim} title = {Rank 2 Type Systems and Recursive Definitions} institution = {Massachusetts Institute of Technology} year = {1995} number = {MIT/LCS/TM-531} month = {November} abstract = { We demonstrate the pragmatic value of the
\emph{principal typing} property, a property
distinct from ML's principal type property, by
studying a type system with principal typings. The
type system is based on rank 2 intersection types
and is closely related to ML }techreport{Jim:MITLCS1995532
itrspaper = {yes} author = {Trevor Jim} title = {What Are Principal Typings and What Are They Good
For?} institution = {MIT} year = {1995} type = {Tech.\ memo.} number = {MIT/LCS/TM-532} abstract = { We demonstrate the pragmatic value of the
\emph{principal typing} property, a property
distinct from ML's principal type property, by
studying a type system with principal typings. The
type system is based on rank 2 intersection types
and is closely related to ML }inproceedings{Kfo+Wel:LICS1995
itrspaper = {yes} title = {New Notions of Reduction and Non-Semantic Proofs of
$\beta$-Strong Normalization in Typed
$\lambda$-Calculi} author = {A. J. Kfoury and J. B. Wells} pages = {311--321} crossref = {LICS1995} documenturl = {http://www.churchproject.org/reports/Kfo+Wel:LICS1995.html} source = {ftp://theory.lcs.mit.edu/pub/meyer/lics.bib} churchreport = {yes} year = {1995} abstract = { Two notions of reduction for terms of the
$\lambda$-calculus are introduced and the question
of whether a $\lambda$-term is $\beta$-strongly
normalizing is reduced to the question of whether a
$\lambda$-term is merely normalizing under one of
the notions of reduction. This gives a method to
prove strong $\beta$-normalization for typed
$\lambda$-calculi. Instead of the usual semantic
proof style based on Tait's realizability or
Girard's ``candidats de réductibilité'',
termination can be proved using a decreasing metric
over a wellfounded ordering. This proof method is
applied to the simply-typed $\lambda$-calculus and
the system of intersection types, giving the first
non-semantic proof for a polymorphic extension of
the $\lambda$-calculus. }techreport{Wells:UMSR1995
author = {J. B. Wells} title = {The Undecidability of Mitchell's Subtyping Relation} institution = {Comp.\ Sci.\ Dept., Boston Univ.} year = {1995} month = {December} documenturl = {http://www.churchproject.org/reports/Wells:UMSR1995.html} postscriptb = {http://www.cs.bu.edu/techreports/ps/1995019subtypingundecidable.ps.gz} remark = {Doesn't have tech report number in the actual paper.} type = {Tech.\ Rep.} number = {95-019} churchreport = {yes} abstract = { Mitchell defined and axiomatized a subtyping
relationship (also known as \emph{containment},
\emph{coercibility}, or \emph{subsumption}) over the
types of System F (with ``$\to$'' and
``$\forall$''). This subtyping relationship is quite
simple and does not involve bounded
quantification. Tiuryn and Urzyczyn quite recently
proved this subtyping relationship to be
undecidable. This paper supplies a new
undecidability proof for this subtyping
relationship. First, a new syntax-directed
axiomatization of the subtyping relationship is
defined. Then, this axiomatization is used to prove
a reduction from the undecidable problem of
\emph{semi-unification} to subtyping. The
undecidability of subtyping implies the
undecidability of \emph{type checking} for System F
extended with Mitchell's subtyping, also known as
``F plus eta''. } 
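Many of the reports above study intersection types, where a term of type $\tau_1\land\tau_2$ may be used in some places at type $\tau_1$ and in other places at type $\tau_2$. As a rough, hypothetical illustration (not code from any of the papers above; all names here are invented), TypeScript's intersection operator `&` on function types gives a concrete feel for this idea:

```typescript
// Hypothetical sketch: a function value given an intersection of two
// call signatures can be applied at either one.
type IntToInt = (x: number) => number;
type BoolToBool = (x: boolean) => boolean;

// `f` must satisfy both signatures at once, so it may be applied to a
// number in one position and to a boolean in another.
function useTwice(f: IntToInt & BoolToBool): [number, boolean] {
  return [f(1), f(true)];
}

// The polymorphic identity inhabits both components of the intersection.
const id = <A>(x: A): A => x;

console.log(useTwice(id)); // [ 1, true ]
```

The analogy is loose: the rank-restricted systems in the papers above infer intersection types for untyped $\lambda$-terms, whereas TypeScript requires the annotation up front and treats the intersected signatures like overloads.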
