111 Cummington St., Boston, MA 02215 | cs-www.bu.edu
Last updated: July 20, 2010

Reports

Below is a selection of papers that report work done as part of the Church Project.

Members: To add a new report, consult the maintenance page.


Copyright notice: The documents contained in these pages are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

techreport{BUCS-TR-2010-011
author = {Andrei Lapets and Assaf Kfoury}
title = {A User-friendly Interface for a Lightweight Verification System}
institution = {CS Dept., Boston University}
number = {BUCS-TR-2010-011}
month = {May}
year = {2010}
abstract = {
User-friendly interfaces can play an important role in bringing the benefits of a machine-readable representation of formal arguments to a wider audience. The ``aartifact'' system is an easy-to-use lightweight verifier for formal arguments that involve logical and algebraic manipulations of common mathematical concepts. The system provides validation capabilities by utilizing a database of propositions governing common mathematical concepts. The ``aartifact'' system's multi-faceted interactive user interface combines several approaches to user-friendly interface design: (1) a familiar and natural syntax based on existing conventions in mathematical practice, (2) a real-time keyword-based lookup mechanism for interactive, context-sensitive discovery of the syntactic idioms and semantic concepts found in the system's database of propositions, and (3) immediate validation feedback in the form of reformatted raw input. The system's natural syntax and database of propositions allow it to meet a user's expectations in the formal reasoning scenarios for which it is intended. The real-time keyword-based lookup mechanism and validation feedback allow the system to teach the user about its capabilities and limitations in an immediate, interactive, and context-aware manner.
}
}
techreport{BUCS-TR-2010-010
author = {Andrei Lapets}
title = {User-friendly Support for Common Concepts in a Lightweight Verifier}
institution = {CS Dept., Boston University}
number = {BUCS-TR-2010-010}
month = {May}
year = {2010}
abstract = {
Machine verification of formal arguments can only increase our confidence in the correctness of those arguments, but the costs of employing machine verification still outweigh the benefits for some common kinds of formal reasoning activities. As a result, usability is becoming increasingly important in the design of formal verification tools. We describe the ``aartifact'' lightweight verification system, designed for processing formal arguments involving basic, ubiquitous mathematical concepts. The system is a prototype for investigating potential techniques for improving the usability of formal verification systems. It leverages techniques drawn both from existing work and from our own efforts. In addition to a parser for a familiar concrete syntax and a mechanism for automated syntax lookup, the system integrates (1) a basic logical inference algorithm, (2) a database of propositions governing common mathematical concepts, and (3) a data structure that computes congruence closures of expressions involving relations found in this database. Together, these components allow the system to better accommodate the expectations of users interested in verifying formal arguments involving algebraic and logical manipulations of numbers, sets, vectors, and related operators and predicates. We demonstrate the reasonable performance of this system on typical formal arguments and briefly discuss how the system's design contributed to its usability in two case studies.
}
}
techreport{BUCS-TR-2009-030
author = {Andrei Lapets and Assaf Kfoury}
title = {Verification with Natural Contexts: Soundness of Safe Compositional Network Sketches}
institution = {CS Dept., Boston University}
number = {BUCS-TR-2009-030}
month = {October}
year = {2009}
abstract = {
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work \cite{BUCS-TR-2009-015}, we attempt to address this broad problem by proposing several specific design criteria organized around the notion of a \emph{natural context}: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. We evaluate our proposed design criteria by using, in the context of novel research, a formal reasoning system that is designed according to these criteria. In particular, we consider how the design and capabilities of the formal reasoning system that we employ influence, aid, or hinder our ability to accomplish a formal reasoning task -- the assembly of a machine-verifiable proof pertaining to the NetSketch formalism. NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. It provides capabilities for compositional analysis based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks and invariants that need to be enforced thereupon. In a companion paper \cite{BUCS-TR-2009-028}, we overview NetSketch, highlight its salient features, and illustrate how it could be used in actual applications. In this paper, we define, using a machine-readable syntax, major parts of the formal system underlying the operation of NetSketch, along with its semantics and a corresponding notion of validity. We then provide a proof of soundness for the formalism that can be partially verified using a lightweight formal reasoning system that simulates natural contexts. A traditional version of these definitions and arguments can be found in a related paper \cite{BUCS-TR-2009-029}.
}
}
unpublished{Car+Wel:AOE-2008
itrspaper = {yes}
churchreport = {yes}
author = {Sébastien Carlier and J. B. Wells}
title = {The Algebra of Expansion}
note = {Draft corresponding roughly to an ITRS 2008 workshop talk}
month = {March}
date = {2008-03-25}
year = {2008}
abstract = {
Expansion is an operation on typings (pairs of type environments and result types) in type systems for the $\lambda$-calculus. Expansion was originally introduced for calculating possible typings of a term in systems with intersection types. Unfortunately, definitions of expansion (even the most modern ones) have been difficult for outsiders to absorb. This paper aims to clarify expansion and make it more accessible to non-specialists by isolating the pure notion of expansion on its own, independent of type systems and types. We show how expansion can be seen as a simple algebra on terms with variables, substitutions, composition, and miscellaneous constructors such that the algebra satisfies 8 simple axioms and axiom schemas: the 3 standard axioms of a monoid, 4 standard axioms or axiom schemas of substitutions (including one that corresponds to the usual ``substitution lemma''), and 1 axiom schema for expansion itself. This presentation should help make the theory and techniques involving intersection types and type inference with flexible precision more accessible to a wider community.
}
}
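For readers unfamiliar with the algebraic background, the three monoid axioms the abstract refers to are the standard ones; in conventional notation (ours, not the paper's), writing $\cdot$ for composition and $\mathsf{id}$ for the identity:

    \begin{align*}
      (e_1 \cdot e_2) \cdot e_3 &= e_1 \cdot (e_2 \cdot e_3) && \text{(associativity)} \\
      \mathsf{id} \cdot e &= e && \text{(left identity)} \\
      e \cdot \mathsf{id} &= e && \text{(right identity)}
    \end{align*}

The substitution axioms and the expansion axiom schema are specific to the paper and are not reproduced here.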
techreport{Don+Hal+Kfo:BUCS-TR-2005-X
author = {Kevin Donnelly and Joseph J. Hallett and Assaf J. Kfoury}
title = {Formal Semantics For Weak References}
institution = {Department of Computer Science, Boston University}
year = {2005}
month = {July}
churchreport = {yes}
type = {Technical Report}
number = {Don+Hal+Kfo:BUCS-TR-2005-X}
itrspaper = {no}
abstract = {
Weak references are references that do not prevent the object they point to from being garbage collected. Many realistic languages, including Java, SML/NJ, and Haskell to name a few, support weak references. However, there is no generally accepted formal semantics for weak references. Without such a formal semantics it becomes impossible to formally prove properties of such a language and the programs written in it. \par We give a formal semantics for a calculus called lambda-weak that includes weak references and is derived from Morrisett, Felleisen, and Harper's lambda-gc. The semantics is used to examine several issues involving weak references. We use the framework to formalize the semantics for the key/value weak references found in Haskell. Furthermore, we consider a type system for the language and show how to extend the earlier result that type inference can be used to collect reachable garbage. In addition we show how to allow collection of weakly referenced garbage without incurring the computational overhead often associated with collecting a weak reference which may be later used. Lastly, we address the non-determinism of the semantics by providing both an effectively decidable syntactic restriction and a more general semantic criterion, which guarantee a unique result of evaluation.
}
}
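For a concrete point of reference, the key/value weak references mentioned in the abstract are exposed in Haskell through System.Mem.Weak. Below is a minimal sketch of their use (our illustration, not code from the report); mkWeak, deRefWeak, and performGC are the actual library functions, while the particular key and value are made up.

    import System.Mem (performGC)
    import System.Mem.Weak (deRefWeak, mkWeak)

    main :: IO ()
    main = do
      let key = "config" :: String
          val = [1 .. 5] :: [Int]
      -- The weak pointer keeps val alive only while key is reachable;
      -- the Nothing argument means no finalizer is attached.
      w <- mkWeak key val Nothing
      r <- deRefWeak w
      print r      -- Just [1,2,3,4,5] while key is still reachable
      performGC    -- once key is unreachable, deRefWeak may yield Nothing

The point these reports make formally is that the observable result of such a program can depend on when the collector runs; this is exactly the non-determinism that a formal semantics for weak references must tame.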
techreport{Don+Kfo:BUCS-TR-2005-X
author = {Kevin Donnelly and Assaf J. Kfoury}
title = {Some Considerations on Formal Semantics For Weak References}
institution = {Department of Computer Science, Boston University}
year = {2005}
month = {July}
churchreport = {yes}
type = {Technical Report}
number = {Don+Kfo:BUCS-TR-2005-X}
itrspaper = {no}
abstract = {
Weak references are references that do not prevent the object they point to from being garbage collected. Most realistic languages, including Java, SML/NJ, and OCaml to name a few, have some facility for programming with weak references. Weak references are used in implementing idioms like memoizing functions and hash-consing in order to avoid potential memory leaks. However, the semantics of weak references in many languages are not clearly specified. Without a formal semantics for weak references it becomes impossible to prove the correctness of implementations making use of this feature. Previous work by Hallett and Kfoury \cite{HK-weak} extends $\gc$ \cite{Mor+Fel+Har:FPCA-1995}, a language for modeling garbage collection, to $\weak$, a similar language with weak references. Using this previously formalized semantics for weak references, we consider two issues related to well-behavedness of programs. Firstly, we provide a new, simpler proof of the well-behavedness of the syntactically restricted fragment of $\weak$ defined in \cite{HK-weak}. Secondly, we give a natural semantic criterion for well-behavedness much broader than the syntactic restriction, which is useful as a principle for programming with weak references. Furthermore, we extend the result, proved in \cite{Mor+Fel+Har:FPCA-1995}, which allows one to use type inference to collect some reachable objects that are never used. We prove that this result holds for our language, and we extend it to allow the collection of weakly-referenced reachable garbage without incurring the computational overhead sometimes associated with collecting weak bindings (e.g., the need to recompute a memoized function). Lastly, we extend the semantic framework to model the key/value weak references found in Haskell, and we prove that the Haskell semantics is equivalent to a simpler semantics, due to the lack of side effects in our language.
}
}
techreport{Hal+Kfo:BUCS-TR-2005-031
author = {Joseph J. Hallett and Assaf J. Kfoury}
title = {A Formal Semantics For Weak References}
institution = {Department of Computer Science, Boston University}
year = {2005}
month = {May}
churchreport = {yes}
type = {Technical Report}
number = {Hal+Kfo:BUCS-TR-2005-031}
itrspaper = {no}
abstract = {
A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called $\weak$ that includes weak references and is derived from Morrisett, Felleisen, and Harper's $\gc$. $\gc$ formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite-rule in a language with \emph{weak references} introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of ``stuck evaluation'') and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
}
}
techreport{Bak+Kfo:TR-2005a
author = {Adam Bakewell and Assaf J. Kfoury}
title = {Properties of a Rewrite System for Unification with Expansion Variables}
institution = {Church Project, Boston University}
year = {2005}
month = {April}
churchreport = {yes}
type = {Technical Report}
}
techreport{Bak+Car+Kfo+Wel:BUCS-TR-2005a
author = {Adam Bakewell and Sébastien Carlier and A. J. Kfoury and J. B. Wells}
title = {Inferring Intersection Typings that are Equivalent to Call-by-Name and Call-by-Value Evaluations}
institution = {Church Project, Boston University}
year = {2005}
month = {April}
date = {2005-04-13}
churchreport = {yes}
type = {Technical Report}
abstract = {
We present a procedure to infer a typing for an arbitrary $\lambda$-term $M$ in an intersection-type system that translates into exactly the call-by-name (resp., call-by-value) evaluation of $M$. Our framework is the recently developed System~E which augments intersection types with \emph{expansion variables}. The inferred typing for $M$ is obtained by setting up a unification problem involving both type variables and expansion variables, which we solve with a confluent rewrite system. The inference procedure is \emph{compositional} in the sense that typings for different program components can be inferred in \emph{any} order, and without knowledge of the definition of other program components. Using expansion variables lets us achieve a compositional inference procedure easily. Termination of the procedure is generally undecidable. The procedure terminates and returns a typing iff the input $M$ is normalizing according to call-by-name (resp., call-by-value). The inferred typing is \emph{exact} in the sense that the exact call-by-name (resp., call-by-value) behaviour of $M$ can be obtained by a (polynomial) transformation of the typing. The inferred typing is also \emph{principal} in the sense that any other typing that translates the call-by-name (resp., call-by-value) evaluation of $M$ can be obtained from the inferred typing for $M$ using a substitution-based transformation.
}
}
inproceedings{Zhu+Xi:PADL-2005
author = {Dengping Zhu and Hongwei Xi}
title = {Safe Programming with Pointers through Stateful Views}
booktitle = {Proceedings of the 7th International Symposium on Practical Aspects of Declarative Languages}
publisher = {Springer-Verlag LNCS}
year = {2005}
month = {January}
pages = {TBA}
address = {Long Beach, CA}
churchreport = {yes}
abstract = {
The need for direct memory manipulation through pointers is essential in many applications. However, it is also commonly understood that the use (or probably misuse) of pointers is often a rich source of program errors. Therefore, approaches that can effectively enforce safe use of pointers in programming are highly sought after. ATS is a programming language with a type system rooted in a recently developed framework, Applied Type System, and a novel and desirable feature in ATS lies in its support for safe programming with pointers through a novel notion of \emph{stateful views}. In particular, even pointer arithmetic is allowed in ATS and guaranteed to be safe by the type system of ATS. In this paper, we give an overview of this feature in ATS, presenting some interesting examples based on a prototype implementation of ATS to demonstrate the practicality of safe programming with pointers through stateful views.
}
}
inproceedings{Mak+Wel:ESOP-2005
author = {Henning Makholm and J. B. Wells}
title = {Instant Polymorphic Type Systems for Mobile Process Calculi: Just Add Reduction Rules and Close}
churchreport = {yes}
dartreport = {yes}
crossref = {ESOP2005}
pages = {389--407}
note = {A more detailed predecessor is \cite{Mak+Wel:PolyStar-2004}}
abstract = {
Many different \emph{mobile process calculi} have been invented, and for each some number of type systems has been developed. Soundness and other properties must be proved separately for each calculus and type system. We present the \emph{generic} polymorphic type system Poly$\star$ which works for a wide range of mobile process calculi, including the $\pi$-calculus and Mobile Ambients. For any calculus satisfying some general syntactic conditions, well-formedness rules for types are derived automatically from the reduction rules and Poly$\star$ works otherwise unchanged. The derived type system is automatically sound (i.e., has subject reduction) and often more precise than previous type systems for the calculus, due to Poly$\star$'s \emph{spatial polymorphism}. We present an implemented type inference algorithm for Poly$\star$ which automatically constructs a typing given a set of reduction rules and a term to be typed. The generated typings are \emph{principal} with respect to certain natural type shape constraints.
}
}
inproceedings{Mak+Wel:ICFP-2005
author = {Henning Makholm and J. B. Wells}
title = {Type Inference, Principal Typings, and Let-Polymorphism for First-Class Mixin Modules}
crossref = {ICFP2005}
churchreport = {yes}
dartreport = {yes}
pages = {156--167}
abstract = {
A \emph{mixin module} is a programming abstraction that simultaneously generalizes $\lambda$-abstractions, records, and mutually recursive definitions. Although various mixin module type systems have been developed, no one has investigated \emph{principal typings} or developed \emph{type inference} for first-class mixin modules, nor has anyone added Milner's \emph{let-polymorphism} to such a system. \par This paper proves that typability is NP-complete for the naive approach followed by previous mixin module type systems. Because a $\lambda$-calculus extended with \emph{record concatenation} is a simple restriction of our mixin module calculus, we also prove the folk belief that typability is NP-complete for the naive early type systems for record concatenation. \par To allow feasible type inference, we present \textsf{Martini}, a new system of \emph{simple types} for mixin modules with \emph{principal typings}. \textsf{Martini} is conceptually simple, with no subtyping and a clean and balanced separation between unification-based type inference with type and row variables and constraint solving for safety of linking and field extraction. We have implemented a type inference algorithm and we prove its complexity to be $O(n^2)$, or $O(n)$ given a fixed bound on the number of field labels. (Technically, there is also a factor of $\alpha(n)$, but $\alpha(n)\leq 4$ for $n < 10^{10^{100}}$ (a ``googolplex'').) To prove the complexity, we need to present an algorithm for \emph{row unification} that may have been implemented by others, but which we could not find written down anywhere. Because \textsf{Martini} has principal typings, we successfully extend it with Milner's let-polymorphism.
}
}
techreport{Bak+Kfo:BUCS-TR-2004-0xx
author = {Adam Bakewell and Assaf J. Kfoury}
title = {Unification with Expansion Variables}
institution = {Department of Computer Science, Boston University}
year = {2004}
month = {December}
churchreport = {yes}
type = {Technical Report}
}
techreport{Bak+Car+Kfo+Wel:BUCS-TR-2004-0xy
author = {Adam Bakewell and Sébastien Carlier and A. J. Kfoury and J. B. Wells}
title = {Exact Intersection Typing Inference and Call-by-Name Evaluation}
institution = {Department of Computer Science, Boston University}
year = {2004}
month = {December}
note = {Superseded by \cite{Bak+Car+Kfo+Wel:BUCS-TR-2005a}}
churchreport = {yes}
type = {Technical Report}
}
techreport{Mak+Wel:PolyStar-2004
author = {Henning Makholm and J. B. Wells}
title = {Instant Polymorphic Type Systems for Mobile Process Calculi: Just Add Reduction Rules and Close}
year = {2004}
month = {November}
churchreport = {yes}
dartreport = {yes}
institution = {Heriot-Watt Univ., School of Math.\ & Comput.\ Sci.}
number = {HW-MACS-TR-0022}
xnote = {Available at http://www.macs.hw.ac.uk:8080/}
note = {A shorter successor is~\cite{Mak+Wel:ESOP-2005}}
abstract = {
Many different \emph{mobile process calculi} have been invented, and for each some number of type systems has been developed. Soundness and other properties must be proved separately for each calculus and type system. \par We present the \emph{generic} polymorphic type system Poly$\star$ which works for a wide range of mobile process calculi. For any calculus satisfying some general syntactic conditions, well-formedness rules for types are derived automatically from the reduction rules and Poly$\star$ works otherwise unchanged. The derived type system is automatically sound and often more precise than previous type systems for the calculus, due to Poly$\star$'s \emph{spatial polymorphism}. \par We present an implemented type inference algorithm for Poly$\star$ which automatically constructs a typing given a set of reduction rules and a term to be typed. The generated typings are \emph{principal} with respect to certain natural type shape constraints.
}
}
inproceedings{Chen+Zhu+Xi:PADL-2004
author = {Chiyan Chen and Dengping Zhu and Hongwei Xi}
title = {Implementing Cut Elimination: A Case Study of Simulating Dependent Types in Haskell}
booktitle = {Proceedings of the 6th International Symposium on Practical Aspects of Declarative Languages}
publisher = {Springer-Verlag LNCS vol. 3057}
year = {2004}
month = {June}
pages = {239-254}
address = {Dallas, TX}
churchreport = {yes}
abstract = {
Gentzen's Hauptsatz -- the cut elimination theorem -- in sequent calculi reveals a fundamental property of logic connectives in various logics such as classical logic and intuitionistic logic. In this paper, we implement a procedure in Haskell to perform cut elimination for intuitionistic sequent calculus, where we use types to guarantee that the procedure can only return a cut-free proof of the same sequent when given a proof of a sequent that may contain cuts. The contribution of the paper is two-fold. On the one hand, we present an interesting (and somewhat unexpected) application of the current type system of Haskell, illustrating through a concrete example how some typical use of dependent types can be simulated in Haskell. On the other hand, we identify several problematic issues with such a simulation technique and then suggest some approaches to addressing these issues in Haskell.
}
}
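The simulation technique the abstract alludes to predates GHC's native support; a modern reader may find it easiest to see the idea with a guarded recursive datatype (now a GADT). The sketch below is ours, not the paper's cut-elimination code: a type-level index rules out ill-formed cases statically.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Taking the head of an empty vector is a type error rather than a
    -- run-time failure; no VNil case is needed or allowed here.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x

In the paper, an index of this kind tracks the sequent being proved, so that the cut-elimination procedure can only return a proof of the same sequent.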
inproceedings{Chen+Shi+Xi:PADL-2004
author = {Chiyan Chen and Rui Shi and Hongwei Xi}
title = {A Typeful Approach to Object-Oriented Programming with Multiple Inheritance}
booktitle = {Proceedings of the 6th International Symposium on Practical Aspects of Declarative Languages}
publisher = {Springer-Verlag LNCS vol. 3057}
pages = {23-38}
year = {2004}
month = {June}
address = {Dallas, TX}
churchreport = {yes}
abstract = {
The wide use of object-oriented programming (OOP) in current software practice is evident. Despite extensive studies on typing programming objects, it is still undeniably a challenging research task to design a type system that can satisfactorily account for a variety of features (e.g., binary methods and multiple inheritance) in OOP. In this paper, we present a typeful approach to implementing objects that makes use of a recently introduced notion of guarded datatypes. In particular, we demonstrate how the feature of multiple inheritance can be supported with this approach, presenting a simple and general account for multiple inheritance in a typeful manner.
}
}
techreport{Amt+Mak+Wel:PolyA-2004a
author = {Torben Amtoft and Henning Makholm and J. B. Wells}
title = {PolyA: True Type Polymorphism for Mobile Ambients}
year = {2004}
month = {February}
institution = {Heriot-Watt Univ., School of Math.\ & Comput.\ Sci.}
number = {HW-MACS-TR-0015}
churchreport = {yes}
dartreport = {yes}
note = {A shorter successor is \cite{Amt+Mak+Wel:TCS-2004}}
abstract = {
Previous type systems for mobility calculi (the original Mobile Ambients, its variants and descendants, e.g., Boxed Ambients and Safe Ambients, and other related systems) offer little support for generic mobile agents. Previous systems either do not handle communication at all or globally assign fixed communication types to ambient names that do not change as an ambient moves around or interacts with other ambients. This makes it hard to type examples such as a ``messenger'' ambient that uses communication primitives to collect a message of non-predetermined type and deliver it to a non-predetermined destination. \par In contrast, we present our new type system $\mathsf{PolyA}$. Instead of assigning communication types to ambient names, $\mathsf{PolyA}$ assigns a type to each process $P$ that gives upper bounds on (1) the possible ambient nesting shapes of any process $P'$ to which $P$ can evolve, (2) the values that may be communicated at each location, and (3) the capabilities that can be used at each location. Because $\mathsf{PolyA}$ can type generic mobile agents, we believe $\mathsf{PolyA}$ is the first type system for a mobility calculus that provides type polymorphism comparable in power to polymorphic type systems for the $\lambda$-calculus. $\mathsf{PolyA}$ is easily extended to ambient calculus variants. A restriction of $\mathsf{PolyA}$ has principal typings.
}
}
techreport{Hal+Kfo:BUCS-TR-2004-004
author = {Joseph J. Hallett and Assaf J. Kfoury}
title = {Programming Examples Needing Polymorphic Recursion}
institution = {Department of Computer Science, Boston University}
year = {2004}
month = {January}
churchreport = {yes}
type = {Technical Report}
number = {BUCS-TR-2004-004}
xpostscript = {http://www.cs.bu.edu/techreports/2004-004-polymorphic-recursion-progams.ps.Z}
xpdf = {http://www.cs.bu.edu/techreports/pdf/2004-004-polymorphic-recursion-progams.pdf}
itrspaper = {yes}
abstract = {
Inferring types for polymorphic recursive function definitions (abbreviated to \emph{polymorphic recursion}) is a recurring topic on the mailing lists of popular typed programming languages. This is despite the fact that type inference for polymorphic recursion using $\forall$-types has been proved undecidable. This report presents several programming examples involving polymorphic recursion and determines their typability under various type systems, including the Hindley-Milner system, an intersection-type system, and extensions of these two. The goal of this report is to show that many of these examples are typable using a system of intersection types as an alternative form of polymorphism. By accomplishing this, we hope to lay the foundation for future research into a decidable intersection-type inference algorithm. \par We do not provide a comprehensive survey of type systems appropriate for polymorphic recursion, with or without type annotations inserted in the source language. Rather, we focus on examples for which types may be inferred without type annotations, with an emphasis on systems of intersection types.
}
}
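As a flavor of the examples the report collects, here is the classic nested-datatype case in Haskell (our example, not necessarily one from the report): the recursive call occurs at a different instance of the type, so Hindley-Milner inference rejects the definition unless a signature is supplied.

    -- A nested (non-regular) datatype: each level pairs up the elements.
    data Nest a = NilN | ConsN a (Nest (a, a))

    -- The recursive call is at type Nest (a, a), not Nest a: this is
    -- polymorphic recursion, and GHC requires the type signature below.
    lenN :: Nest a -> Int
    lenN NilN         = 0
    lenN (ConsN _ ns) = 1 + lenN ns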
inproceedings{Xi:TYPES-2004
author = {Hongwei Xi}
title = {Applied Type System (extended abstract)}
booktitle = {post-workshop Proceedings of TYPES 2003}
publisher = {Springer-Verlag LNCS 3085}
year = {2004}
pages = {394--408}
churchreport = {yes}
abstract = {
The framework Pure Type System (PTS) offers a simple and general approach to designing and formalizing type systems. However, in the presence of dependent types, there often exist some acute problems that make it difficult for PTS to accommodate many common realistic programming features such as general recursion, recursive types, effects (e.g., exceptions, references, input/output), etc. In this paper, we propose a new framework Applied Type System (ATS) to allow for designing and formalizing type systems that can readily support common realistic programming features. The key feature of ATS lies in a complete separation between statics, in which types are formed and reasoned about, and dynamics, in which programs are constructed and evaluated. With this separation, it is no longer possible for a program to occur in a type as is otherwise allowed in PTS. We not only present a formal development of ATS but also mention some examples in support of using ATS as a framework to form type systems for practical programming.
}
}
article{Andrews+etc:JAR-2004
author = {Peter Andrews and Matthew Bishop and Chad Brown and Sunil Issar and Frank Pfenning and Hongwei Xi}
title = {ETPS: A System to Help Students to Write Formal Proofs}
journal = {Journal of Automated Reasoning}
volume = {32}
year = {2004}
pages = {75--92}
churchreport = {yes}
abstract = {
ETPS (Educational Theorem Proving System) is a program that logic students can use to write formal proofs in first-order logic or higher-order logic. It helps students concentrate on the essential logical problems involved in proving theorems, and it automatically checks the proofs.
}
}
techreport{Mak+Wel:TIP-2004
author = {Henning Makholm and J. B. Wells}
title = {Type Inference for PolyA}
year = {2004}
month = {January}
churchreport = {yes}
dartreport = {yes}
disabledpdf = {http://www.macs.hw.ac.uk:8080/techreps/docs/files/HW-MACS-TR-0013.pdf}
institution = {Heriot-Watt Univ., School of Math.\ & Comput.\ Sci.}
number = {HW-MACS-TR-0013}
abstract = {
We present an automatic type inference algorithm for PolyA, a type system for Mobile Ambients presented in earlier work by us together with Torben Amtoft. We present not only a basic inference algorithm, but also several optimizations to it aimed at reducing the size of the inferred types. The final algorithm has been implemented and verified to work on small examples. We discuss some small problems that still exist and methods for solving them.
}
}
inproceedings{Amt+Mak+Wel:TCS-2004
churchreport = {yes}
dartreport = {yes}
author = {Torben Amtoft and Henning Makholm and J. B. Wells}
title = {PolyA: True Type Polymorphism for Mobile Ambients}
crossref = {TCS2004}
year = {2004}
pages = {591--604}
note = {A more detailed predecessor is \citeAmt+Mak+Wel:PolyA-2004a}
abstract = {
Previous type systems for mobility calculi (the original Mobile Ambients, its variants and descendants, e.g., Boxed Ambients and Safe Ambients, and other related systems) offer little support for generic mobile agents. Previous systems either do not handle communication at all or globally assign fixed communication types to ambient names that do not change as an ambient moves around or interacts with other ambients. This makes it hard to type examples such as a ``messenger'' ambient that uses communication primitives to collect a message of non-predetermined type and deliver it to a non-predetermined destination. \par In contrast, we present our new type system $\mathsf{PolyA}$. Instead of assigning communication types to ambient names, $\mathsf{PolyA}$ assigns a type to each process $P$ that gives upper bounds on (1) the possible ambient nesting shapes of any process $P'$ to which $P$ can evolve, (2) the values that may be communicated at each location, and (3) the capabilities that can be used at each location. Because $\mathsf{PolyA}$ can type generic mobile agents, we believe $\mathsf{PolyA}$ is the first type system for a mobility calculus that provides type polymorphism comparable in power to polymorphic type systems for the $\lambda$-calculus. $\mathsf{PolyA}$ is easily extended to ambient calculus variants. A restriction of $\mathsf{PolyA}$ has principal typings.
}
}
article{Haa+Wel:SCP-2004-v50
itrspaper = {yes}
author = {Christian Haack and J. B. Wells}
title = {Type Error Slicing in Implicitly Typed Higher-Order Languages}
journal = {Sci.\ Comput.\ Programming}
dartreport = {yes}
volume = {50}
pages = {189--224}
year = {2004}
churchreport = {yes}
note = {Supersedes \cite{Haa+Wel:ESOP-2003}}
doihttp = {http://dx.doi.org/10.1016/j.scico.2004.01.004}
doi = {doi:10.1016/j.scico.2004.01.004}
abstract = {
Previous methods have generally identified the location of a type error as a particular program point or the program subtree rooted at that point. We present a new approach that identifies the location of a type error as a set of program points (a \emph{slice}) all of which are necessary for the type error. We identify the criteria of \emph{completeness} and \emph{minimality} for type error slices. We discuss the advantages of complete and minimal type error slices over previous methods of presenting type errors. We present and prove the correctness of algorithms for finding complete and minimal type error slices for implicitly typed higher-order languages like Standard~ML.
}
}
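To illustrate the idea (our example, in Haskell rather than the paper's Standard ML): in the program sketched below, no single program point is individually at fault, and a type error slice would report the whole set of points that jointly cause the clash.

    f :: Num a => a -> a
    f x = x + 1

    main :: IO ()
    main = print (f (41 :: Int))

    -- bad = f True
    -- If uncommented, this is rejected. The slice would comprise the
    -- literal True, the application f True, and the use of (+) at a
    -- numeric type in the body of f; removing any one of these points
    -- removes the error.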
inproceedings{Car+Pol+Wel+Kfo:ESOP-2004
itrspaper = {yes}
churchreport = {yes}
dartreport = {yes}
author = {Sébastien Carlier and Jeff Polakow and J. B. Wells and A. J. Kfoury}
title = {System E: Expansion Variables for Flexible Typing with Linear and Non-linear Types and Intersection Types}
crossref = {ESOP2004}
year = {2004}
pages = {294--309}
altpdf = {http://www.macs.hw.ac.uk/DART/reports/D2.3/CPWK04.pdf}
abstract = {
Types are often used to control and analyze computer programs. Intersection types give a type system great flexibility, but have been difficult to implement. The $!$ operator, used to distinguish between linear and non-linear types, has good potential for improving tracking of resource usage, but has not been as flexible as one might want and has been difficult to use in compositional analysis. We introduce System~E, a type system with expansion variables, linear intersection types, and the $!$ type constructor for creating non-linear types. System~E is designed for maximum flexibility in automatic type inference and for ease of automatic manipulation of type information. Expansion variables allow postponing the choice of which typing rules to use until later constraint solving gives enough information to allow making a good choice. System~E removes many limitations and technical difficulties that expansion variables had in the earlier System~I and extends expansion variables to work with $!$ in addition to the intersection type constructor. We present subject reduction results for call-by-need evaluation and discuss approaches for implementing program analysis in System~E.
}
}
inproceedings{Car+Wel:PPDP-2004
itrspaper = {yes}
churchreport = {yes}
dartreport = {yes}
year = {2004}
author = {Sébastien Carlier and J. B. Wells}
title = {Type Inference with Expansion Variables and Intersection Types in System~E and an Exact Correspondence with $\beta$-Reduction}
crossref = {PPDP2004}
xpages = {132--143}
note = {Completely supersedes \cite{Car+Wel:TIEV-2004}}
abstract = {
System~E is a recently designed type system for the $\lambda$-calculus with intersection types and \emph{expansion variables}. During automatic type inference, expansion variables allow postponing decisions about which non-syntax-driven typing rules to use until the right information is available and allow implementing the choices via substitution. \par This paper uses expansion variables in a unification-based automatic type inference algorithm for System~E that succeeds for every $\beta$-normalizable $\lambda$-term. We have implemented and tested our algorithm and released our implementation publicly. Each step of our unification algorithm corresponds to exactly one $\beta$-reduction step, and \emph{vice versa}. This formally verifies and makes precise a step-for-step correspondence between type inference and $\beta$-reduction. This also shows that type inference with intersection types and expansion variables can, in effect, carry out an arbitrary amount of partial evaluation of the program being analyzed.
}
}
phdthesis{neergaard-capld-2004
author = {Peter Møller Neergaard}
title = {Complexity Aspects of Programming Language Design---From Logspace to Elementary Time via Proofnets and Intersection Types}
school = {Brandeis University}
year = {2004}
month = {October}
abstract = {
More than being just a tool for expressing algorithms, a well-designed programming language allows the user to express her ideas efficiently. The design choices, however, affect the efficiency of the algorithms written in the languages. It is therefore important to understand how such choices affect the expressibility of programming languages. The dissertation approaches this problem from two different angles. \par The dissertation presents what appears to be the first resource-independent characterization of the \textsc{logspace}-computable \emph{functions}. This is done in the form of a \emph{function algebra}~BC$^{-}_\varepsilon$ that is sound and complete for \textsc{logspace}. BC$^{-}_\varepsilon$ provides insights into restricting recursion that might be useful to provide compile-time guarantees on resource bounds. Furthermore, by comparison with work by Bellantoni and Cook, it highlights linearity as a potential distinction between~\textsc{logspace} and \textsc{ptime}. \par The dissertation also formalizes how successful type inference fundamentally depends on the \emph{amnesia} of the type system: the fact that the type system forgets information about the program. \emph{Intersection type systems} without \emph{idempotency} provide an exact analysis where all information is retained. The dissertation shows that in such intersection type systems, normalization is equivalent to type inference in \emph{every single case}. Time bounds on type inference and normalization are thus identical, and doing compile-time type inference is useless. \par The latter part reveals that \emph{expansion variables} proposed for type inference are closely related to the boxes already known from the \emph{linear logic} technology of \emph{proofnets}. This hitherto unknown connection between the two technologies suggests fruitful exchanges between the two fields. Moreover, it extends and formalizes a previously known connection between intersection types and proofnets.
}
pmnwww = {dissertation}
churchreport = {yes}
}
article{neergaard-jfp-ta
author = {Peter Møller Neergaard}
title = {A Bargain for Intersection Types: A Simple Strong Normalization Proof}
journal = {J.~Funct.\ Programming}
note = {To appear}
pmnwww = {journal}
churchreport = {yes}
abstract = {
This pearl gives a discount proof of the folklore theorem that every strongly $\beta$-normalizing $\lambda$-term is typable with an intersection type. (We consider typings that do not use the empty intersection~$\omega$ which can type any term.) The proof uses the perpetual reduction strategy which finds a longest path. This is a simplification over existing proofs that consider any longest reduction path. The choice of reduction strategy avoids the need for weakening or strengthening of type derivations. The proof becomes a bargain because it works for more intersection type systems, while being simpler than existing proofs.
}
}
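A standard example of the gap the theorem bridges (ours, not the paper's): the term $\lambda x.\,x\,x$ is strongly normalizing yet has no simple type, but it is typable with an intersection, since the two occurrences of $x$ may use the two components of the intersection separately:

    \[ \lambda x.\, x\, x \;:\; ((\alpha \to \beta) \wedge \alpha) \to \beta \]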
techreport{Neergaard-brtcl-2004
author = {Peter Møller Neergaard}
title = {BC$^{\textrm{-}}_\epsilon$: A Recursion-Theoretic Characterization of \textsc{logspace}}
institution = {Brandeis University}
year = {2004}
month = {March}
note = {Preliminary version}
churchreport = {yes}
abstract = {
We present BC$^{\textrm{-}}_\epsilon$, a function algebra that is sound and complete for \textsc{logspace}. It is based on the novel recursion-theoretic principle of generalized recursion where the step length of primitive recursion can be varied at each step. This allows elegant representations of functions like logarithm and division. Notably, and unlike characterizations found in the literature, there does not appear to be a way to represent pairs in the algebra. \par The soundness proof uses a simulation based on ``computational amnesia'' where, analogously to tail recursion optimization, a recursive call replaces its own activation record. Even though the call is not necessarily a tail call, we recover the full recursion by repeatedly restarting the computation.
}
pmnwww = {techreport}
}
inproceedings{neergaard-aplas-2004
author = {Peter Møller Neergaard}
title = {A Functional Language for Logarithmic Space}
crossref = {APLAS2004}
year = {2004}
pages = {311--326}
doi = {http://www.springerlink.com/(qe2jbx55cx2ecx55i0bnjp45)/app/home/contribution.asp?referrer}
abstract = {
More than being just a tool for expressing algorithms, a well-designed programming language allows the user to express her ideas efficiently. The design choices, however, affect the efficiency of the algorithms written in the languages. It is therefore important to understand how such choices affect the expressibility of programming languages. \par The paper pursues very low complexity programs by presenting a first-order function algebra~\textrm{BC}$^{\textrm{-}}_\varepsilon$ that captures exactly \textsc{logspace}, the functions computable in logarithmic space. This gives insights into the expressiveness of recursion. \par The important technical features of \textrm{BC}$^{\textrm{-}}_\varepsilon$ are (1) a separation of variables into safe and normal variables where recursion can only be done over the latter; (2) linearity of the recursive call; and (3) recursion with a variable step length (course-of-value recursion). Unlike formulations of \textsc{logspace} via Turing machines, \textrm{BC}$^{\textrm{-}}_\varepsilon$ makes no references to outside resource measures, e.g., the size of the memory used. This appears to be the first such characterization of \textsc{logspace}-computable functions (not just predicates). \par The proof that all \textrm{BC}$^{\textrm{-}}_\varepsilon$-programs can be evaluated in \textsc{logspace} is of separate interest to programmers: it trades space for time and evaluates recursion with at most one recursive call without a call stack.
}
churchreport = {yes}
pmnwww = {conference}
copyright = {\copyright\ Springer-Verlag; appears in the Lecture Notes in Computer Science series}
}
inproceedings{Nee-Mai-icfp-2004
itrspaper = {yes}
author = {Peter Møller Neergaard and Harry G. Mairson}
title = {Types, Potency, and Idempotency: Why Nonlinearity and Amnesia Make a Type System Work}
year = {2004}
abstract = {
Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the \emph{amnesia} of the type system: the \emph{nonlinearity} by which all instances of a variable are constrained to have the same type. \par Recent work on \emph{intersection types} has advocated their usefulness for static analysis and modular compilation. We analyze system~$\mathbb{I}$ (and some instances of its descendant, system~E), an intersection type system with a type inference algorithm. Because system~$\mathbb{I}$ lacks \emph{idempotency}, each occurrence of a variable requires a distinct type. Consequently, type inference is equivalent to normalization in \emph{every single case}, and time bounds on type inference and normalization are identical. Similar relationships hold for other intersection type systems without idempotency. \par The analysis is founded on an investigation of the relationship between \emph{linear logic} and intersection types. We show a lockstep correspondence between normalization and type inference. The latter shows the promise of intersection types to facilitate static analyses of varied granularity, but also belies an immense challenge: to add amnesia to such analysis without losing all of its benefits.
}
crossref = {ICFP2004}
churchreport = {yes}
pmnwww = {conference}
copyright = {\copyright\ ACM, 2004. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM SIGPLAN Notices, Volume~39, Issue~9 (September 2004)}
doi = {http://portal.acm.org/citation.cfm?doid}
}
article{Kfo+Wel:TCSB-2004-v311n1-3
itrspaper = {yes}
churchreport = {yes}
author = {Assaf J. Kfoury and J. B. Wells}
title = {Principality and Type Inference for Intersection Types Using Expansion Variables}
journal = {Theoret.\ Comput.\ Sci.}
volume = {311}
number = {1--3}
pages = {1--70}
xnumbernote = {Elsevier refers to "issues" 1--3, not "number"}
year = {2004}
doi = {doi:10.1016/j.tcs.2003.10.032}
sciencedirect = {http://authors.elsevier.com/sd/article/S0304397503005772}
tcsref = {TCS4942}
documenturl = {http://www.church-project.org/reports/Kfo+Wel:TCSB-2004-v311n1-3.html}
note = {Supersedes \cite{Kfo+Wel:POPL-1999}. For omitted proofs, see the longer report \cite{Kfo+Wel:PTI-2003}}
abstract = {
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable $\lambda$-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System~F, where the restriction to any finite rank 3 or above has neither principal typings nor decidable type inference. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (without rank restrictions). In our system the earlier notion of \emph{expansion} is integrated in the form of \emph{expansion variables}, which are subject to substitution as are ordinary variables. A unification-based type inference algorithm is presented using a new form of unification, $\beta$-unification.
}
}
inproceedings{Zhu+Xi:APLAS-2003
author = {Dengping Zhu and Hongwei Xi}
title = {A Typeful and Tagless Representation for XML Documents}
booktitle = {Proceedings of the First Asian Symposium on Programming Languages and Systems}
year = {2003}
month = {November}
address = {Beijing, China}
pages = {89-104}
churchreport = {yes}
abstract = {
When constructing programs to process XML documents, we immediately face the question as to how XML documents should be represented internally in the programming language we use. Currently, most representations for XML documents are typeless in the sense that the type information of an XML document cannot be reflected in the type of the representation of the document (if the representation is assumed to be typed). Though convenient to construct, a typeless representation for XML documents often makes use of a large number of representation tags, which not only require some significant amount of space to store but may also incur numerous run-time tag checks when the represented documents are processed. Moreover, with a typeless representation for XML documents, it becomes difficult or even impossible to statically capture program invariants that are related to the type information of XML documents. Building upon our recent work on guarded recursive datatypes, we present an approach to representing XML documents in this paper that not only allows the type information of an XML document to be reflected in the type of the representation of the document but also significantly reduces the need for representation tags that are required in typeless representations. With this approach, we become able to process XML documents in a typeful manner, thus reaping various well-known software engineering benefits from the presence of types.
}
}
inproceedings{Taha+Ellner+Xi:EMSOFT-2003
author = {Walid Taha and Stephan Ellner and Hongwei Xi}
title = {Generating Imperative, Heap-Bounded Programs in a Functional Setting}
booktitle = {Proceedings of the Third International Conference on Embedded Software}
publisher = {Springer-Verlag LNCS 2855}
year = {2003}
month = {October}
address = {Philadelphia, PA}
pages = {340--355}
churchreport = {yes}
abstract = {
High-level programming languages offer significant expressivity but provide little or no guarantees about resource utilization. Resource-bounded languages provide strong guarantees about the runtime behavior of programs but often lack mechanisms that allow programmers to write more structured, modular, and reusable programs. To overcome this basic tension in language design, this paper advocates taking into account the natural distinction between the development platform and the deployment platform for resource-sensitive software. To illustrate this approach, we develop the meta-theory for GeHB, a two-level language in which first-stage computations can involve arbitrary resource consumption, but the second stage can only involve functional programs that do not require new heap allocations. As an example of such a second-stage language we use the recently proposed first-order functional language LFPL. LFPL can be compiled directly to malloc-free, imperative C code. We show that all generated programs in GeHB can be transformed into well-typed LFPL programs, thus ensuring that the results established for LFPL are directly applicable to GeHB.
}
}
inproceedings{Xi:SEFM-2003
author = {Hongwei Xi}
title = {Facilitating Program Verification with Dependent Types}
booktitle = {Proceedings of the International Conference on Software Engineering and Formal Methods}
year = {2003}
month = {September}
address = {Brisbane, Australia}
pages = {72--81}
churchreport = {yes}
abstract = {
The use of types in capturing program invariants is overwhelming in practical programming. The type systems in languages such as ML and Java scale convincingly to realistic programs but they are of relatively limited expressive power. In this paper, we show that the use of a restricted form of dependent types can enable us to capture many more program invariants such as memory safety while retaining practical type-checking. The programmer can encode program invariants with type annotations and then verify these invariants through static type-checking. Also the type annotations can serve as informative program documentation, which are mechanically verified and can thus be fully trusted. We argue with realistic examples that this restricted form of dependent types can significantly facilitate program verification as well as program documentation.
}
}
inproceedings{Chen+Xi:ICFP-2003
author = {Chiyan Chen and Hongwei Xi}
title = {Meta-Programming through Typeful Code Representation}
booktitle = {Proceedings of the Eighth ACM SIGPLAN International Conference on Functional Programming}
year = {2003}
month = {August}
address = {Uppsala, Sweden}
pages = {275--286}
churchreport = {yes}
abstract = {
By allowing the programmer to write code that can generate code at run-time, meta-programming offers a powerful approach to program construction. For instance, meta-programming can often be employed to enhance program efficiency and facilitate the construction of generic programs. However, meta-programming, especially in an untyped setting, is notoriously error-prone. In this paper, we aim at making meta-programming less error-prone by providing a type system to facilitate the construction of correct meta-programs. We first introduce some code constructors for constructing typeful code representation in which program variables are replaced with de Bruijn indices, and then formally demonstrate how such typeful code representation can be used to support meta-programming. The main contribution of the paper lies in the recognition and then the formalization of a novel approach to typed meta-programming that is practical, general and flexible.
}
}
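A minimal sketch of such a typeful, de Bruijn-indexed code representation, rendered with GHC GADTs rather than the paper's own encoding (the names are ours): the type of a code value records both the type of the object program and the types of its free variables.

    {-# LANGUAGE GADTs #-}

    -- A variable is a de Bruijn index into a type-level environment g.
    data Var g t where
      VZ :: Var (g, t) t
      VS :: Var g t -> Var (g, s) t

    -- Code g t represents a well-typed object program of type t whose
    -- free variables are drawn from g.
    data Code g t where
      Var :: Var g t -> Code g t
      Lam :: Code (g, a) b -> Code g (a -> b)
      App :: Code g (a -> b) -> Code g a -> Code g b

    -- The representation of \x -> x, at any type a.
    idCode :: Code g (a -> a)
    idCode = Lam (Var VZ)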
inproceedings{Chen+Xi:PEPM-2003
author = {Chiyan Chen and Hongwei Xi}
title = {Implementing Typeful Program Transformation}
booktitle = {Proceedings of ACM SIGPLAN Workshop on Partial Evaluation and Semantics Based Program Manipulation}
year = {2003}
month = {June}
churchreport = {yes}
address = {San Diego, CA}
abstract = {
The notion of program transformation is ubiquitous in programming language studies on interpreters, compilers, partial evaluators, etc. In order to implement a program transformation, we need to choose a representation in the meta language, that is, the programming language in which we construct programs, for representing object programs, that is, the programs in the object language on which the program transformation is to be performed. In practice, most representations chosen for typed object programs are typeless in the sense that the type of an object program cannot be reflected in the type of its representation. This is unsatisfactory as such typeless representations make it impossible to capture in the type system of the meta language various invariants in a program transformation that are related to the types of object programs. In this paper, we propose an approach to implementing program transformations that makes use of a first-order typeful program representation formed in Dependent ML (DML), where the type of an object program as well as the types of the free variables in the object program can be reflected in the type of the representation of the object program. We introduce some programming techniques needed to handle this typeful program representation, and then present an implementation of a CPS transform function where the relation between the type of an object program and that of its CPS transform is captured in the type system of DML. In a broader context, we claim to have taken a solid step along the line of research on constructing certifying compilers.
}
}
inproceedings{Xi:SBLP-2003
author = {Hongwei Xi}
title = {Dependently Typed Pattern Matching}
booktitle = {Proceedings of Simposio Brasileiro de Linguagens de Programacao}
year = {2003}
month = {May}
address = {Ouro Preto, Brazil}
churchreport = {yes}
abstract = {
The mechanism for declaring datatypes to model data structures in functional programming languages such as Standard ML and Haskell can offer both convenience in programming and clarity in code. With the introduction of dependent datatypes in DML, the programmer can model data structures with more accuracy, thus capturing more program invariants. In this paper, we study some practical aspects of dependent datatypes that affect both type-checking and compiling pattern matching. The results, which have already been tested, demonstrate that dependent datatypes can not only offer various programming benefits but also lead to performance gains, yielding a concrete case where safer programs run faster.
}
}
unpublished{Wel+Plu+Kam:DMP-2003
author = {J. B. Wells and Detlef Plump and Fairouz Kamareddine}
title = {Diagrams for Meaning Preservation}
note = {Long version of \cite{Wel+Plu+Kam:RTA-2003}}
month = {April}
year = {2003}
churchreport = {yes}
abstract = {
This paper presents an abstract framework and multiple diagram-based methods for proving \emph{meaning preservation}, i.e., that all rewrite steps of a rewriting system preserve the meaning given by an operational semantics based on a rewriting strategy. While previous rewriting-based methods have generally needed the treated rewriting system as a whole to have such properties as, e.g., \emph{confluence}, \emph{standardization}, and/or \emph{termination} or \emph{boundedness} of \emph{developments}, our methods can work when \emph{all} of these conditions fail, and thus can handle more rewriting systems. We isolate the new \emph{lift/project with termination} diagram as the key proof idea and show that previous rewriting-based methods (Plotkin's method based on \emph{confluence} and \emph{standardization} and Machkasova and Turbak's method based on distinct \emph{lift} and \emph{project} properties) implicitly use this diagram. Furthermore, our framework and proof methods help reduce the proof burden substantially by, e.g., supporting separate treatment of partitions of the rewrite steps, needing only \emph{elementary} diagrams for rewrite step interactions, excluding many rewrite step interactions from consideration, needing weaker termination properties, and providing generic support for using developments in combination with any method.
}
}
article{Xi:JUCS-2003
author = {Hongwei Xi}
title = {Dependently Typed Pattern Matching}
journal = {Journal of Universal Computer Science}
volume = {9}
number = {8}
year = {2003}
pages = {851-872}
churchreport = {yes}
abstract = {
The mechanism for declaring datatypes to model data structures in programming languages such as Standard ML and Haskell can offer both convenience in programming and clarity in code. With the introduction of dependent datatypes in DML, the programmer can model data structures with more accuracy, thus capturing more program invariants. In this paper, we study some practical aspects of dependent datatypes that affect both type-checking and compiling pattern matching. The results, which have already been tested, demonstrate that dependent datatypes can not only offer various programming benefits but also lead to performance gains, yielding a concrete case where safer programs run faster.
}
}
inproceedings{Xi+Chen+Chen:POPL-2003
author = {Hongwei Xi and Chiyan Chen and Gang Chen}
title = {Guarded Recursive Datatype Constructors}
booktitle = {Proceedings of the 30th ACM SIGPLAN Symposium on Principles of Programming Languages}
year = {2003}
month = {January}
address = {New Orleans}
pages = {224--235}
churchreport = {yes}
abstract = {
We introduce a notion of guarded recursive (g.r.) datatype constructors, generalizing the notion of recursive datatypes in functional programming languages such as ML and Haskell. We address both theoretical and pragmatic issues resulting from this generalization. On one hand, we design a type system to formalize the notion of g.r. datatype constructors and then prove the soundness of the type system. On the other hand, we present some significant applications (e.g., implementing objects, implementing staged computation, etc.) of g.r. datatype constructors, arguing that g.r. datatype constructors can have far-reaching consequences in programming. The main contribution of the paper lies in the recognition and then the formalization of a programming notion that is of both theoretical interest and practical use.
}
}
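Guarded recursive datatype constructors are a direct ancestor of the GADTs now found in Haskell and OCaml. The minimal Haskell sketch below (hypothetical names, not the paper's formal system) shows the characteristic guarded result types, which let an evaluator return a bare t with no runtime tags:

    {-# LANGUAGE GADTs #-}

    -- Each constructor's result type is "guarded": it fixes the type
    -- parameter, so evaluation needs no runtime tags.
    data Term t where
      IntT  :: Int  -> Term Int
      BoolT :: Bool -> Term Bool
      Plus  :: Term Int -> Term Int -> Term Int
      If    :: Term Bool -> Term t -> Term t -> Term t

    evalT :: Term t -> t
    evalT (IntT n)   = n
    evalT (BoolT b)  = b
    evalT (Plus a b) = evalT a + evalT b
    evalT (If c t e) = if evalT c then evalT t else evalT e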
inproceedings{Wel+Plu+Kam:RTA-2003
author = {J. B. Wells and Detlef Plump and Fairouz Kamareddine}
title = {Diagrams for Meaning Preservation}
note = {A long version is \cite{Wel+Plu+Kam:DMP-2003}}
crossref = {RTA2003}
year = {2003}
pages = {88--106}
churchreport = {yes}
abstract = {
This paper presents an abstract framework and multiple diagram-based methods for proving \emph{meaning preservation}, i.e., that all rewrite steps of a rewriting system preserve the meaning given by an operational semantics based on a rewriting strategy. While previous rewriting-based methods have generally needed the treated rewriting system as a whole to have such properties as, e.g., \emph{confluence}, \emph{standardization}, and/or \emph{termination} or \emph{boundedness} of \emph{developments}, our methods can work when \emph{all} of these conditions fail, and thus can handle more rewriting systems. We isolate the new \emph{lift/project with termination} diagram as the key proof idea and show that previous rewriting-based methods (Plotkin's method based on \emph{confluence} and \emph{standardization} and Machkasova and Turbak's method based on distinct \emph{lift} and \emph{project} properties) implicitly use this diagram. Furthermore, our framework and proof methods help reduce the proof burden substantially by, e.g., supporting separate treatment of partitions of the rewrite steps, needing only \emph{elementary} diagrams for rewrite step interactions, excluding many rewrite step interactions from consideration, needing weaker termination properties, and providing generic support for using developments in combination with any method.
}
}
misc{Chen:JLC-2003
author = {Gang Chen}
title = {Soundness of Coercion in the Calculus of Constructions}
churchreport = {yes}
year = {2003}
keywords = {Subtyping, coercion, Calculus of Constructions}
xpostscript = {http://www.church-project.org/reports/electronic/Chen:JLC-2003.ps.gz}
xurl = {http://www.church-project.org/reports/electronic/Chen:JLC-2003.pdf.gz}
xpdf = {http://www.church-project.org/reports/electronic/Chen:JLC-2003.pdf}
abstract = {
We present a coercive subtyping system for the calculus of constructions. The proposed system $\lambda C_\leq^{co}$ is obtained essentially by adding coercions and $\eta$-conversion to $\lambda C_\leq$~\cite{Chen97a}, which is a subtyping extension to the calculus of constructions without coercions. We prove that a coercion $c : A \leq B$ is semantically sound. That is, 1) $c$ is a term of type $A \to B$; 2) $c$ behaves like an identity function. This development follows the spirit of the semantical interpretation of subtyping in \cite{LMS95,LMS98}.
}
note = {To appear in Journal of Logic and Computation}
}
inproceedings{Chen:POPL-2003
author = {Gang Chen}
title = {Coercive Subtyping for the Calculus of Constructions (extended abstract)}
churchreport = {yes}
crossref = {POPL2003}
keywords = {Subtyping, coercion, Calculus of Constructions}
year = {2003}
abstract = {
We present a coercive subtyping system for the calculus of constructions. The proposed system $\lambda C_\leq^{co}$ is obtained essentially by adding coercions and $\eta$-conversion to $\lambda C_\leq$~\cite{Chen97a}, which is a subtyping extension to the calculus of constructions without coercions. Following \cite{LMS95,LMS98}, the coercive subtyping $c : A \leq B$ is understood as a special case of typing in the arrow type $c : A \to B$ such that the term $c$ behaves like an identity function. We prove that, with respect to this semantic interpretation, the proposed coercive subtyping system is sound and complete, and that this completeness leads to transitivity elimination. In addition, we establish the equivalence between $\lambda C_\leq^{co}$ and $\lambda C_{\beta\eta}$; this fact implies that $\lambda C_\leq^{co}$ has confluence, subject reduction and strong normalization. We propose a formalization of the coercion inference problem and present a sound and complete coercion inference algorithm.
}
}
unpublished{Nee-Mai-rank-bnd-inter-ta
author = {Peter Møller Neergaard and Harry G. Mairson}
title = {Rank Bounded Intersection: Types, Potency, and Idempotency}
abstract = {
Intersection type systems realize a \emph{finite} polymorphism where different types for a term are itemized explicitly. We analyze System~$\mathbb{I}$, a rank-bounded intersection type system where intersection is not \emph{associative}, \emph{commutative}, or \emph{idempotent} (ACI), but includes a substitution mechanism employing \emph{expansion variables} that facilitates modular program composition and flow analysis. This type system is used in a prototype intersection type compiler for the Church Project. We prove that the problem of type inference is exactly as hard as the problem of normalization: the worst-case cost of both is an elementary function, where the iterated exponential depends on the rank. The key to these results is that simply-typed terms must be linear without ACI, but have the usual nonelementary power with ACI. Further, type inference is \emph{always} synonymous with normalization: the cost of computing the principal typing of any term is exactly the cost of computing its normal form. These results do not hold when AC, and particularly I, is added.
}
note = {This paper is obsolete and has been replaced with \cite{Nee-Mai-icfp-2004}}
year = {2003}
churchreport = {yes}
}
inproceedings{Haa+Wel:ESOP-2003
itrspaper = {yes}
author = {Christian Haack and J. B. Wells}
title = {Type Error Slicing in Implicitly Typed Higher-Order Languages}
year = {2003}
crossref = {ESOP2003}
pages = {284--301}
churchreport = {yes}
note = {Superseded by \cite{Haa+Wel:SCP-2004-v50}}
dartreport = {yes}
xdartreport = {not a DART paper, although its journal version is part of a deliverable}
abstract = {
Previous methods have generally identified the location of a type error as a particular program point or the program subtree rooted at that point. We present a new approach that identifies the location of a type error as a set of program points (a \emph{slice}) all of which are necessary for the type error. We describe algorithms for finding minimal type error slices for implicitly typed higher-order languages like Standard ML.
}
}
unpublished{Kfo+Wel:PTI-2003
author = {Assaf J. Kfoury and J. B. Wells}
title = {Principality and Type Inference for Intersection Types Using Expansion Variables}
churchreport = {yes}
darttodo = {Mention this.}
note = {Supersedes \cite{Kfo+Wel:POPL-1999}}
month = {August}
year = {2003}
abstract = {
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable $\lambda$-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System~F, where the finite-rank restriction at every rank 3 and above has neither principal typings nor decidable type inference. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (without rank restrictions). In our system the earlier notion of \emph{expansion} is integrated in the form of \emph{expansion variables}, which are subject to substitution as are ordinary variables. A unification-based type inference algorithm is presented using a new form of unification, $\beta$-unification.
}
}
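As a rough illustration of the type language involved (the names Ty, Expand and substE are hypothetical, and the operation shown is a caricature rather than the paper's definition): intersection types with expansion variables can be represented as an ordinary datatype, with substitution reaching expansion variables as well as type variables.

    -- Hypothetical syntax of intersection types with expansion variables.
    data Ty
      = TVar String        -- ordinary type variable
      | Arrow Ty Ty        -- function type
      | Inter Ty Ty        -- intersection type
      | Expand String Ty   -- expansion variable applied to a type
      deriving Show

    -- Substitution for an expansion variable (here modeled as applying a
    -- type transformer at each occurrence); with ordinary substitution,
    -- this is the single kind of operation the abstract says suffices
    -- for principality.
    substE :: String -> (Ty -> Ty) -> Ty -> Ty
    substE e f (Expand e' t)
      | e == e'   = f (substE e f t)
      | otherwise = Expand e' (substE e f t)
    substE e f (Arrow a b) = Arrow (substE e f a) (substE e f b)
    substE e f (Inter a b) = Inter (substE e f a) (substE e f b)
    substE _ _ t           = t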
unpublished{Nee-Mai-how-light-is-safe-rec-LICS03
author = {Peter Møller Neergaard and Harry G. Mairson}
title = {How Light is Safe Recursion? Translations Between Logics of Polynomial Time}
note = {Unpublished note}
year = {2003}
}
unpublished{Nee-Mai-how-light-is-safe-rec
crossref = {Nee-Mai-how-light-is-safe-rec-LICS03}
year = {2003}
abstract = {
We investigate the simulation of Bellantoni and Cook's function algebra BC by Girard's Light Linear Logic LAL (as amended by Asperti). Both of these languages characterize exactly the polynomial time functions. Given a $\textsc{ptime}$ function $f$ described in BC by a program $\phi_f$, how can we translate $\phi_f$ into LAL? \par The best known translation, due to Murawski and Ong, compiles only the \emph{linear} fragment $\textsc{bc}^-$, where \emph{safe} variables cannot be shared. We show that this fragment can be evaluated in $\textsc{logspace}$. Next, we prove that extensions of their approach to the full BC algebra, as well as any translation that interprets integers and primitive recursion via their standard, System F-inspired inductive type coding, cannot provide a correct translation into LAL. Finally, we provide a correct translation, based on generalizing Turing machine simulations into an SECD compiler, realized in LAL. We further elaborate this translation so that it is compositional on the BC combinators.
}
churchreport = {yes}
pmnwww = {conference}
}
inproceedings{Amt+Mul:IpRA-2002
author = {Torben Amtoft and Robert Muller}
title = {Inferring Annotated Types for Inter-procedural Register Allocation with Constructor Flattening}
booktitle = {The 2003 ACM SIGPLAN International Workshop on Types in Language Design and Implementation (TLDI'03)}
location = {New Orleans, Louisiana, USA}
pages = {86--97}
year = {2003}
month = {January}
publisher = {ACM Press}
churchreport = {yes}
abstract = {
We introduce an annotated type system for a compiler intermediate language. The type system is designed to support inter-procedural register allocation and the representation of tuples and variants directly in the register file. We present an algorithm for assigning annotations and prove its soundness with respect to the type system.
}
}
article{Nee-Sor-IAC-2002-v178n1
crossref = {Nee+Sor:IAC-ta}
pmnwww = {journal}
churchreport = {yes}
year = {2002}
abstract = {
For a notion of reduction in a $\lambda$-calculus one can ask whether a term satisfies \emph{conservation} and \emph{uniform normalization}. Conservation means that single-step reductions of the term preserve infinite reduction paths from the term. Uniform normalization means that either the term will have no reduction paths leading to a normal form, or all reduction paths will lead to a normal form. \par In the classical \emph{conservation theorem} for $\Lambda^I$ the distinction between the two notions is not clear: uniform normalization implies conservation, and conservation also implies uniform normalization. The reason for this is that $\Lambda^I$ is closed under reduction, due to the fact that reductions never erase terms in $\Lambda^I$. More generally, for non-erasing reductions the two notions are equivalent on a set closed under reduction. However, when turning to erasing reductions the distinction becomes important, as conservation no longer implies uniform normalization. \par This paper presents a new technique for finding uniformly normalizing subsets of a $\lambda$-calculus. This is done by combining a syntactic and a semantic criterion. The technique is demonstrated by several applications. The technique is used to present a new uniformly normalizing subset of the pure $\lambda$-calculus; this subset is a superset of~$\Lambda^I$ and thus contains erasing $\mathbf{K}$-redexes. The technique is also used to prove strong normalization from weak normalization of the simply typed $\lambda$-calculus extended with pairs; this is an extension of techniques developed recently by Sørensen and Xi. Before presenting the technique, the paper presents a simple proof of a slightly weaker form of the characterization of perpetual redexes by Bergstra and Klop; this is a stepping stone for the later applications of the technique.
}
copyright = {\copyright~This paper is Copyright Elsevier.}
}
article{Nee+Sor:IAC-ta
author = {Peter Møller Neergaard and Morten Heine Bjørnlund Sørensen}
title = {Conservation and Uniform Normalization in $\lambda$-calculi with Erasing Reductions}
journal = {Information and Computation}
volume = {178}
number = {1}
pages = {149--179}
month = {October}
year = {2002}
}
inproceedings{Nee-Mai-LIS-ICC02
author = {Peter Møller Neergaard and Harry G. Mairson}
title = {LAL Is Square: Representation and Expressiveness in Light Affine Logic}
crossref = {ICC2002}
year = {2002}
churchreport = {yes}
pmnwww = {conference}
abstract = {
We focus on how the choice of input-output representation has a crucial impact on the expressiveness of so-called ``logics of polynomial time.'' Our analysis illustrates this dependence in the context of \emph{Light Affine Logic} (LAL), which is both a restricted version of Linear Logic, and a primitive functional programming language with restricted sharing of arguments. By slightly relaxing representation conventions, we derive doubly-exponential expressiveness bounds for this ``logic of polynomial time.'' We emphasize that \emph{squaring} is the unifying idea that relates upper bounds on cut elimination for LAL with lower bounds on representation. Representation issues arise in the simulation of $\textsc{dtime}(2^{2^n})$, where we construct a \emph{uniform} family of proof-nets encoding Turing Machines; specifically, the dependence on $n$ only affects the number of enclosing \emph{boxes}. A related technical improvement is the simulation of $\textsc{dtime}(n^k)$ in depth~$O(\log k)$ LAL proof-nets. The resulting upper bounds on cut elimination then satisfy the properties of a \emph{first-class} polynomial Turing Machine simulation, where there is a fixed polynomial slowdown in the simulation of any polynomial computation.
}
}
article{Xi:JHOSL-2002
author = {Hongwei Xi}
title = {Dependent Types for Program Termination Verification}
journal = {Higher-Order and Symbolic Computation}
year = {2002}
month = {October}
volume = {15}
pages = {91--131}
churchreport = {yes}
abstract = {
Program termination verification is a challenging research subject of significant practical importance. While there is already a rich body of literature on this subject, it is still undeniably a difficult task to design a termination checker for a realistic programming language that supports general recursion. In this paper, we present an approach to program termination verification that makes use of a form of dependent types developed in Dependent ML (DML), demonstrating a novel application of such dependent types to establishing a liveness property. We design a type system that enables the programmer to supply metrics for verifying program termination and prove that every well-typed program in this type system is terminating. We also provide realistic examples, which are all verified in a prototype implementation, to support the effectiveness of our approach to program termination verification as well as its unobtrusiveness to programming. The main contribution of the paper lies in the design of an approach to program termination verification that smoothly combines types with metrics, yielding a type system capable of guaranteeing program termination that supports a general form of recursion (including mutual recursion), higher-order functions, algebraic datatypes, and polymorphism.
}
}
inproceedings{Xi:ASIA-PEPM-2002
author = {Hongwei Xi}
title = {Unifying Object-Oriented Programming with Typed Functional Programming}
booktitle = {Proceedings of ASIAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (ASIA-PEPM)}
year = {2002}
month = {September}
pages = {117--125}
address = {Aizu-Wakamatsu, Japan}
churchreport = {yes}
abstract = {
The wide practice of object-oriented programming in current software construction is evident. Despite extensive studies on typing programming objects, it is still undeniably a challenging research task to design a type system for object-oriented programming that is both effective in capturing program errors and unobtrusive to program construction. In this paper, we present a novel approach to typing objects that makes use of a recently invented notion of guarded dependent datatypes. We show that our approach can address various difficult issues (e.g., handling ``self'' type, typing binary methods, etc.) in a simple and natural type-theoretical manner, remedying the deficiencies in many existing approaches to typing objects.
}
}
inproceedings{Chen:PPDP-2002
author = {Gang Chen}
title = {Full Integration of Subtyping and If-expression}
crossref = {PPDP2002}
year = {2002}
keywords = {Subtyping, If-expression, Java, Programming Language}
churchreport = {yes}
abstract = {
The combination of full subsumption and conditional expressions is a challenging problem, because in such a system a term might not have a type which is a representative of all types of the term. Therefore, the traditional type checking technique fails. Due to this difficulty, the Java typing rule for if-expressions uses only a restricted form of subtyping. In this paper, we propose a type system which includes both of the above mentioned features and enjoys decidable type checking. We also show that the system has the subject reduction property. It is expected that this technique could be used to improve the type system of Java with full subsumption.
}
}
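For contrast with the difficulty the abstract describes: when every pair of types does have a least upper bound, typing an if-expression is straightforward, as in the toy Haskell lattice below. This is a minimal sketch over a hypothetical mini-language; the paper's point is precisely that full subsumption need not provide such representative types.

    -- A toy subtype lattice in which every pair of types has a join.
    data Ty = TInt | TReal | TBool | TTop
      deriving (Eq, Show)

    subty :: Ty -> Ty -> Bool
    subty a b | a == b = True
    subty TInt TReal   = True
    subty _    TTop    = True
    subty _    _       = False

    -- The least upper bound used to type "if c then e1 else e2":
    -- the if-expression gets the join of its branches' types.
    join :: Ty -> Ty -> Ty
    join a b
      | subty a b = b
      | subty b a = a
      | otherwise = TTop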
techreport{Amt+Wel:MPD-2002
author = {Torben Amtoft and J. B. Wells}
title = {Mobile Processes with Dependent Communication Types and Singleton Types for Names and Capabilities}
month = {December}
year = {2002}
institution = {Kansas State University, Department of Computing and Information Sciences}
number = {2002-3}
xnote = {Torben has replaced the file supposed linked to as KSU CIS TR 2002-3 with a later version which was submitted to SAS '03. The original KSU CIS TR 2002-03 is no longer available!!! It might be possible to reconstruct something like it via CVS time travel, but it would be a pain. Torben, 02/07/04: the link now again points to the original report!!}
dartreport = {yes}
churchreport = {yes}
abstract = {
There are many calculi for reasoning about concurrent communicating processes which have locations and are mobile. Examples include the original Ambient Calculus and its many variants, the Seal Calculus, the MR-calculus, the M-calculus, etc. It is desirable to use such calculi to describe the behavior of mobile agents. It seems reasonable that mobile agents should be able to follow non-predetermined paths and to carry non-predetermined types of data from location to location, collecting and delivering this data using communication primitives. Previous type systems for ambient calculi make this difficult or impossible to express, because these systems (if they handle communication at all) have always globally mapped each ambient name to a type governing the type of values that can be communicated locally or with adjacent locations, and this type can not depend on where the ambient has traveled. \par We present a new type system where there are no global assignments of types to ambient names. Instead, the type of an ambient process $P$ not only indicates what can be locally communicated but also gives an upper bound on the possible ambient nesting shapes of any process $P'$ to which $P$ can evolve, as well as the possible capabilities and names that can be exhibited or communicated at each location. Because these shapes can depend on which capabilities and names are actually communicated, the types support this with explicit dependencies on communication. This system is thus the first type system for an ambient calculus which provides type polymorphism of the kind that is usually present in polymorphic type systems for the $\lambda$-calculus.
}
}
inproceedings{Carlier:ITRS-2002
itrspaper = {yes}
author = {Sébastien Carlier}
title = {Polar Type Inference with Intersection Types and $\omega$}
crossref = {ITRS2002}
year = {2002}
churchreport = {yes}
noteremark = {The note field is blank to suppress the note field from ITRS2002.}
note = {}
abstract = {
We present a type system featuring intersection types and $\omega$, a type constant which is assigned to unused terms. We exploit and extend the technology of expansion variables from the recently developed System I, with which we believe our system shares many interesting properties, such as strong normalization, principal typings, and compositional analysis. Our presentation emphasizes a polarity discipline and shows its benefits. We syntactically distinguish positive and negative types, and give them different interpretations. We take the point of view that the interpretation of a type is intrinsic to it, and should not change implicitly when it appears at the opposite polarity. Our system is the result of a process which started with an extension of Trevor Jim's Polar Type System.
}
}
inproceedings{Kfo+Was+Wel:ITRS-2002
itrspaper = {yes}
author = {Assaf J. Kfoury and Geoff Washburn and J. B. Wells}
title = {Implementing Compositional Analysis Using Intersection Types With Expansion Variables}
crossref = {ITRS2002}
year = {2002}
noteremark = {The note field is blank to suppress the note field from ITRS2002.}
note = {}
dartreport = {yes}
churchreport = {yes}
nonexistentdvi = {http://www.church-project.org/reports/electronic/Kfo+Was+Wel:ITRS-2002.dvi}
elsevierurlb = {http://www.elsevier.nl/gej-ng/31/29/23/125/51/show/Products/notes/index.htt}
elsevierurla = {http://www.elsevier.nl/locate/entcs/volume70.html}
abstract = {
A program analysis is \emph{compositional} when the analysis result for a particular program fragment is obtained solely from the results for its immediate subfragments via some composition operator. This means the subfragments can be analyzed independently in any order. Many commonly used program analysis techniques (in particular, most abstract interpretations and most uses of the Hindley/Milner type system) are not compositional and require the entire text of a program for sound and complete analysis. \par System I is a recent type system for the pure $\lambda$-calculus with intersection types and the new technology of expansion variables. System I supports compositional analysis because it has the \emph{principal typings} property and an algorithm based on the new technology of $\beta$-unification has been developed that finds these principal typings. In addition, for each natural number $k$, typability in the rank-$k$ restriction of System I is decidable, so a complete and terminating analysis algorithm exists for the rank-$k$ restriction. \par This paper presents new understanding that has been gained from working with multiple implementations of System I and $\beta$-unification-based analysis algorithms. The previous literature on System I presented the type system in a way that helped in proving its more important theoretical properties, but was not as easy for implementers to follow as it could be. This paper provides a presentation of many aspects of System I that should be clearer as well as a discussion of important implementation issues.
}
}
article{Amt+Kfo+Per:CL-2002
author = {Torben Amtoft and Assaf J. Kfoury and Santiago M. Pericas-Geertsen}
title = {Orderly Communication in the Ambient Calculus}
journal = {Computer Languages}
publisher = {Elsevier Science}
year = {2002}
volume = {28}
pages = {29--60}
churchreport = {yes}
abstract = {
The Ambient Calculus (henceforth, {\bf AC}) was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We present a type system for {\bf AC} that allows the type of exchanged data within the same ambient to vary over time. Our type system assigns what we call \emph{behaviors} to processes; a denotational semantics of behaviors is proposed, here called \emph{trace semantics}, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our typed version of {\bf AC}. Based on techniques borrowed from finite automata theory, type checking of fully type-annotated processes is shown to be decidable. We show that the typed version of {\bf AC} originally proposed by Cardelli and Gordon can be naturally embedded into our typed version of {\bf AC}.
}
}
article{Wel+Dim+Mul+Tur:JFP-2002-v12n3
itrspaper = {yes}
author = {J. B. Wells and Allyn Dimock and Robert Muller and Franklyn Turbak}
title = {A Calculus with Polymorphic and Polyvariant Flow Types}
journal = {J.~Funct.\ Programming}
year = {2002}
volume = {12}
number = {3}
month = {May}
pages = {183--227}
note = {Supersedes \cite{Wel+Dim+Mul+Tur:TAPSOFT-1997}}
churchreport = {yes}
documenturl = {http://www.church-project.org/reports/Wel+Dim+Mul+Tur:JFP-2002-v12n3.html}
abstract = {
We present $\lambda^{\mathrm{CIL}}$, a typed $\lambda$-calculus which serves as the foundation for a typed intermediate language for optimizing compilers for higher-order polymorphic programming languages. The key innovation of $\lambda^{\mathrm{CIL}}$ is a novel formulation of intersection and union types and flow labels on both terms and types. These \emph{flow types} can encode polyvariant control and data flow information within a polymorphically typed program representation. Flow types can guide a compiler in generating customized data representations in a strongly typed setting. Since $\lambda^{\mathrm{CIL}}$ enjoys confluence, standardization, and subject reduction properties, it is a valuable tool for reasoning about programs and program transformations.
}
}
inproceedings{Wells:ICALP-2002
itrspaper = {yes}
author = {J. B. Wells}
title = {The Essence of Principal Typings}
crossref = {ICALP2002}
year = {2002}
documenturl = {http://www.church-project.org/reports/Wells:ICALP-2002.html}
dartreport = {yes}
churchreport = {yes}
pages = {913--925}
springerurl = {http://link.springer.de/link/service/series/0558/bibs/2380/23800913.htm}
abstract = {
Let $S$ be some type system. A \emph{typing} in $S$ for a typable term $M$ is the collection of all of the information other than $M$ which appears in the final judgement of a proof derivation showing that $M$ is typable. For example, suppose there is a derivation in $S$ ending with the judgement $A \vdash M \mathrel{:} \tau$ meaning that $M$ has result type $\tau$ when assuming the types of free variables are given by $A$. Then $(A,\tau)$ is a typing for $M$. \par A \emph{principal typing} in $S$ for a term $M$ is a typing for $M$ which somehow represents all other possible typings in $S$ for $M$. It is important not to confuse this notion with the weaker notion of \emph{principal type} often mentioned in connection with the Hindley/Milner type system. Previous definitions of principal typings for specific type systems have involved various syntactic operations on typings such as \emph{substitution} of types for type variables, \emph{expansion}, \emph{lifting}, etc. \par This paper presents a new general definition of principal typings which does not depend on the details of any particular type system. This paper shows that the new general definition correctly generalizes previous system-dependent definitions. This paper explains why the new definition is the right one. Furthermore, the new definition is used to prove that certain polymorphic type systems using $\forall$-quantifiers, namely System F and the Hindley/Milner system, do not have principal typings.
}
}
unpublished{Wells:EPT-2002
itrspaper = {yes}
author = {J. B. Wells}
title = {The Essence of Principal Typings}
documenturl = {http://www.church-project.org/reports/Wells:EPT-2002.html}
dartreport = {yes}
churchreport = {yes}
year = {2002}
note = {A longer version of \cite{Wells:ICALP-2002} with a 3-page appendix with extra proofs, a page of extra discussion of non-related work, and other minor changes}
abstract = {
Let $S$ be some type system. A \emph{typing} in $S$ for a typable term $M$ is the collection of all of the information other than $M$ which appears in the final judgement of a proof derivation showing that $M$ is typable. For example, suppose there is a derivation in $S$ ending with the judgement $A \vdash M \mathrel{:} \tau$ meaning that $M$ has result type $\tau$ when assuming the types of free variables are given by $A$. Then $(A,\tau)$ is a typing for $M$. \par A \emph{principal typing} in $S$ for a term $M$ is a typing for $M$ which somehow represents all other possible typings in $S$ for $M$. It is important not to confuse this notion with the weaker notion of \emph{principal type} often mentioned in connection with the Hindley/Milner type system. Previous definitions of principal typings for specific type systems have involved various syntactic operations on typings such as \emph{substitution} of types for type variables, \emph{expansion}, \emph{lifting}, etc. \par This paper presents a new general definition of principal typings which does not depend on the details of any particular type system. This paper shows that the new general definition correctly generalizes previous system-dependent definitions. This paper explains why the new definition is the right one. Furthermore, the new definition is used to prove that certain polymorphic type systems using $\forall$-quantifiers, namely System F and the Hindley/Milner system, do not have principal typings.
}
}
article{Wel+Haa:IAC-ta
itrspaper = {yes}
darttodo = {Mention this.}
author = {J. B. Wells and Christian Haack}
title = {Branching Types}
journal = {Inform.\ \& Comput.}
year = {200X}
xvolume = {}
xnumber = {}
xpages = {}
xmonth = {}
note = {Completely supersedes \cite{Wel+Haa:ESOP-2002}. Accepted subject to revisions}
churchreport = {yes}
abstract = {
Although systems with intersection types have many unique capabilities, there has never been a fully satisfactory explicitly typed system with intersection types. We introduce and prove the basic properties of $\lambda^{\mathrm{B}}$, a typed $\lambda$-calculus with \emph{branching types} and types with quantification over \emph{type selection parameters}. The new system $\lambda^{\mathrm{B}}$ is an explicitly typed system with the same expressiveness as a system with intersection types. Typing derivations in $\lambda^{\mathrm{B}}$ use branching types to squash together what would be separate parallel derivations in earlier systems with intersection types.
}
}
inproceedings{Wel+Haa:ESOP-2002
itrspaper = {yes}
author = {J. B. Wells and Christian Haack}
title = {Branching Types}
year = {2002}
crossref = {ESOP2002}
pages = {115--132}
documenturl = {http://www.church-project.org/reports/Wel+Haa:ESOP-2002.html}
springerurl = {http://link.springer.de/link/service/series/0558/bibs/2305/23050115.htm}
churchreport = {yes}
note = {Completely superseded by \cite{Wel+Haa:IAC-ta}}
abstract = {
Although systems with intersection types have many unique capabilities, there has never been a fully satisfactory explicitly typed system with intersection types. We introduce $\lambda^{\mathrm{B}}$ with \emph{branching types} and types which are quantified over \emph{type selectors} to provide an explicitly typed system with the same expressiveness as a system with intersection types. Typing derivations in $\lambda^{\mathrm{B}}$ effectively squash together what would be separate parallel derivations in earlier systems with intersection types.
}
}
unpublished{Amt:CauMov-2001
author = {Torben Amtoft}
title = {Causal Type System for Ambient Movements}
note = {Submitted for publication}
year = {2002}
churchreport = {yes}
abstract = {
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We present a type system for the calculus, parameterized by security constraints expressing where a given ambient may reside and where it may be dissolved. A subject reduction property then guarantees that a well-typed process never violates these constraints; additionally it ensures that communicating subprocesses agree on their ``topic of conversation''. Based on techniques borrowed from finite automata theory, type checking of type-annotated processes is decidable. The type system employs a notion of causality in that processes are assigned ``behaviors''. This significantly increases the precision of the analysis and compensates for the lack of ``co-capabilities'' (an otherwise increasingly popular extension to the ambient calculus); also it allows (in contrast to other approaches) an ambient to hold multiple topics of conversation.
}
}
inproceedings{Tur+Wel:PPDP-2001
author = {Franklyn Turbak and J. B. Wells}
title = {Cycle Therapy: A Prescription for Fold and Unfold on Regular Trees}
crossref = {PPDP2001}
year = {2001}
pages = {137--149}
churchreport = {yes}
documenturl = {http://www.church-project.org/reports/Tur+Wel:PPDP-2001.html}
overheadsurl = {http://cs.wellesley.edu/~fturbak/talks/cycle-therapy.pdf}
overheadscomment = {to make printable version: pdf2ps cycle-therapy.pdf,ps then "psnup -b1cm -c -w20.99cm -h29.7cm -W29.7cm -H20.99cm -4 cycle-therapy.ps,-4up"}
abstract = {
Cyclic data structures can be tricky to create and manipulate in declarative programming languages. In a declarative setting, a natural way to view cyclic structures is as denoting \emph{regular trees}, those trees which may be infinite but have only a finite number of distinct subtrees. This paper shows how to implement the \emph{unfold} (\emph{anamorphism}) operator in both eager and lazy languages so as to create cyclic structures when the result is a regular tree, as opposed to merely infinite lazy structures. The usual \emph{fold} (\emph{catamorphism}) operator, when used with a strict combining function on any infinite tree, yields an undefined result. As an alternative, this paper defines and shows how to implement a \emph{cycfold} operator with more useful semantics when used with a strict function on cyclic structures representing regular trees. This paper also introduces an abstract data type (\emph{cycamores}) to simplify the use of cyclic structures representing regular trees in both eager and lazy languages. Implementations of cycamores in both SML and Haskell are presented.
}
}
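In a lazy language, the difference between a merely infinite unfolding and a genuinely cyclic result can be suggested by tying the knot over a finite seed set. The Haskell sketch below is illustrative only; it is not the paper's cycamore ADT or cycfold operator, and the explicit seed list is an assumption of the sketch.

    import qualified Data.Map as Map

    -- Naive unfold: always builds a fresh, possibly infinite list.
    unfoldStream :: (s -> (a, s)) -> s -> [a]
    unfoldStream step s = let (a, s') = step s in a : unfoldStream step s'

    -- When the reachable seeds form a known, finite set, tying the knot
    -- defines each seed's stream once and shares it, so the heap holds a
    -- true cycle instead of an ever-growing unfolding.
    unfoldCyclic :: Ord s => [s] -> (s -> (a, s)) -> s -> [a]
    unfoldCyclic seeds step s0 = table Map.! s0
      where
        table = Map.fromList [ (s, go s) | s <- seeds ]
        go s  = let (a, s') = step s in a : table Map.! s'

For example, take 7 (unfoldCyclic [0,1,2] (\s -> (s, (s+1) `mod` 3)) 0) yields [0,1,2,0,1,2,0], with the three tails shared in the heap.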
phdthesis{Per:PhD-2001
author = {Santiago M. Pericas-Geertsen}
title = {XML-Fluent Mobile Ambients}
institution = {Ph.D. Thesis Comp.\ Sci.\ Dept., Boston Univ.}
school = {Boston University}
year = {2001}
churchreport = {yes}
abstract = {
The Internet is considered to be among the greatest inventions of the twentieth century. Despite its immense importance we, programming language designers, have yet to propose a programming language that can exploit a network that spans the entire globe. In this thesis, we introduce a language for a programming model based on XML-speaking mobile agents. The thesis is divided into three parts: the first part defines the mobility aspect of the language, the second part introduces a typed calculus for manipulating XML documents, and the third part proposes a way of integrating these two calculi. The final goal is to design a typed language in which computation can be carried out by mobile agents exchanging data in the form of XML documents. The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. In Part I, we consider an extension to the Ambient Calculus where ambients are equipped with a set of local channels over which processes can communicate. This extension incorporates certain features from the Pi-Calculus, such as the ability to communicate over higher-order channels, but without violating design principles of the Ambient Calculus (e.g., that two processes are unable to communicate if located in different administrative domains). For the extended calculus, we introduce an appropriate operational semantics and a provably sound type system. We conclude this part by showing an encoding of the Ambient Calculus into our calculus that is both semantics preserving and type preserving. XML is a simple and portable way of representing structured data in the form of trees. It is an ideal language for machine-to-machine (or peer-to-peer) exchange of data because (i) it is portable, so there is no need to worry about things like integer representations (big endian vs.~little endian) or string representations (null terminated vs.~fixed length) and (ii) structured data is easily represented in the form of trees. Interestingly, the ability of XML to represent almost any form of structured data is often an important weakness at the same time: for a data exchange to be successful, a producer and a consumer must agree on the structure of the exchanged documents so that the latter can find/extract what it needs. This problem can be addressed by the use of a type system. A type system is needed in order to keep track of the meta-data (i.e.~data about the data) which can be used by a consumer as a guideline for data extraction. A number of type systems for XML have been proposed in recent years. Perhaps surprisingly, especially if we look at the evolution of programming languages, the use of a type system in XML (as well as in languages for manipulating XML documents) is not mandatory. Domain-specific languages for XML proposed in the last few years are all, with the exception of XDuce, dynamically typed. Experience has shown us that statically-typed languages are better suited for real-world applications than dynamically-typed ones. Despite requiring more from the programmer's perspective (at least in the absence of a type-inference engine), the benefits greatly outweigh the requirements. Specifically, the use of a statically-typed language ensures that the final product is free of an important class of errors (so-called type errors).
In Part II, we present the dl-calculus, a statically-typed calculus for manipulating XML documents, and show how it can be integrated with existing XML technology by means of two algorithms for importing and exporting documents into the calculus.
}
}
techreport{Amt+Con+Dan+Mal:BRICS-2001
author = {Torben Amtoft and Charles Consel and Olivier Danvy and Karoline Malmkj{\ae}r}
title = {The Abstraction and Instantiation of String-Matching Programs}
institution = {DAIMI, Department of Computer Science, University of Aarhus}
year = {2001}
churchreport = {yes}
type = {Technical Report}
number = {BRICS RS-01-12}
abstract = {
We consider a naive, quadratic string matcher testing whether a pattern occurs in a text; we equip it with a cache mediating its access to the text; and we abstract the traversal policy of the pattern, the cache, and the text. We then specialize this abstracted program with respect to a pattern, using the off-the-shelf partial evaluator Similix. Instantiating the abstracted program with a left-to-right traversal policy yields the linear-time behavior of Knuth, Morris and Pratt's string matcher. Instantiating it with a right-to-left policy yields the linear-time behavior of Boyer and Moore's string matcher.
}
month = {April}
note = {An extended version of an essay which appears in \emph{The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones}, editors T. Mogensen, D. Schmidt and I. H. Sudborough, Springer LNCS 2566, pp. 332--357, 2002.}
}
inproceedings{Amt+Kfo+Per:ESOP-2001
author = {Torben Amtoft and Assaf J. Kfoury and Santiago M. Pericas-Geertsen}
title = {What are Polymorphically-Typed Ambients?}
booktitle = {ESOP 2001, Genova}
pages = {206--220}
year = {2001}
churchreport = {yes}
editor = {David Sands}
volume = {2028}
series = {LNCS}
month = {April}
publisher = {Springer-Verlag}
note = {This downloadable extended summary is more detailed than the article in the ESOP proceedings. A full report is \cite{Amt+Kfo+Per:PTA-2000}}
abstract = {
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We consider an Ambient Calculus where ambients transport and exchange programs rather than just inert data. We propose different senses in which such a calculus can be said to be \emph{polymorphically typed}, and design accordingly a \emph{polymorphic} type system for it. Our type system assigns \emph{types} to embedded programs and what we call \emph{behaviors} to processes; a denotational semantics of behaviors is then proposed, here called \emph{trace semantics}, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our polymorphically-typed calculus. Based on techniques borrowed from finite automata theory, type-checking of fully type-annotated processes is shown to be decidable. Our polymorphically-typed calculus is a conservative extension of the typed Ambient Calculus originally proposed by Cardelli and Gordon.
}
}
inproceedings{Dim+Wes+Mul+Tur+Wel:ICFP-2001
itrspaper = {yes}
author = {Allyn Dimock and Ian Westmacott and Robert Muller and Franklyn Turbak and J. B. Wells}
xpdli01title = {Flow-Based Function Customization in the Presence of Representation Pollution}
title = {Functioning without Closure: Type-Safe Customized Function Representations for Standard ML}
crossref = {ICFP2001}
year = {2001}
comment = {change documenturl when this is added as church paper}
olddocumenturl = {http://www.macs.hw.ac.uk/~jbw/papers/#Dim+Wes+Mul+Tur+Wel:ICFP-2001}
documenturl = {http://www.church-project.org/reports/Dim+Wes+Mul+Tur+Wel:ICFP-2001.html}
churchreport = {yes}
abstract = {
The CIL compiler for core Standard ML compiles whole SML programs using a novel typed intermediate language that supports the generation of type-safe customized data representations. In this paper, we present empirical data comparing the relative efficacy of several different customization strategies for function representations. We develop a cost model to interpret dynamic counts of operations required for each strategy. One of our results is data showing that compiling with a function representation strategy that makes customization decisions based on the presence or absence of free variables of a function gives a 26\% improvement over a uniform closure representation. We also present data on the relative effectiveness of various representation pollution removal strategies. We find that for the types of representation pollution detected by our compiler, pollution removal rarely provides any additional benefit for the representation strategies and benchmarks considered.
}
pages = {14--25}
isbn = {1-58113-415-0}
}
techreport{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-TR
itrspaper = {yes}
author = {Allyn Dimock and Ian Westmacott and Robert Muller and Franklyn Turbak and J. B. Wells and Jeffrey Considine}
title = {Program Representation Size in an Intermediate Language with Intersection and Union Types}
month = {March}
year = {2001}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
longinstitution = {Computer Science Department, Boston University}
number = {BUCS-TR-2001-02}
note = {This is a version of \cite{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS} extended with an appendix describing the CIL typed intermediate language.}
churchreport = {yes}
pdfremark = {non-scalable fonts!}
abstract = {
The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
}
}
article{JLC-2000-v10n3
author = {}
title = {}
journal = {J.~Logic Comput.}
year = {2000}
editor = {Fairouz Kamareddine and Jan Willem Klop}
xeditor = {"publisher" field is not used by BibTeX style for "article"!!!}
publisher = {Oxford University Press}
volume = {10}
number = {3}
note = {Special issue on Type Theory and Term Rewriting. Kamareddine and Klop (editors)}
}
techreport{Amt+Kfo+Per:PTA-2000
title = {What Are Polymorphically-Typed Ambients?}
author = {Torben Amtoft and Assaf J. Kfoury and Santiago M. Pericas-Geertsen}
month = {December}
year = {2000}
number = {BUCS-TR-2000-021}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
longinstitution = {Computer Science Department, Boston University}
abstract = {
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We consider an Ambient Calculus where ambients transport and exchange programs rather than just inert data. We propose different senses in which such a calculus can be said to be \emph{polymorphically typed}, and design accordingly a \emph{polymorphic} type system for it. Our type system assigns \emph{types} to embedded programs and what we call \emph{behaviors} to processes; a denotational semantics of behaviors is then proposed, here called \emph{trace semantics}, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our polymorphically-typed calculus. Based on techniques borrowed from finite automata theory, type-checking of fully type-annotated processes is shown to be decidable. Our polymorphically-typed calculus is a conservative extension of the typed Ambient Calculus originally proposed by Cardelli and Gordon.
}
churchreport = {yes}
}
inproceedings{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000
itrspaper = {yes}
author = {Allyn Dimock and Ian Westmacott and Robert Muller and Franklyn Turbak and J. B. Wells and Jeffrey Considine}
title = {Space Issues in Compiling with Intersection and Union Types}
crossref = {TIC2000}
year = {2000}
churchreport = {yes}
documenturl = {http://www.church-project.org/reports/Dim+Wes+Mul+Tur+Wel+Con:TIC-2000.html}
abstract = {
The CIL compiler for core Standard ML compiles whole programs using the CIL typed intermediate language with flow labels and intersection and union types. Flow labels embed flow information in the types and intersection and union types support precise polyvariant type and flow information, without the use of type-level abstraction or quantification. \par Compile-time representations of CIL types and terms are potentially large compared to those for similar types and terms in systems based on quantified types. The listing-based nature of intersection and union types, together with flow label annotations on types, contribute to the size of CIL types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce multiple representation conventions, but incurs a compile-time space cost. \par This paper presents empirical data on the compile-time space costs of using CIL. These costs can be made tractable using standard hash-consing techniques. Surprisingly, the duplicating nature of CIL has acceptable compile-time space performance in practice on the benchmarks and flow analyses that we have investigated. Increasing the precision of flow analysis can significantly reduce compile-time space costs. There is also preliminary evidence that the flow information encoded in CIL's type system can effectively guide data customization.
}
note = {Superseded by~\cite{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS}}
}
inproceedings{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS
itrspaper = {yes}
author = {Allyn Dimock and Ian Westmacott and Robert Muller and Franklyn Turbak and J. B. Wells and Jeffrey Considine}
title = {Program Representation Size in an Intermediate Language with Intersection and Union Types}
crossref = {TIC2000LNCS}
year = {2000}
pages = {27--52}
churchreport = {yes}
documenturl = {http://www.church-project.org/reports/Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS.html}
xpdf = {http://www.church-project.org/reports/electronic/Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS.pdf.gz}
xpostscript = {http://www.church-project.org/reports/electronic/Dim+Wes+Mul+Tur+Wel+Con:TIC-2000-LNCS.ps.gz}
xnote = {Supersedes~\cite{Dim+Wes+Mul+Tur+Wel+Con:TIC-2000}}
abstract = {
The CIL compiler for core Standard ML compiles whole programs using the CIL typed intermediate language with flow labels and intersection and union types. Flow labels embed flow information in the types and intersection and union types support precise polyvariant type and flow information, without the use of type-level abstraction or quantification. \par Compile-time representations of CIL types and terms are potentially large compared to those for similar types and terms in systems based on quantified types. The listing-based nature of intersection and union types, together with flow label annotations on types, contribute to the size of CIL types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce multiple representation conventions, but incurs a compile-time space cost. \par This paper presents empirical data on the compile-time space costs of using CIL. These costs can be made tractable using standard hash-consing techniques. Surprisingly, the duplicating nature of CIL has acceptable compile-time space performance in practice on the benchmarks and flow analyses that we have investigated. Increasing the precision of flow analysis can significantly reduce compile-time space costs. There is also preliminary evidence that the flow information encoded in CIL's type system can effectively guide data customization.
}
}
inproceedings{Bawden:POPL-2000
author = {Alan Bawden}
title = {First-class Macros Have Types}
crossref = {POPL2000}
year = {2000}
pages = {133--141}
churchreport = {yes}
documenturl = {http://www.linearity.org/bawden/mtt/}
abstract = {
In modern Scheme, a macro captures the lexical environment where it is defined. This creates an opportunity for extending Scheme so that macros are first-class values. The key to achieving this goal, while preserving the ability to compile programs into reasonable code, is the addition of a type system. Many interesting things can be done with first-class macros, including the construction of a useful module system in which modules are also first-class.
}
}
inproceedings{Law+Mai:ESOP-2000
author = {Julia L. Lawall and Harry G. Mairson}
title = {Sharing continuations: proofnets for languages with explicit control}
crossref = {ESOP2000}
year = {2000}
pages = {245--259}
abstract = {
We introduce graph reduction technology that implements functional languages with control, such as Scheme with \texttt{call/cc}, where continuations can be manipulated explicitly as values, and can be optimally reduced in the sense of Lévy. The technology is founded on \emph{proofnets} for multiplicative-exponential linear logic, extending the techniques originally proposed by Lamping, where we adapt the continuation-passing style transformation to yield a new understanding of sharable values. Confluence is maintained by returning multiple answers to a (shared) continuation. \par \emph{Proofnets} provide a \emph{concurrent} version of linear logic proofs, eliminating structurally irrelevant sequentialization, and ignoring asymmetric distinctions between inputs and outputs---dually, expressions and continuations. While Lamping's graphs and their variants encode an embedding of intuitionistic logic into linear logic, our construction implicitly contains an embedding of classical logic into linear logic. \par We propose a family of translations, produced uniformly by beginning with a continuation-passing style semantics for the languages, employing standard codings into proofnets using call-by-value, call-by-name---or hybrids of the two---to locate proofnet \emph{boxes}, and converting the proofnets to direct style. The resulting graphs can be reduced simply (cut elimination for linear logic), have a consistent semantics that is preserved by reduction (geometry of interaction, via the so-called \emph{context semantics}), and allow shared, incremental evaluation of continuations (optimal reduction).
}
churchreport = {yes}
}
phdthesis{Stewart:fortcf
author = {C. A. Stewart}
title = {On the formulae-as-types correspondence for classical logic}
school = {Oxford University}
year = {2000}
churchreport = {yes}
documenturl = {http://www.linearity.org/cas/thesis}
abstract = {
The Curry--Howard correspondence states the equivalence between the constructions implicit in intuitionistic logic and those described in the simply-typed lambda-calculus. It is an insight of great importance in theoretical computer science, and is fundamental in modern approaches to constructive type theory. The possibility of a similar formulae-as-types correspondence for classical logic looks to be a seminal development in this area, but whilst promising results have been achieved, there does not appear to be much agreement about what is at stake in claiming that such a correspondence exists. Consequently, much work in this area suffers from several weaknesses; in particular the status of the new rules needed to describe the distinctively classical inferences is unclear. We show how to situate the formulae-as-types correspondence within the proof-theoretic account of logical semantics arising from the work of Michael Dummett and Dag Prawitz, and demonstrate that the admissibility of Prawitz's inversion principle, which we argue should be strengthened, is essential to the good behaviour of intuitionistic logic. By regarding the rules which determine the deductive strength of classical logic as structural rules, as opposed to the logical rules associated with specific logical connectives, we extend Prawitz's inversion principle to classical propositional logic, formulated in a theory of Parigot's lambda-mu calculus with eta expansions. We then provide a classical analogue of a subsystem of Martin-L\"of's type theory corresponding to Peano Arithmetic and show its soundness, appealing to an extension of Tait's reducibility method. Our treatment is the first treatment of induction in classical arithmetic that truly falls under the aegis of the formulae-as-types correspondence, as it is the first that is consistent with the intensional reading of propositional equality.
}
}
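A standard observation makes the thesis's subject concrete: under the formulae-as-types reading, first-class control operators inhabit distinctively classical propositions. The Haskell sketch below (using callCC from the mtl package's Control.Monad.Cont) exhibits Peirce's law; the thesis itself works in Parigot's lambda-mu calculus, not Haskell.

    import Control.Monad.Cont

    -- Peirce's law, ((A -> B) -> A) -> A, is the classical axiom that
    -- intuitionistic logic lacks. Read through Curry-Howard, it is
    -- inhabited exactly by a control operator such as call/cc.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC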
article{Kfoury:JLC-2000-v10n3
itrspaper = {yes}
author = {Assaf J. Kfoury}
title = {A Linearization of the Lambda-Calculus}
crossref = {JLC-2000-v10n3}
year = {2000}
pages = {411--436}
documenturl = {http://www.church-project.org/reports/Kfoury:JLC-2000-v10n3.html}
churchreport = {yes}
abstract = {
We embed the standard $\lambda$-calculus, denoted $\Lambda$, into two larger $\lambda$-calculi, denoted $\Lambda^\land$ and $\&\Lambda^\land$. The standard notion of $\beta$-reduction for $\Lambda$ corresponds to two new notions of reduction, $\beta^\land$ for $\Lambda^\land$ and $\&\beta^\land$ for $\&\Lambda^\land$. A distinctive feature of our new calculus $\Lambda^\land$ (resp., $\&\Lambda^\land$) is that, in every function application, an argument is used at most once (resp., exactly once) in the body of the function. We establish various connections between the three notions of reduction, $\beta$, $\beta^\land$, and $\&\beta^\land$. As a consequence, we provide an alternative framework to study the relationship between $\beta$-weak normalization and $\beta$-strong normalization, and give a new proof of the oft-mentioned equivalence between $\beta$-strong normalization of standard $\lambda$-terms and typability in a system of ``intersection types''.
}
}
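The calculi's distinctive feature, that each function uses its argument at most once (or exactly once), is easy to state as a predicate on plain lambda terms. A minimal Haskell sketch under an assumed AST; the paper's calculi $\Lambda^\land$ and $\&\Lambda^\land$ carry more structure than this.

    data Term = Var String | Lam String Term | App Term Term

    -- Free occurrences of x in a term.
    occurrences :: String -> Term -> Int
    occurrences x (Var y)   = if x == y then 1 else 0
    occurrences x (Lam y m) = if x == y then 0 else occurrences x m
    occurrences x (App m n) = occurrences x m + occurrences x n

    -- Every binder's variable is used at most once in its body
    -- (replace <= 1 with == 1 for the "exactly once" discipline).
    atMostOnce :: Term -> Bool
    atMostOnce (Var _)   = True
    atMostOnce (Lam x m) = occurrences x m <= 1 && atMostOnce m
    atMostOnce (App m n) = atMostOnce m && atMostOnce n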
article{Bug+Per:IAC-2000
author = {Michele Bugliesi and Santiago M. Pericas-Geertsen}
title = {Type Inference for Variant Object Types}
journal = {Inform.\ \& Comput.}
year = {2000}
xurl = {http://cs-people.bu.edu/santiago/Papers/Bug+Per:TR-2000.pdf}
churchreport = {yes}
abstract = {
Existing type systems for object calculi are based on invariant subtyping. Subtyping invariance is required for soundness of static typing in the presence of method overrides, but it often limits the expressive power of the type system. Flexibility of static typing can be recovered in different ways: in first-order systems, by the adoption of object types with \textit{variance annotations}; in second-order systems, by resorting to \textit{Self types}. Type inference is known to be P-complete for first-order systems of finite and recursive object types, and NP-complete for a restricted version of Self types. The complexity of type inference for systems with variance annotations is as yet unknown. This paper presents a new object type system based on the notion of \emph{Split types}, a form of object types where every method is assigned two types, namely, an update type and a select type. The subtyping relation that arises for Split types is \emph{variant} and, as a result, subtyping can be performed both in \emph{width} and in \emph{depth}. The new type system generalizes all the existing first-order type systems for objects, including systems based on variance annotations. Interestingly, the additional expressive power does not affect the complexity of the type inference problem, as we show by presenting an $O(n^3)$ inference algorithm.
}
}
inproceedings{Bug+Per:FOOL-2000
author = {Michele Bugliesi and Santiago M. Pericas-Geertsen}
title = {Depth Subtyping and Type Inference for Object Calculi}
crossref = {FOOL2000}
year = {2000}
xurl = {http://cs-people.bu.edu/santiago/Papers/Bug+Per:EA-FOOL-2000.pdf}
abstract = {
We present a new type system based on the notion of Split types. In this system, every method is assigned two types, namely, an update type and a select type. We show that the system of Split types is strictly more powerful than the system of recursive types and that type inference remains decidable and feasible. We include a number of interesting examples, not typable with recursive types, that are typable with Split types. In particular, we demonstrate that the subtyping relation needed to encode the $\lambda$-calculus into Abadi and Cardelli's $\varsigma$-calculus holds. We also present a polynomial-time algorithm that infers Split types and show that it is sound and complete with respect to the type system. We conclude our presentation by relating the typing power of Split types to the typing power of other systems, such as the system of recursive types with variance annotations and the system of Self types.
}
churchreport = {yes}
}
inproceedings{Amt+Tur:Faithful-ESOP-2000
itrspaper = {yes}
title = {Faithful Translations between Polyvariant Flows and Polymorphic Types}
author = {Torben Amtoft and Franklyn Turbak}
crossref = {ESOP2000}
year = {2000}
pages = {26--40}
documenturl = {http://www.church-project.org/reports/Amt+Tur:Faithful-ESOP-2000.html}
abstract = {
Recent work has shown equivalences between various type systems and flow logics. Ideally, the translations upon which such equivalences are based should be \emph{faithful} in the sense that information is not lost in round-trip translations from flows to types and back or from types to flows and back. Building on the work of Nielson \& Nielson and of Palsberg \& Pavlopoulou, we present the first faithful translations between a class of finitary polyvariant flow analyses and a type system supporting polymorphism in the form of intersection and union types. Additionally, our flow/type correspondence solves several open problems posed by Palsberg \& Pavlopoulou: (1) it expresses call-string-based polyvariance (such as $k$-CFA) as well as argument-based polyvariance; (2) it enjoys a subject reduction property for flows as well as for types; and (3) it supports a flow-oriented perspective rather than a type-oriented one.
}
churchreport = {yes}
}
techreport{Amt+Tur:Faithful-TR-2000
itrspaper = {yes}
title = {Faithful Translations between Polyvariant Flows and Polymorphic Types}
author = {Torben Amtoft and Franklyn Turbak}
year = {2000}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
longinstitution = {Computer Science Department, Boston University}
number = {BUCS-TR-2000-01}
abstract = {
Recent work has shown equivalences between various type systems and flow logics. Ideally, the translations upon which such equivalences are based should be \emph{faithful} in the sense that information is not lost in round-trip translations from flows to types and back or from types to flows and back. Building on the work of Nielson \& Nielson and of Palsberg \& Pavlopoulou, we present the first faithful translations between a class of finitary polyvariant flow analyses and a type system supporting polymorphism in the form of intersection and union types. Additionally, our flow/type correspondence solves several open problems posed by Palsberg \& Pavlopoulou: (1) it expresses call-string-based polyvariance (such as $k$-CFA) as well as argument-based polyvariance; (2) it enjoys a subject reduction property for flows as well as for types; and (3) it supports a flow-oriented perspective rather than a type-oriented one.
}
note = {Expanded full report of \cite{Amt+Tur:Faithful-ESOP-2000}. To appear Fall 2000, but a \emph{DRAFT} version is available.}
churchreport = {yes}
}
inproceedings{Mac+Tur:Link-time-ESOP-2000
title = {A Calculus for Link-time Compilation}
author = {Elena Machkasova and Franklyn A. Turbak}
crossref = {ESOP2000}
pages = {260--274}
churchreport = {yes}
year = {2000}
}
techreport{Mac+Tur:Link-time-TR-2000
title = {A Calculus for Link-time Compilation}
author = {Elena Machkasova and Franklyn Turbak}
year = {2000}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
longinstitution = {Computer Science Department, Boston University}
note = {Expanded full report of \cite{Mac+Tur:Link-time-ESOP-2000}}
churchreport = {yes}
}
incollection{Kfoury:Rasiowa-memorial
itrspaper = {yes}
author = {Assaf J. Kfoury}
title = {Beta-Reduction as Unification}
year = {1999}
booktitle = {Logic, Algebra, and Computer Science (H. Rasiowa Memorial Conference, December 1996), Banach Center Publication, Volume 46}
pages = {137--158}
publisher = {Springer-Verlag}
editor = {D. Niwinski}
documenturl = {http://www.church-project.org/reports/Kfoury:Rasiowa-memorial.html}
note = {Supersedes~\cite{Kfoury:TTTR-1996a} but omits a few proofs included in the latter}
abstract = {
We define a new unification problem, which we call \emph{$\beta$-unification} and which can be used to characterize the $\beta$-strong normalization of terms in the $\lambda$-calculus. We prove the undecidability of $\beta$-unification, its connection with the system of intersection types, and several of its basic properties.
}
churchreport = {yes}
}
inproceedings{Kfo+Per:LICS-1999
title = {Type Inference for Recursive Definitions}
author = {Assaf J. Kfoury and Santiago M. Pericas-Geertsen}
pages = {119--128}
year = {1999}
crossref = {LICS1999}
xurl = {http://cs-people.bu.edu/santiago/Papers/Kfo+Per:EA-LICS-1999.pdf}
abstract = {
We consider type systems that combine universal types, recursive types, and object types. We study type inference in these systems under a rank restriction, following Leivant's notion of rank. To motivate our work, we present several examples showing how our systems can be used to type programs encountered in practice. We show that type inference in the rank-k system is decidable for k <
}
churchreport = {yes}
}
techreport{Kfo+Per:TIRD-1999
title = {Type Inference for Recursive Definitions}
author = {Assaf J. Kfoury and Santiago M. Pericas-Geertsen}
xurl = {http://cs-people.bu.edu/santiago/Papers/Kfo+Per:TR-1999.pdf}
year = {1999}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
longinstitution = {Computer Science Department, Boston University}
note = {Expanded full report of \cite{Kfo+Per:LICS-1999}}
abstract = {
We consider type systems that combine universal types, recursive types, and object types. We study type inference in these systems under a rank restriction, following Leivant's notion of rank. To motivate our work, we present several examples showing how our systems can be used to type programs encountered in practice. We show that type inference in the rank-k system is decidable for k <
}
churchreport = {yes}
}
techreport{Amt:BRICS-1999
author = {Torben Amtoft}
title = {Partial Evaluation for Constraint-Based Program Analyses}
institution = {BRICS, Dept. of Computer Science, University of Aarhus}
year = {1999}
churchreport = {yes}
number = {BRICS-RS-99-45}
note = {}
abstract = {
We report on a case study in the application of partial evaluation, initiated by the desire to speed up a constraint-based algorithm for control-flow analysis. We designed and implemented a dedicated partial evaluator, able to specialize the analysis with respect to a given constraint graph and thus remove the interpretive overhead, and measured it on Feeley's Scheme benchmarks. Even though the gain turned out to be rather limited, our investigation yielded valuable feedback in that it provided a better understanding of the analysis, leading us to (re)invent an incremental version. We believe this phenomenon to be a quite frequent spinoff from using partial evaluation, since the removal of interpretive overhead makes the flow of control more explicit and hence pinpoints sources of inefficiency. Finally, we observed that partial evaluation in our case yields such regular, low-level specialized programs that they beg for runtime code generation.
}
}
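The kind of speedup sought in this report comes from specializing a program to a known (``static'') input. The toy Haskell example below is the textbook illustration of removing interpretive overhead, not the constraint-based analysis the report actually specializes.

    -- General program: both arguments dynamic.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- The residual program a partial evaluator would emit for the
    -- static input n = 3: the recursion, i.e. the interpretive
    -- overhead, has been unfolded away.
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))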
inproceedings{Kfo+Mai+Tur+Wel:ICFP-1999
itrspaper = {yes}
author = {Assaf J. Kfoury and Harry G. Mairson and Franklyn A. Turbak and J. B. Wells}
title = {Relating Typability and Expressibility in Finite-Rank Intersection Type Systems}
crossref = {ICFP1999}
pages = {90--101}
year = {1999}
abstract = {
We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms. Intersection types allow something of type $\tau_1\land\tau_2$ to be used in some places at type $\tau_1$ and in other places at type $\tau_2$. A \emph{finite-rank} intersection type system bounds how deeply the $\land$ can appear in type expressions. Such type systems enjoy strong normalization, subject reduction, and computable type inference, and they support a pragmatics for implementing parametric polymorphism. As a consequence, they provide a conceptually simple and tractable alternative to the impredicative polymorphism of System F and its extensions, while typing many more programs than the Hindley-Milner type system found in ML and Haskell. \par While type inference is computable at every rank, we show that its complexity grows exponentially as rank increases. Let $\mathbf{K}(0,n)
}
documenturl = {http://www.church-project.org/reports/Kfo+Mai+Tur+Wel:ICFP-1999.html}
churchreport = {yes}
}
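To make the rank restriction concrete, here is one common formulation of Leivant's rank as a function on intersection types, written in Haskell. This is an illustrative assumption; the paper's precise definition may differ in details.

    data Ty = TVar String | Arr Ty Ty | And Ty Ty

    -- rank: a bare intersection has rank at least 1, and an
    -- intersection occurring to the left of an arrow bumps the rank.
    rank :: Ty -> Int
    rank (TVar _)  = 0
    rank (And s t) = maximum [1, rank s, rank t]
    rank (Arr s t) = max (left (rank s)) (rank t)
      where left 0 = 0      -- a simple type on the left is free
            left r = r + 1  -- e.g. (a /\ a) -> b has rank 2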
inproceedings{Kfo+Wel:POPL-1999
itrspaper = {yes}
author = {Assaf J. Kfoury and J. B. Wells}
title = {Principality and Decidable Type Inference for Finite-Rank Intersection Types}
crossref = {POPL1999}
year = {1999}
pages = {161--174}
documenturl = {http://www.church-project.org/reports/Kfo+Wel:POPL-1999.html}
acmurla = {http://portal.acm.org/citation.cfm?id}
acmurlb = {http://doi.acm.org/10.1145/292540.292556}
churchreport = {yes}
note = {Superseded by \cite{Kfo+Wel:TCSB-2004-v311n1-3}}
abstract = {
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable $\lambda$-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System~F, where the restriction to any finite rank of 3 or above has neither principal typings nor decidable type inference. This is also in contrast to earlier presentations of intersection types, where the status (decidable or undecidable) of these properties is unknown for the finite-rank restrictions at 3 and above. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (without rank restrictions). In our system the earlier notion of \emph{expansion} is integrated in the form of \emph{expansion variables}, which are subject to substitution as are ordinary variables. A unification-based type inference algorithm is presented using a new form of unification, $\beta$-unification.
}
}
article{Wells:APAL-1999-v98-no-note
author = {J. B. Wells}
title = {Typability and Type Checking in System F Are Equivalent and Undecidable}
journal = {Ann.\ Pure Appl.\ Logic}
volume = {98}
number = {1--3}
year = {1999}
pages = {111--156}
churchreport = {yes}
documenturl = {http://www.church-project.org/reports/Wells:APAL-1999-v98-no-note.html}
xnote = {Supersedes \cite{Wells:LICS-1994}}
abstract = {
Girard and Reynolds independently invented System~F (a.k.a.\ the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively. Viewing F in the \emph{Curry style}, which associates types with untyped lambda terms, raises the questions of \emph{typability} and \emph{type checking}. Typability asks for a term whether there exists some type it can be given. Type checking asks, for a particular term and type, whether the term can be given that type. The decidability of these problems has been settled for restrictions and extensions of F and related systems, and complexity lower bounds have been determined for typability in F, but this report is the first to resolve whether these problems are decidable for System~F. \par This report proves that type checking in F is undecidable, by a reduction from \emph{semi-unification}, and that typability in F is undecidable, by a reduction from type checking. Because there is an easy reduction from typability to type checking, the two problems are equivalent. The reduction from type checking to typability uses a novel method of constructing lambda terms that simulate arbitrarily chosen type environments. All of the results also hold for the $\lambda I$-calculus.
}
}
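For contrast with the undecidability of Curry-style typability, Church-style annotations make such typings routine. A small Haskell sketch with the RankNTypes extension: the term below is typable in System F but not in Hindley-Milner, because its argument is used at two different types.

    {-# LANGUAGE RankNTypes #-}

    -- In Curry-style F, asking whether the untyped term \f -> (f 1, f True)
    -- has *some* type is an instance of the typability problem this paper
    -- proves undecidable; the explicit annotation below sidesteps the search.
    selfApp :: (forall a. a -> a) -> (Int, Bool)
    selfApp f = (f 1, f True)

    main :: IO ()
    main = print (selfApp id)  -- prints (1,True)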
inproceedings{Ong+Ste:curhff
author = {C.-H. L. Ong and C. A. Stewart}
title = {A Curry-Howard foundation for functional computation with control}
pages = {215--217}
year = {1997}
crossref = {POPL1997}
churchreport = {yes}
xpostscript = {ftp://achilles.linearity.org/pub/cas/popl97.ps}
documenturl = {http://www.linearity.org/cas/abstracts/popl97.html}
abstract = {
We introduce the type theory lambda-mu-v, a call-by-value variant of Parigot's lambda-mu calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalising, and definitional equality of the type theory is consistent, compatible with cut, congruent, and decidable. The attendant call-by-value programming language mu-PCF-v is obtained from lambda-mu-v by augmenting it with basic arithmetic, conditionals, and fixpoints. We study the behavioural properties of mu-PCF-v and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically, the dual lambda-mu-v constructs of naming and mu-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally, they give succinct expression to a kind of generic (forward) `jump' operator, which may be regarded as a unifying control construct for functional computation. Our goal is that lambda-mu-v and mu-PCF-v should be to functional computation with first-class access to the flow of control what the lambda-calculus and PCF are to pure functional programming: lambda-mu-v gives the logical basis via the Curry-Howard correspondence, and mu-PCF-v is a prototypical language, albeit in a purified form.
}
}
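The ``generic forward jump'' reading of control can be seen in miniature with first-class continuations in Haskell's Cont monad (from the mtl package); mu-PCF-v itself is a call-by-value calculus, and this sketch is only an analogy.

    import Control.Monad.Cont

    -- Escape from a traversal early: the captured continuation acts
    -- as a forward jump past the remaining multiplications.
    product0 :: [Int] -> Int
    product0 xs = runCont (callCC (\exit -> go exit xs)) id
      where
        go _    []       = pure 1
        go exit (0 : _)  = exit 0                 -- jump out with 0
        go exit (y : ys) = (y *) <$> go exit ys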
inproceedings{Banerjee:ICFP-1997
itrspaper = {yes}
author = {Anindya Banerjee}
title = {A Modular, Polyvariant, and Type-Based Closure Analysis}
crossref = {ICFP1997}
year = {1997}
pages = {1--10}
abstract = {
We observe that the \textit{principal typing} property of a type system is the enabling technology for \textit{modularity} and \textit{separate compilation} \cite{Jim:MIT-LCS-1995-532}. We use this technology to formulate a modular and polyvariant closure analysis, based on rank 2 intersection types annotated with control-flow information. \par Modularity manifests itself in a syntax-directed, annotated-type inference algorithm that can analyse \textit{program fragments} containing free variables: a \textit{principal typing} property is used to formalise it. Polyvariance manifests itself in the separation of different behaviours of the same function at its different uses: this is formalised via the rank 2 intersection types. As the rank 2 intersection type discipline types at least all (core) ML programs, our analysis can be used in the separate compilation of such programs.
}
documenturl = {http://www.church-project.org/reports/Banerjee:ICFP-1997.html}
churchreport = {yes}
}
inproceedings{Wel+Dim+Mul+Tur:TAPSOFT-1997
itrspaper = {yes}
author = {J. B. Wells and Allyn Dimock and Robert Muller and Franklyn Turbak}
title = {A Typed Intermediate Language for Flow-Directed Compilation}
crossref = {TAPSOFT1997}
year = {1997}
pages = {757--771}
abstract = {
We present a typed intermediate language $\lambda^{\mathrm{CIL}}$ for optimizing compilers for function-oriented and polymorphically typed programming languages (e.g., ML). The language $\lambda^{\mathrm{CIL}}$ is a typed lambda calculus with product, sum, intersection, and union types as well as function types annotated with flow labels. A novel formulation of intersection and union types supports encoding flow information in the typed program representation. This flow information can direct optimization.
}
note = {Superseded by \cite{Wel+Dim+Mul+Tur:JFP-2002-v12n3}}
comment = {The disabling of the PDF links has itself been disabled: the fonts do not look that bad when printed, and PDF provides the best way to ensure that the files are printable under Windows. Please do not undo this without discussing it with me (Peter). (Earlier note: the PDF link is disabled because the PDF file has absolutely horrible and unreadable fonts. Please do not enable it until the fonts are fixed.)}
documenturl = {http://www.church-project.org/reports/Wel+Dim+Mul+Tur:TAPSOFT-1997.html}
churchreport = {yes}
}
inproceedings{Dim+Mul+Tur+Wel:ICFP-1997
itrspaper = {yes}
author = {Allyn Dimock and Robert Muller and Franklyn Turbak and J. B. Wells}
title = {Strongly Typed Flow-Directed Representation Transformations}
crossref = {ICFP1997}
year = {1997}
pages = {11--24}
abstract = {
We present a new framework for transforming data representations in a strongly typed intermediate language. Our method allows both value producers (sources) and value consumers (sinks) to support multiple representations, automatically inserting any required code. Specialized representations can be easily chosen for particular source/sink pairs. The framework is based on these techniques: \begin{enumerate} \item \emph{Flow annotated types} encode the ``flows-from'' (source) and ``flows-to'' (sink) information of a flow graph. \item \emph{Intersection and union types} support (a) encoding precise flow information, (b) separating flow information so that transformations can be well typed, (c) automatically reorganizing flow paths to enable multiple representations. \end{enumerate} As an instance of our framework, we provide a function representation transformation that encompasses both closure conversion and inlining. Our framework is adaptable to data other than functions.
}
comment = {The disabling of the PDF links has itself been disabled: the fonts do not look that bad when printed, and PDF provides the best way to ensure that the files are printable under Windows. Please do not undo this without discussing it with me (Peter). (Earlier note: the PDF link is disabled because the PDF file has absolutely horrible and unreadable fonts. Please do not enable it until the fonts are fixed.)}
documenturl = {http://www.church-project.org/reports/Dim+Mul+Tur+Wel:ICFP-1997.html}
churchreport = {yes}
}
inproceedings{Tur+Dim+Mul+Wel:CPPFT-1997
itrspaper = {yes}
author = {Franklyn Turbak and Allyn Dimock and Robert Muller and J. B. Wells}
title = {Compiling with Polymorphic and Polyvariant Flow Types}
crossref = {TIC1997}
year = {1997}
abstract = {
Optimizing compilers for function-oriented and object-oriented languages exploit type and flow information for efficient implementation. Although type and flow information (both control and data flow) are inseparably intertwined, compilers usually compute and represent them separately. Partially, this has been a result of the usual polymorphic type systems using $\forall$ and $\exists$ quantifiers, which are difficult to use in combination with flow annotations. \par In the Church Project, we are experimenting with intermediate languages that integrate type and flow information into a single \emph{flow type} framework. This integration facilitates the preservation of flow and type information through program transformations. In this paper we describe $\lambda^{\mathrm{CIL}}$, an intermediate language supporting polymorphic types and polyvariant flow information, and its application in program optimization. We are experimenting with this intermediate language in a flow- and type-directed compiler for a functional language. The types of $\lambda^{\mathrm{CIL}}$ encode flow information (1) via labels that approximate sources and sinks of values and (2) via intersection and union types, finitary versions of universal and existential types that support type polymorphism and (in combination with the labels) polyvariant flow analysis. Accurate flow types expose opportunities for a wide range of optimizing program transformations.
}
comment = {The disabling of the PDF links has itself been disabled: the fonts do not look that bad when printed, and PDF provides the best way to ensure that the files are printable under Windows. Please do not undo this without discussing it with me (Peter). (Earlier note: the PDF link is disabled because the PDF file has absolutely horrible and unreadable fonts. Please do not enable it until the fonts are fixed.)}
documenturl = {http://www.church-project.org/reports/Tur+Dim+Mul+Wel:CPPFT-1997.html}
churchreport = {yes}
note = {}
xnote = {note field must be blank to prevent getting crossref's note!}
}
unpublished{Kfoury:TTTR-1996a
itrspaper = {yes}
author = {Assaf J. Kfoury}
title = {Beta-Reduction as Unification}
note = {A refereed, extensively edited version is~\cite{Kfoury:Rasiowa-memorial}. This preliminary version was presented at the Helena Rasiowa Memorial Conference}
year = {1996}
month = {July}
churchreport = {yes}
abstract = {
We define a new unification problem, which we call \emph{$\beta$-unification} and which can be used to characterize the $\beta$-strong normalization of terms in the $\lambda$-calculus. We prove the undecidability of $\beta$-unification, its connection with the system of intersection types, and several of its basic properties.
}
}
unpublished{Kfoury:TTTR-1996
author = {Assaf J. Kfoury}
title = {A Linearization of the Lambda-Calculus}
note = {A refereed version is~\cite{Kfoury:JLC-2000-v10n3}. This version was presented at the Glasgow Int'l School on Type Theory \& Term Rewriting}
year = {1996}
month = {September}
churchreport = {yes}
abstract = {
We embed the standard $\lambda$-calculus, denoted $\Lambda$, into two larger $\lambda$-calculi, denoted $\Lambda^\land$ and $\&\Lambda^\land$. The standard notion of $\beta$-reduction for $\Lambda$ corresponds to two new notions of reduction, $\beta^\land$ for $\Lambda^\land$ and $\&\beta^\land$ for $\&\Lambda^\land$. A distinctive feature of our new calculus $\Lambda^\land$ (resp., $\&\Lambda^\land$) is that, in every function application, an argument is used at most once (resp., exactly once) in the body of the function. We establish various connections between the three notions of reduction, $\beta$, $\beta^\land$, and $\&\beta^\land$. As a consequence, we provide an alternative framework to study the relationship between $\beta$-weak normalization and $\beta$-strong normalization, and give a new proof of the oft-mentioned equivalence between $\beta$-strong normalization of standard $\lambda$-terms and typability in a system of ``intersection types''.
}
comment = {The funny order of things in the note is to work around the way BibTeX would really screw up the meaning by putting ", 1996" after the note, implying falsely that the year has something to do with the note.}
}
techreport{Wells:TUFE-1996
author = {J. B. Wells}
title = {Typability is Undecidable for F+Eta}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
year = {1996}
number = {96-022}
type = {Tech.\ Rep.}
month = {March}
documenturl = {http://www.church-project.org/reports/Wells:TUFE-1996.html}
churchreport = {yes}
abstract = {
System F is the well-known polymorphically-typed $\lambda$-calculus with universal quantifiers (``$\forall$''). F+$\eta$ is System F extended with the eta rule, which says that if term $M$ can be given type $\tau$ and $M$ $\eta$-reduces to $N$, then $N$ can also be given the type $\tau$. Adding the eta rule to System F is equivalent to adding the subsumption rule using the subtyping (``containment'') relation that Mitchell defined and axiomatized. The subsumption rule says that if $M$ can be given type $\tau$ and $\tau$ is a subtype of type $\sigma$, then $M$ can be given type $\sigma$. Mitchell's subtyping relation involves no extensions to the syntax of types, i.e., no bounded polymorphism and no supertype of all types, and is thus unrelated to the system $\mathrm{F}_{\leq}$ (``F-sub''). \par Typability for F+$\eta$ is the problem of determining for any term $M$ whether there is any type $\tau$ that can be given to it using the type inference rules of F+$\eta$. Typability has been proven undecidable for System F (without the eta rule), but the decidability of typability has been an open problem for F+$\eta$. Mitchell's subtyping relation has recently been proven undecidable, implying the undecidability of ``type checking'' for F+$\eta$. This paper reduces the problem of subtyping to the problem of typability for F+$\eta$, thus proving the undecidability of typability. The proof methods are similar in outline to those used to prove the undecidability of typability for System F, but the fine details differ greatly.
}
}
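The eta rule at issue is a one-step rewrite on terms. A minimal Haskell sketch over an assumed untyped AST; the paper of course works with Curry-style typed terms.

    data Term = Var String | Lam String Term | App Term Term

    freeIn :: String -> Term -> Bool
    freeIn x (Var y)   = x == y
    freeIn x (Lam y m) = x /= y && freeIn x m
    freeIn x (App m n) = freeIn x m || freeIn x n

    -- One eta step:  \x. M x  ~~>  M,  provided x is not free in M.
    etaStep :: Term -> Maybe Term
    etaStep (Lam x (App m (Var y)))
      | x == y && not (freeIn x m) = Just m
    etaStep _ = Nothing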
techreport{Jim:RTTS-1995
itrspaper = {yes}
author = {Trevor Jim}
title = {Rank 2 Type Systems and Recursive Definitions}
institution = {Massachusetts Institute of Technology}
year = {1995}
number = {MIT/LCS/TM-531}
month = {November}
abstract = {
We demonstrate the pragmatic value of the \emph{principal typing} property, a property distinct from ML's principal type property, by studying a type system with principal typings. The type system is based on rank 2 intersection types and is closely related to ML\
}
}
techreport{Jim:MIT-LCS-1995-532
itrspaper = {yes}
author = {Trevor Jim}
title = {What Are Principal Typings and What Are They Good For?}
institution = {MIT}
year = {1995}
type = {Tech.\ memo.}
number = {MIT/LCS/TM-532}
abstract = {
We demonstrate the pragmatic value of the \emph{principal typing} property, a property distinct from ML's principal type property, by studying a type system with principal typings. The type system is based on rank 2 intersection types and is closely related to ML\
}
}
inproceedings{Kfo+Wel:LICS-1995
itrspaper = {yes}
title = {New Notions of Reduction and Non-Semantic Proofs of $\beta$-Strong Normalization in Typed $\lambda$-Calculi}
author = {A. J. Kfoury and J. B. Wells}
pages = {311--321}
crossref = {LICS1995}
comment = {The PDF link is disabled because the PDF file has absolutely horrible and unreadable fonts. Please do not enable it until the fonts are fixed.}
documenturl = {http://www.church-project.org/reports/Kfo+Wel:LICS-1995.html}
source = {ftp://theory.lcs.mit.edu/pub/meyer/lics.bib}
churchreport = {yes}
year = {1995}
abstract = {
Two notions of reduction for terms of the $\lambda$-calculus are introduced and the question of whether a $\lambda$-term is $\beta$-strongly normalizing is reduced to the question of whether a $\lambda$-term is merely normalizing under one of the notions of reduction. This gives a method to prove strong $\beta$-normalization for typed $\lambda$-calculi. Instead of the usual semantic proof style based on Tait's realizability or Girard's ``candidats de réductibilité'', termination can be proved using a decreasing metric over a well-founded ordering. This proof method is applied to the simply-typed $\lambda$-calculus and the system of intersection types, giving the first non-semantic proof for a polymorphic extension of the $\lambda$-calculus.
}
}
techreport{Wells:UMSR-1995
author = {J. B. Wells}
title = {The Undecidability of Mitchell's Subtyping Relation}
institution = {Comp.\ Sci.\ Dept., Boston Univ.}
year = {1995}
month = {December}
documenturl = {http://www.church-project.org/reports/Wells:UMSR-1995.html}
postscriptb = {http://www.cs.bu.edu/techreports/ps/1995-019-subtyping-undecidable.ps.gz}
remark = {Doesn't have tech report number in the actual paper.}
type = {Tech.\ Rep.}
number = {95-019}
churchreport = {yes}
abstract = {
Mitchell defined and axiomatized a subtyping relationship (also known as \emph{containment}, \emph{coercibility}, or \emph{subsumption}) over the types of System F (with ``$\to$'' and ``$\forall$''). This subtyping relationship is quite simple and does not involve bounded quantification. Tiuryn and Urzyczyn quite recently proved this subtyping relationship to be undecidable. This paper supplies a new undecidability proof for this subtyping relationship. First, a new syntax-directed axiomatization of the subtyping relationship is defined. Then, this axiomatization is used to prove a reduction from the undecidable problem of \emph{semi-unification} to subtyping. The undecidability of subtyping implies the undecidability of \emph{type checking} for System F extended with Mitchell's subtyping, also known as ``F plus eta''.
}
}
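For readers unfamiliar with semi-unification, the decidable half of the problem is ordinary first-order matching; the undecidable part is finding the global substitution. The Haskell sketch below implements matching only and attempts no solver; the Ty syntax is an assumption of this illustration.

    import qualified Data.Map as M

    data Ty = TVar String | Arr Ty Ty
      deriving (Eq, Show)

    -- match s t: find a substitution R with R(s) = t, if one exists.
    -- A semi-unification instance asks for a single S such that, for
    -- every pair (s_i, t_i), S s_i matches onto S t_i; finding S is
    -- the undecidable part.
    match :: Ty -> Ty -> Maybe (M.Map String Ty)
    match s0 t0 = go s0 t0 M.empty
      where
        go (TVar x) t sub =
          case M.lookup x sub of
            Nothing           -> Just (M.insert x t sub)
            Just t' | t == t' -> Just sub
            _                 -> Nothing
        go (Arr a b) (Arr c d) sub = go a c sub >>= go b d
        go _ _ _ = Nothing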