Language and Computation
Author(s)
Title
Abstracts
We consider the three major approaches to semantics that can be distinguished by their formal apparatus (formulaic, geometric, and algebraic) and argue that these are the trunk, leg, and tail of the same elephant.
Hands-on Distributional Semantics – From first steps to interdisciplinary applications (Introductory course)
Distributional semantic models (DSMs) are based on the assumption that the meaning of a word can (at least to a certain extent) be inferred from its usage, i.e. its distribution in text. These models therefore build semantic representations dynamically through a statistical analysis of the contexts in which words occur. DSMs are a promising technique for overcoming the lexical acquisition bottleneck through unsupervised learning, and their distributed representation provides a cognitively plausible, robust and flexible architecture for the organisation and processing of semantic information.
In this introductory course we will highlight the interdisciplinary potential of DSMs beyond standard semantic similarity tasks; our overview will touch upon cognitive modeling, theoretical linguistics, and computational social science. This course aims to equip participants with the background knowledge and skills needed to build different kinds of DSM representations and apply them to a wide range of tasks. There will be a particular focus on practical exercises with the help of user-friendly software packages and various pre-built models.
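As a minimal illustration of the underlying idea (a sketch for orientation, not taken from the course materials), a count-based DSM can be built from simple co-occurrence counts and queried with cosine similarity:

```python
# Toy count-based distributional semantic model: co-occurrence counts within a
# symmetric window, then cosine similarity between the resulting word vectors.
# The corpus and window size are arbitrary toy choices.
from collections import Counter, defaultdict
import math

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

window = 2
cooc = defaultdict(Counter)
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[w][sent[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(cooc["cat"], cooc["dog"]))  # words with similar contexts score high
print(cosine(cooc["cat"], cooc["rug"]))
```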
Mathias Winther Madsen
This will be a *foundational course* for the *language and computation* track.
Information theory is a branch of probability theory that investigates the fundamental limits on our ability to communicate reliably. By quantifying the concepts of uncertainty and information in a mathematically meaningful way, it allows us to formulate precise results about the amount of information we can hope to transmit in the presence of noise, and evaluate whether a code achieves this maximal efficiency. These results have concrete applications for the design of communication systems, but also profound consequences for linguistics and epistemology.
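As a small illustration of the quantities involved (the numbers below are arbitrary examples, not course content), the entropy of a source and the capacity of a binary symmetric channel can be computed directly from their definitions:

```python
# Shannon entropy of a discrete source, and the capacity C = 1 - H(p) of a
# binary symmetric channel with crossover probability p.
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

source = [0.5, 0.25, 0.125, 0.125]         # a four-symbol source
print(f"H(source) = {entropy(source):.2f} bits per symbol")

flip = 0.1                                  # probability that the channel flips a bit
capacity = 1 - entropy([flip, 1 - flip])    # maximal reliable rate in bits per use
print(f"Capacity of the binary symmetric channel: {capacity:.2f} bits per use")
```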
Embeddings have been the dominant buzzword of the 2010s in Natural Language Processing (NLP). Representing knowledge through low-dimensional vectors that are easily integrated into modern machine learning algorithms has played a central role in the development of the field. Embedding techniques initially focused on words, but attention soon started to shift to other forms: from graph structures, such as knowledge bases, to other types of textual content, such as sentences and documents.
This course will provide a high-level synthesis of the main embedding techniques in NLP, in the broad sense. We will start with conventional word embeddings (e.g. Word2Vec and GloVe) and then move to other types of embeddings, such as sense-specific and graph alternatives. We will finish with an overview of the recent successful contextualized representations (e.g. ELMo, BERT) and explain their potential in NLP.
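As a minimal illustration (assuming the gensim library, version 4 or later, which the course does not necessarily use), a small Word2Vec model can be trained and queried as follows:

```python
# Sketch: train a tiny Word2Vec model on a toy corpus and query it.
# Requires gensim >= 4; corpus and hyperparameters are toy choices.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["the", "cat", "chased", "the", "dog"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("cat", topn=3))  # nearest neighbours in vector space
print(model.wv.similarity("cat", "dog"))     # cosine similarity of two word vectors
```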
Gregory Scontras
Recent advances in computational cognitive science (i.e., simulation-based probabilistic programs) have paved the way for significant progress in formal, implementable models of pragmatics. Rather than describing a pragmatic reasoning process, these models articulate and implement one, deriving both qualitative and quantitative predictions of human behavior—predictions that consistently prove correct, demonstrating the viability and value of the framework. The present course provides a practical introduction to the Bayesian Rational Speech Act modeling framework (Goodman and Frank, 2016). Through hands-on practice deconstructing web-based language models, students will learn the basics of the modeling framework. Students should expect to leave the course having gained the ability to 1) digest the primary modeling literature and 2) independently construct models of their own.
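To give a flavour of the framework, here is a minimal sketch of a vanilla RSA model for a simple reference game; the states, utterances, and rationality parameter are toy choices, not taken from the course materials:

```python
# Vanilla Rational Speech Act model (cf. Goodman & Frank, 2016) for a toy
# reference game: literal listener L0, speaker S1, pragmatic listener L1.
import numpy as np

states = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]
# meaning[u][s] = 1 if utterance u is literally true of state s
meaning = np.array([
    [1, 1, 0],   # "blue"
    [0, 0, 1],   # "green"
    [1, 0, 1],   # "square"
    [0, 1, 0],   # "circle"
], dtype=float)

alpha = 1.0                                  # speaker rationality parameter
prior = np.ones(len(states)) / len(states)   # uniform prior over states

L0 = meaning * prior                         # literal listener: P(s | u)
L0 = L0 / L0.sum(axis=1, keepdims=True)

with np.errstate(divide="ignore"):           # log(0) -> -inf -> exp -> 0
    S1 = np.exp(alpha * np.log(L0)).T        # speaker: P(u | s)
S1 = S1 / S1.sum(axis=1, keepdims=True)

L1 = S1.T * prior                            # pragmatic listener: P(s | u)
L1 = L1 / L1.sum(axis=1, keepdims=True)

# Hearing "square", the pragmatic listener favours the blue square, since the
# green square would more likely have been described as "green".
print(dict(zip(states, L1[utterances.index("square")].round(2))))
```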
The course covers developments in Combinatory Categorial Grammar (CCG) cross-linguistically regarding word order and asymmetries that arise from it. CCG is known for its account of unbounded constructions and their surface-compositional semantics, for coordination, and for its unorthodox definition of constituency.
In CCG, all language specific grammar is specified in the lexicon, and projected from there onto the sentences of languages by a small number of universal rules based on application and composition of contiguous typed constituents. All grammatical asymmetries such as case-based differences in extractability must therefore be lexically specified. The course shows that such lexicons can be specified for accusative, ergative, partial and mixed case systems, which the universal syntactic component then projects onto the sentences of the language.
We address asymmetry in constructions cross-linguistically in a database of languages that provide a stress test in depth and breadth for CCG and other formalisms. Students can assess the new developments in depth with respect to known CCG accounts, and in depth and breadth with respect to other proposals.
In this introductory course we will look at what we can learn about the semantics of natural language from corpora that pair natural language expressions with images. Thanks to active interest in tasks like image captioning and image retrieval via written descriptions, language+vision resources are now available that approach purely textual corpora in size. (E.g.: BNC, 100 Million tokens; corpora discussed here, ~50 Million tokens.)
We will look at the available data and explore some questions that can be asked with it: how expressions are to be interpreted in concrete, visually specified situations, and how speakers make linguistic choices in specific, visually represented situations; we will also discuss machine learning models for these tasks. The course will consist of lectures and hands-on parts, for which Jupyter notebooks and a remote computing environment will be provided, so that participants will only need access to a web browser.
Jean-Philippe Bernardy and Aleksandre Maskharashvili
An important aspect of human reasoning is processing underspecified information expressed in natural language (here, underspecified means that not enough information is available to make categorical judgments, like those in logic). Even so, we humans are still able to draw conclusions from underspecified information. While the study of probabilistic inference in natural language has a long tradition in logic, linguistics, and philosophy, there is a need for the development of a coherent computational approach to it. In the lectures, we will offer a theory of inference under uncertainty (underspecified information) and its computational implementation. We offer a framework based on a Bayesian probabilistic semantics: an inference under uncertainty is computed as a probabilistic inference. In particular, the conclusion is evaluated as the probability that it holds under the constraints imposed by the premises. The theory will find concrete illustration via a system built on the probabilistic programming paradigm.
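As a rough illustration of the general idea (a sketch over a toy world model, not the authors' actual system), the probability of a conclusion given underspecified premises can be estimated by rejection sampling:

```python
# Inference under uncertainty as probabilistic inference, estimated by rejection
# sampling. Premise: "most of the ten birds fly"; conclusion: "at least eight fly".
# The world model and sentences are illustrative toy choices.
import random

def sample_world():
    # Prior over worlds: each of 10 birds flies with an unknown rate p ~ U(0, 1).
    p = random.random()
    return sum(random.random() < p for _ in range(10))  # number of flying birds

def premise(n_fly):       # "most of the ten birds fly"
    return n_fly > 5

def conclusion(n_fly):    # "at least eight of the ten birds fly"
    return n_fly >= 8

samples = [sample_world() for _ in range(100_000)]
kept = [w for w in samples if premise(w)]                # condition on the premise
prob = sum(conclusion(w) for w in kept) / len(kept)
print(f"P(conclusion | premise) ≈ {prob:.2f}")
```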
Simon Charlow and Dylan Bumford
Computer programs are often factored into pure components --- simple, total functions from inputs to outputs --- and components that may have side effects --- errors, changes to memory, parallel threads, abortion of the current command, etc. In this course, we'll make the case that human languages are similarly organized around the give and pull of pure and effectful processes, and we'll aim to show how denotational techniques from computer science can be leveraged to support elegant and illuminating semantic analyses of natural language phenomena.
Lisa Beinborn and Willem Zuidema
Computational models of language serve as an increasingly popular tool to examine research hypotheses related to language processing. Representing language in a computationally processable way forces us to operationalize our underlying assumptions in a concrete, implementable and falsifiable manner.
In the last decade, distributional representations which interpret words, phrases, sentences, and even full stories as high-dimensional vectors in semantic space have become very popular. They are obtained by training language models on large corpora to optimally encode contextual information. Whereas the best-known model, word2vec, only provides standardized representations for isolated words (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013), more recent models interpret words in context.
In this tutorial, we introduce participants to the functionality of state-of-the-art contextualized language models and discuss methods for evaluating their cognitive plausibility. We explore a range of evaluation scenarios using cognitive data (e.g., eye-tracking, EEG, and fMRI) and provide practical examples.
In a second step, we aim at opening up the black box and introduce methods for analyzing the hidden representations of a computational model (e.g., diagnostic classifiers, representational similarity analysis). Participants can explore practical coding examples to analyze and visualize the similarity between computational and human representations of language.
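As an illustration of one such method, the sketch below computes a representational similarity analysis score between model representations and human data; the random matrices merely stand in for real embeddings and recordings:

```python
# Representational similarity analysis (RSA): correlate the pairwise dissimilarity
# structure of model representations with that of (here simulated) human data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 20
model_reprs = rng.normal(size=(n_words, 300))   # stand-in for contextualized embeddings
human_reprs = rng.normal(size=(n_words, 100))   # stand-in for, e.g., fMRI responses

model_rdm = pdist(model_reprs, metric="cosine")        # condensed dissimilarity matrices
human_rdm = pdist(human_reprs, metric="correlation")

rho, p = spearmanr(model_rdm, human_rdm)                # second-order similarity
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```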
Structured, probabilistic models of language understanding are booming. These models (e.g., the Rational Speech Act modeling framework) treat language understanding as a probabilistic social reasoning process, wherein speaker and listener jointly coordinate on intended meanings. Crucial to these models is the prior distribution over intended meanings, often considered to be a function of world knowledge, which has been the focus of extensive interest in computational cognitive science. In this course, we will introduce techniques for modeling the rich structure of human concepts and everyday reasoning, which will naturally serve as the background knowledge against which language understanding can faithfully occur. We will review techniques from probabilistic models of cognition and discuss their usage in formal models of pragmatics. The course will focus on the practical implementation of models and on testing them against intuitions.
Adina Williams and Ryan Cotterell
Since Shannon originally proposed his mathematical theory of communication in the middle of the 20th century, information theory has been an important way of viewing and investigating problems at the interfaces between linguistics, cognitive science, and computation. With the upsurge in applying machine learning approaches to linguistic questions, information-theoretic methods are becoming an ever more important tool in the linguist’s toolbox. The course emphasizes interdisciplinary connections between the fields of linguistics and natural language processing. We plan to do this by first establishing a firm mathematical basis and then showing how it can be fruitfully applied to several linguistic applications, ranging from semantics, typology, morphology, and phonotactics to the interface between cognitive science and linguistics.
The goal of this workshop is to bring together people interested in structured representations of semantic information, especially from a computational perspective. In recent years, there has been a growing body of research which aims to integrate structured entities into formal semantic accounts. Important developments in this direction are the introduction of rich type systems and the use of frame-based representations, among others. The workshop is open to both foundational issues of structured semantic representations and applications to specific linguistic phenomena.
The past years have seen a growing interest in analyzing language beyond the sentence level, in theoretical linguistics, computational linguistics, as well as in psycholinguistics. The developments are driven by a range of factors, from increased interest in authentic language use in context to theoretical and practical motivations for the integration of top-down and bottom-up analysis. In theoretical linguistics, advances in formal pragmatics are extending the reach of formal linguistic analysis, and in computational linguistics interest in applications such as dialogue systems and argumentation mining can connect to early research on dialogue structure and Rhetorical Structure Theory, and multi-layer corpus annotation efforts such as NXT Switchboard or the Penn Discourse Treebank.
In this workshop, we want to bring together researchers from various subdisciplines who are working on aspects of discourse annotation. On the one hand, we plan to particularly encourage the presentation of research combining different aspects of discourse annotation, including i) the annotation of discourse relations (including argumentation mining), ii) the annotation of information-structural concepts, such as focus/background, given/new, or theme/rheme, and iii) research in the newly emerging field of QUD annotation, which includes aspects of both previous areas: a QUD is determined for each sentence, providing information about the information structuring and about the overall discourse structure in the form of a QUD tree. On the other hand, we would also like to include research on different annotation methods in our workshop: manual annotation (focussing on the evaluation of annotation schemes and the reliability of the annotation in the form of agreement values); crowdsourced annotation, also a form of manual annotation, but carried out by untrained crowd workers rather than experts, thereby enabling the annotation of larger resources; and finally the automatic labelling of discourse information, an enterprise that is increasing in relevance, but for which, for many types of annotation schemes, it is still difficult to obtain reliability comparable to manual annotation.
The traditional methodology of natural language semantics has used tools largely drawn from mathematical logic to build models designed to predict native speaker judgments of entailment between sentences, truth values of sentences in context, and the like. Recent advances in applying computational and experimental methods in semantics have widened both the scope and type of explanations that can be offered for semantic phenomena. This workshop will bring together people working on these new methodologies, to present both new semantic research using computational and experimental methods as well as to address broader methodological questions.
AREA II is the follow-up to the first AREA meeting at LREC 2018 (http://www.areaworkshop.org/).
There has recently been increased interest in modeling actions, as described by natural language expressions and gestures, and as depicted by images and videos. Additionally, action modeling has emerged as an important topic in robotics and HCI. The goal of the AREA II workshop is to gather and discuss advances in research areas where actions are paramount, e.g., virtual embodied agents, robotics, HRI, human-computer communication, as well as the modeling of multimodal human-human interactions involving actions. Action modeling is an inherently multi-disciplinary area, involving contributions from computational linguistics, AI, semantics, robotics, psychology, and formal logic.
Logic and Language
Author(s)
Title
Abstracts
The notion of commitment has a deep history in philosophy, but in recent years it has been gaining traction in other disciplines as well, including logic, linguistics, computer science, and psychology. It is now widely agreed that commitments are essential to social interaction in general and communication in particular. A commitment-based approach to pragmatics is appealing not only because it opens new perspectives on a wide range of pragmatic phenomena, but also because it presents an alternative to the received doctrine that communication is a matter of expressing and recognising communicative intentions. This course discusses various ways in which commitments may be involved in communication, and it also shows how commitments are instrumental in planning and coordination, thus linking communication with other forms of social interaction.
This foundational course will serve as an introduction to the formal semantics of natural language, as introduced by Richard Montague and subsequently developed by his followers.
Thomas Graf
This course is aimed at students with a minimum level of mathematical maturity (sets, functions, relations, logic) but no previous experience in linguistics or the theory of computation. It explores the benefits of a formally rigorous view of language through the lens of subregular complexity, expressed as fragments of first-order logic. The central message of the course is that it is easier than ever before to study language from a computationally informed perspective, and that this is a very fertile research perspective with an abundance of low hanging fruit ready for the picking.
Topics include the subregular analysis of phonology (sound), morphology (words), and syntax (sentences), implications for learnability, parsing, and typology, and --- depending on student interest --- subregular semantics or connections to neural networks. Particular emphasis will be put on the concrete analysis of empirical phenomena such as long-distance harmony or island constraints.
We consider the three major approaches to semantics that can be distinguished by their formal apparatus (formulaic, geometric, and algebraic) and argue that these are the trunk, leg, and tail of the same elephant.
Animal linguistics investigates the linguistic (e.g., semantic and syntactic) properties of non-human animal communication systems, using a combined methodology from both linguistics and ethology. This course aims to introduce linguists to the field of animal linguistics (with a focus on animal semantics) and provide them with the core ethological and linguistic methods and tools needed to understand, criticize, and run an animal linguistic study - including study design, data collection, data extraction, and formal linguistic analysis. The course is structured as follows: we begin with a general introduction to animal linguistics, reviewing some of the key studies that legitimize the field. In classes two and three, we present the main ethological methods used to collect and process data on animal communication; we focus on two case studies: monkey vocalizations and great apes' gestures. Finally, in the last two classes we illustrate how formal semantic analyses can be applied to these data.
Level: advanced
There has been a lot of recent work in formal semantics and philosophy of language on predicates of personal taste (PPTs) such as "tasty" and "fun". The linguistic behavior of PPTs differs from that of other predicates (OPs), like "round" or "popular", both in grammatical distribution and conversational dynamics: for example, disagreements over tastiness are seen as matters of opinion, not fact. The PPT-OP difference is at the heart of an ongoing contextualism-relativism-expressivism debate.
This course is a focused examination of PPTs—the empirical discoveries, the theoretical landscape, their connection with subjective language—much as has been done in (von Fintel and Gillies 2008) for epistemic modality. The course is aimed at anyone familiar with intensional semantics and formal pragmatics and does not presuppose prior background on subjectivity, which will be provided.
ADDITIVITY and SCALARITY, linguistically encoded by alternative-sensitive operators, are in principle independent of each other: There are additives which are not scalars (e.g. 'also' / 'too'), as well as scalars which are not additives (e.g. 'only'-like particles).
In this course, we intend to focus on two types of expressions where additivity and scalarity co-exist (sometimes referred to as ‘scalar additives’). The first is illustrated by English 'even', and the second by English additive 'more'.
We will introduce observations, analyses, and debates in the literature regarding similarities and differences between these types of expressions with respect to, e.g., types of scales, ‘norm-related’ effects, accentuation patterns, anaphoricity, and discourse structure. We will take a cross-linguistic stance including, e.g., data from Hebrew, German, Chinese, Russian, and African languages. Finally, we will broaden the picture beyond the domain of alternative-sensitive operators, considering additivity in the context of expressions of sameness, similarity and distinctness.
Metaphor is a pervasive factor in natural language, underlying the conscious creation of verbal images (e.g., Brexit as a divorce), but also the flexibility of our most basic vocabulary (like prepositions). The nature of metaphors (as mappings between conceptual domains) has been a constant concern of cognitive linguists and psychologists since the eighties, and there is a booming interest in detecting them in texts using computational techniques. Interestingly, there is hardly any formal semantic work on metaphors (e.g. their compositionality), which obviously hampers a full understanding of how metaphor works. This course provides a broad introduction to the study of metaphors: how they are defined, analyzed as mappings, identified in corpora, represented in databases, and formalized in semantic and pragmatic theories. This will enable students to find their way in this large area and to contribute, from their own area of expertise, to the development of a full-fledged semantics of metaphor.
Elsi Kaiser and Deniz Rudin
Predicates of personal taste (PPTs) like "tasty" and "fun" have prompted much discussion in semantics and philosophy: We can disagree about whether the chili is tasty without either of us being obviously wrong. Philosophically, it's interesting how disagreements can be "faultless." Semantically, how can we capture whose taste is being expressed--by means of extra parameters, covert pronouns, generic operators? These theoretical problems are increasingly investigated using experimental methods. Psycholinguistic tools yield new kinds of data to apply to old questions about PPTs, and highlight new questions at the linguistics/psychology interface. In this advanced, interdisciplinary course, we present the classic observations and seminal proposals about PPTs that crosscut these disciplines, and highlight emerging routes forward in contemporary work, including questions about the class of relevant subjective predicates beyond classics like "tasty" and "fun." Students will learn about key theoretical debates and how to use experimental methods to investigate them.
Judith Tonhauser and Judith Degen
Projective content is utterance content that a listener may take a speaker to be committed to even when the expression associated with the content occurs in an entailment-canceling environment. Well-known classes of projective content are presuppositions and conventional implicatures, but other content may also project, including expressive content, conversational implicatures, as well as aspects of social meaning. This course introduces students to the state of the art in the investigation and formal analysis of projective content: specifically, it introduces students to theoretical analyses of different classes of projective content, to empirical properties of classes of projective content and to how experimental investigations can inform the development of formal analyses of projective content. Whenever possible, the course draws on research on languages other than English to encourage the development of research on projective content in a wider variety of languages. At the end of the course, students are able to develop their own research projects on projective content.
Dynamic inquisitive semantics
This course brings together two important strands in the formal analysis of natural language meaning: dynamic semantics and inquisitive semantics. It develops an integrated logical framework, dynamic inquisitive semantics, and demonstrates how this framework can be used to analyze various dynamic aspects of the meaning of questions in discourse.
Specific topics include anaphora within and across questions, intervention effects in questions analyzed as failed dynamic binding, the semantics of Q particles as dynamic identification operators, dynamic quantification and the derivation of pair-list readings in questions with quantifiers, and connections between weak/strong donkey anaphora and exhaustive/non-exhaustive question readings. We will see that the analysis of these phenomena in the proposed framework goes beyond the current state of the art.
This course introduces two competing approaches to the semantics of intensional constructions, viz. intensionalism and propositionalism, that have recently come to the forefront of discussion in semantics and the philosophy of language (see Zimmermann 2016; Forbes 2018; Grzankowski and Montague 2018). Propositionalism assumes that all intensional constructions can be interpreted as cases of truth-evaluable, clausal embedding. Intensionalism denies this and insists that the complements of certain intensional constructions require interpretation as some type of non-truth-evaluable meaning (e.g. individual concepts or properties). Our course surveys different variants of intensionalism and propositionalism, compares their type-theoretic foundations, and describes their respective methodological and empirical merits. As a result, the course will serve as a theory-driven (second) introduction to intensional semantics. Its integration of research topics from linguistics, logic, and philosophy makes this course well-suited for an interdisciplinary venue like ESSLLI.
This course provides an introduction to abstract argumentation and illustrates its points of contact with other formal disciplines such as game theory and, in particular, modal logic. After introducing the basic concepts behind abstract argumentation, we discuss the definition of attack graph and the key notion of solution concept for it. Subsequently, we illustrate the connection of abstract argumentation with the theory of games played on graphs. We then introduce the use of modal logic techniques for the formalization and analysis of abstract argumentation and explore the expressivity of modal languages with respect to the logical space of solution concepts for attack graphs. Finally we explore a number of open research questions in the field, some of the most relevant extensions of attack graphs and their theoretical underpinnings. The course will be taught at the blackboard/whiteboard.
This course builds on 'Linguistic applications of mereology', offered in the first week, and develops a unified theory of cross-categorial similarities involving the count-mass, singular-plural, telic-atelic, and collective-distributive opposition. The basics of the theory and the notion of stratified reference are developed on Day 1. Day 2 is devoted to issues in the domain of measurement, such as the difference between 'thirty liters of water' and '*thirty degrees Celsius of water'. Day 3 is about differences within the class of collective predicates, as exemplified by the contrast between 'all the students gathered' and '*all the students were numerous'. Day 4 reformulates distributivity operators, extends them to the temporal domain, and explains why indefinites in the syntactic scope of 'for'-adverbials tend not to covary. Day 5 is devoted to the crosslinguistic semantic differences between distance-distributive items such as English 'each' and German 'jeweils' and to the semantics of distributive determiners.
Hans Kamp and Emar Maier
We survey some of the current developments in the dynamic semantic framework of Discourse Representation Theory (DRT). We cover DRT approaches to several key phenomena in formal semantics/pragmatics, like (in)definites, indexicality, and propositional attitudes. Starting from classic DRT and presupposition resolution, we examine various recent extensions of the framework for dealing with rigidity, the representation of complex mental states, fiction interpretation, and pictorial narratives.
Judith Degen, Benjamin Spector and Daniel Lassiter
We propose to organize a workshop which will bring together researchers interested in two prominent threads of work on implicature in natural language. The first is the rational choice approach associated with game-theoretic pragmatics and its close relative, the Bayesian Rational Speech Acts model. The second is the exhaustification-based approach. While these approaches have generally been thought to be in theoretical tension, there are also underexplored ways to combine them, with the potential to benefit both approaches. We will encourage researchers to compare the theoretical resources and empirical predictions of the two approaches separately and also when combined, with the hope of producing a more unified theory of implicature and a more general understanding of the data that such a theory must account for.
Mora Maldonado, Alexander Martin and Jennifer Culbertson
What is the range of variation in human languages? Despite their apparent differences, natural languages seem to exhibit many profound similarities. These similarities, often referred to as _linguistic universals_ (though they are often strong statistical tendencies rather than strictly universal), occur at all levels of linguistic analysis, ranging from phonology to semantics. In order to identify which properties are constant across languages and which ones are not, researchers have traditionally relied mainly on data from acquisition or typology. More recently, the use of experimental approaches to the study of linguistic universals has served not only to complement traditional sources but also to provide a cognitive explanation for why these universals hold (Culbertson 2018). This workshop aims at bringing together researchers doing experimental work on language universals from syntactic, semantic, and general cognitive science perspectives.
Logic and Computation
Author(s)
Title
Abstracts
During the last decade, there has been a rise of interest in the study of a unified logical theory for the concept of dependence.
A multitude of logics has been introduced in the first-order, propositional, and modal contexts.
The common denominator of these logics is the adoption of team semantics as a core notion.
These new logics have several applications in many different areas of research such as database theory, linguistics, and philosophy.
The main goal of this course is to introduce a propositional variant of dependence logic and to investigate the computational complexity of satisfiability, validity, and model checking.
The students will acquire technical key observations and techniques in the context of propositional dependence logic which will be useful to improve the overall understanding of complexity, reductions, and algorithms.
Christoph Berkholz and Thomas Zeume
This course offers an introduction to foundations and methods of database theory with a focus on logic-based query languages and their properties. In the first part we introduce classical connections between database query languages and logics. In particular we will see how logical methods help to study the algorithmic complexity of evaluating queries and how they help to understand the expressivity of query languages. In the second part we will apply these methods to dynamic query evaluation. Here we focus on two facets that are active research topics in the database theory community. The first one is the algorithmic complexity of dynamic query evaluation, which studies runtime guarantees for dynamic algorithms. The second one is the dynamic descriptive complexity framework, which studies the expressiveness of logically defined database updates.
Pablo Barceló and Diego Figueira
This course will introduce and discuss the most important types and systems of temporal logics, including instant-based logics of linear and branching time, interval-based temporal logics, hybrid temporal logics, and temporal-epistemic logics.
The course is intended as interdisciplinary, for a broad audience of graduate students interested in logical, philosophical, and computational aspects of temporal reasoning.
Réka Markovich and Leon van der Torre
Introduction to Deontic Logic and Its Applications (Introductory course)
The aim of the course is to provide an introduction to deontic logic and its main applications from a formal and computational point of view, highlighting the basic notions and approaches and surveying the prominent applications of deontic logic: law, AI ethics, and social networks. We first examine the building blocks of deontic logics (obligation, preferences, actions) and introduce the standard system (SDL). Then we present the so-called alternative approach: rule-based systems, discussing the notion of norm, from which obligation and permission are gained by detachment. We introduce Input/Output Logic, a new standard of the latter type. We discuss three areas of application. Regarding legal applications, we touch upon the formal theory of normative positions. Regarding social networks, we discuss normative multi-agent systems; concerning ethics, we discuss the formal reconstruction of conflicting norms and moral dilemmas, introducing a methodology for resolving these among the stakeholders of AI systems.
Coalition logics and closely related STIT logics are widely studied formalisms for describing the strategic power of agents and their coalitions. Recently, variations of these logics have been used to reason about counterfactual emotions (such as regret, rejoicing, disappointment, elation), socially friendly strategies, and blameworthiness. The course will introduce students to coalition logics and the above-mentioned variations of these logics, and explore directions for future research in this area.
LEVEL: Introductory
Proof theory for non-classical logics has been deeply investigated. Standard techniques to prove the completeness of a calculus are based on syntactic methods, mostly relying on cut-elimination. The main drawback is that, from a failed proof search, no information about the non-validity of a formula is gained. An alternative way is to exploit semantics: to prove that a calculus is complete, one has to show that, whenever a formula is not provable in the calculus, a countermodel for it can be built. A countermodel can be viewed as a certificate of the non-validity of a formula; moreover, from an inspection of the completeness proof, one can design efficient proof-search strategies for the calculus at hand. In the course we present efficient proof-search strategies supporting countermodel generation, focusing on Intuitionistic Propositional Logic. We present the main ideas behind the technique and some significant examples.
This course will survey the use of probabilistic methods and computer simulations to study group decision making methods. The course will begin with an introduction to social choice theory (primarily focusing on the mathematical analysis of voting methods), with an emphasis on the use of probabilistic methods to study key issues in social choice theory. Additional topics include: the random utility model; calculating the probability of voting paradoxes (such as the Condorcet paradox); quantitative analysis of voting methods (e.g., finding the Condorcet efficiency and the Nitzan-Kelly index of a voting method); probabilistic voting methods (voting methods in which the output is a lottery over the set of alternatives); the impartial culture assumption (and related assumptions); and the Condorcet jury theorem and related results. Students will have hands-on experience developing a computer simulation that will illustrate the main topics discussed in the course. Although previous programming experience will be helpful (especially with Python), the course will be accessible to students with no previous programming experience.
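By way of illustration, a minimal simulation of the kind the course describes might estimate the probability of a Condorcet paradox under the impartial culture assumption; the parameters below (three candidates, eleven voters) are arbitrary choices, not the course's:

```python
# Estimate the probability that no Condorcet winner exists (a majority cycle)
# when voters' rankings are drawn uniformly at random (impartial culture).
import random
from itertools import permutations

candidates = ["a", "b", "c"]
rankings = list(permutations(candidates))

def has_condorcet_winner(profile):
    # x is a Condorcet winner if a strict majority ranks x above every other y
    for x in candidates:
        if all(sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2
               for y in candidates if y != x):
            return True
    return False

n_voters, n_trials = 11, 20_000
paradoxes = 0
for _ in range(n_trials):
    profile = [random.choice(rankings) for _ in range(n_voters)]
    if not has_condorcet_winner(profile):
        paradoxes += 1
print(f"P(no Condorcet winner) ≈ {paradoxes / n_trials:.3f}")
```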
Hilbert's 24th problem is the question of when two proofs are the same. The problem is as old as proof theory itself, but there is still no satisfactory solution. The main reason for this is the syntactic bureaucracy of formal proof systems. In order to reduce this bureaucracy, Girard introduced proof nets, a certain kind of graphs, which provide a solution for some fragments of linear logic, but not for classical logic. For this reason, Hughes introduced combinatorial proofs, which can be seen as a combination of a proof net and a skew fibration. The common feature of proof nets and combinatorial proofs is that they are not syntactic objects but combinatorial objects. The interplay between syntax and combinatorics will be the main topic of the course.
During the past fifty years, first-order logic and its variants have been successfully used as a database query language. In traditional applications, the data at hand are assumed to be unambiguous and consistent, which implies that queries (specified in some logical formalism) are posed on a well-defined complete and consistent database. In more recent applications, however, the data may be incomplete, inconsistent, or uncertain. This state of affairs necessitates the introduction and study of the certain answers of database queries, which is an alternative semantics that takes the incompleteness, inconsistency, or uncertainty of the data into account. The aim of this course is to examine the semantics and the algorithmic aspects of the certain answers to queries in four different such contexts, namely, in the contexts of data exchange and integration, inconsistent databases, probabilistic databases, and voting with partial preferences.
Anuj Dawar and Gregory Wilsenach
The notion of a symmetric algorithm, i.e. one with an explicit combinatorial property that guarantees isomorphism-invariant computation, arises naturally in the context of database theory, finite model theory, circuit complexity, the theory of relational machines, and theory of linear programming. The apparently very different models of symmetric computation developed in these disparate fields have recently been shown to be closely related both in terms of expressive power and underlying theory. The set of ideas that emerge in all of these cases have coalesced into a coherent and robust theory of symmetric computation.
This course will introduce this exciting and emerging new theory. In particular, we will present the various symmetric models introduced in each of these fields and establish their close relationship. We will develop the common theory and the methods for proving upper and lower bounds on expressive power. These rest on the Weisfeiler-Leman equivalences, the related notion of counting width, and logical reductions. Lastly, we will discuss extensions of these models and a number of open questions.
Fei Liang and Alessandra Palmigiano
Categories are cognitive tools that humans use to organize their experience, understand and function in the world, and understand and interact with each other, by grouping together things which can be meaningfully compared and evaluated. They are key to the use of language, the construction of knowledge and identity, and the formation of agents' evaluations and decisions. Categorization is the basic operation humans perform, e.g. when they relate experiences/actions/objects in the present to ones in the past, thereby recognizing them as instances of the same type. This is what we do when we try to understand what an object is or does, or what a situation means, and when we make judgments or decisions based on experience. The literature on categorization is expanding rapidly in fields ranging from cognitive linguistics to social and management science to AI, and the emerging insights common to these disciplines concern the dynamic essence of categories and the tight interconnection between the dynamics of categories and processes of social interaction. However, these key aspects are precisely those that both the extant foundational views on categorization and the extant mathematical models for concept-formation struggle the most to address. The main idea this course will try to convey is that categorization is the single cognitive mechanism underlying meaning-attribution, value-attribution and decision-making. In this course, we will discuss a logical approach which aims at creating an environment in which these three cognitive processes can be analyzed in their relationships to one another, and we propose several research directions which, if developed, will allow logicians to build novel foundations of categorization theory.
C. I. Lewis invented modern modal logic as a theory of "strict implication". Over the classical propositional calculus, one can just as well work with the unary box connective. Intuitionistically, however, strict implication has greater expressive power and allows distinctions invisible in the ordinary syntax. Thus, in this course we study constructive systems of strict implication. We discuss conditions to be imposed on Kripke semantics, the axiomatization of the minimal system and some of its extensions, and some basic correspondence results.
We illustrate
- when and how this logic collapses to that of unary box;
- how classical assumptions made the trivialization of Lewis's original 1918 system inevitable.
Furthermore, we present two interpretations of this system. The first comes from provability logic, more specifically preservativity in extensions of Heyting's Arithmetic. The second, the Curry-Howard one, is provided by functional programming: the study of Haskell "arrows" as contrasted with "idioms" or "applicative functors".
This is a workshop on "Logics of dependence and independence", consisting of a 5-day programme of invited and contributed talks. Logics of dependence and independence are novel non-classical logics aiming at characterising notions of dependence and independence in the sciences. This field of research has grown rapidly in recent years; in particular, the logics have also found applications in fields like database theory, linguistics, social choice, quantum physics and so on. This workshop will bring together researchers from all these relevant areas, and it also welcomes researchers and students at all levels.
Natasha Alechina and Brian Logan
This workshop is in the area of logic and computation. It aims to bring together work on using logic, games and automata for automatically generating plans and strategies for AI agents. Topics include but are not limited to: reactive synthesis, strategy synthesis under resource constraints, epistemic planning.