PLDI 2019 Workshops
40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2019)

17th ACM SIGPLAN International Symposium on Database Programming Languages (DBPL 2019), June 23, 2019, Phoenix, AZ, USA

DBPL 2019 – Preliminary Table of Contents




Title Page

Message from the Chairs


TBA (Keynote)
Jeremy Gibbons
(University of Oxford, UK)



Detecting Unsatisfiable CSS Rules in the Presence of DTDs
Nobutaka Suzuki, Takuya Okada, and Yeondae Kwon
(University of Tsukuba, Japan; University of Tokyo, Japan)

Cascading Style Sheets (CSS) is a popular language for describing the styles of XML documents as well as HTML documents. For a DTD D and a list R of CSS rules, R may, due to specificity, contain “unsatisfiable” rules under D, i.e., rules that are never applied to any element of any document valid with respect to D. In this paper, we consider the problem of detecting unsatisfiable CSS rules under DTDs. We focus on CSS fragments in which descendant, child, adjacent sibling, and general sibling combinators are allowed. We show that the problem is coNP-hard in most cases, even if only one of the four combinators is allowed. We also show that the problem is in coNP or PSPACE depending on restrictions on DTDs and CSS. Finally, we present two conditions under which the problem can be solved in polynomial time.
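As a rough illustration of the problem the abstract describes (not the authors' algorithm, which must handle specificity and all four combinators), the sketch below checks a single child-combinator rule against a toy DTD, modeled simply as a map from each element to the child elements its content model allows:

```python
# Hypothetical sketch: a CSS rule `parent > child` is satisfiable under a
# (heavily simplified) DTD only if `parent` occurs in some valid document
# and its content model allows `child`.
from collections import deque

def reachable(dtd, root):
    """Elements that can occur in some document valid under the DTD."""
    seen, queue = {root}, deque([root])
    while queue:
        elem = queue.popleft()
        for child in dtd.get(elem, set()):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def child_rule_satisfiable(dtd, root, parent, child):
    """Can the selector `parent > child` match any valid document?"""
    return parent in reachable(dtd, root) and child in dtd.get(parent, set())

dtd = {
    "doc":     {"section"},
    "section": {"title", "para"},
    "title":   set(),
    "para":    set(),
}

print(child_rule_satisfiable(dtd, "doc", "section", "para"))  # True
print(child_rule_satisfiable(dtd, "doc", "para", "title"))    # False: unsatisfiable rule
```

The paper's coNP-hardness results indicate that no such simple reachability check suffices once combinators interact with general DTD content models.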

On the Semantics of Cypher's Implicit Group-by
Filip Murlak, Jan Posiadała, and Paweł Susicki
(University of Warsaw, Poland; Nodes and Edges, Poland)
Cypher is a popular declarative query language for property graphs. Despite having been adopted by several graph database vendors, it lacks a comprehensive semantics other than the reference implementation. This paper stems from Cypher.PL, a project aimed at creating an executable (and readable) semantics of Cypher in Prolog, and focuses on Cypher's implicit group-by feature. Rather than being explicitly specified in the query, in Cypher the grouping key is derived from the return expressions. We show how this becomes problematic when a single return expression mixes unaggregated property references and aggregating functions, and discuss ways of giving this construct a proper semantics without defying common sense.
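To make the abstract's point concrete, here is a small sketch (an assumption-laden model, not Cypher.PL) of how an implicit grouping key can be derived from RETURN items, and why an item that mixes an unaggregated property reference with an aggregate has no obvious place in the scheme:

```python
# Return items are modeled as ("agg", fn, arg) for aggregating functions,
# ("expr", text) for plain expressions, and ("mixed", text) for an
# expression such as n.x + count(*) that combines both kinds.

def grouping_key(return_items):
    key, problems = [], []
    for item in return_items:
        kind = item[0]
        if kind == "expr":
            key.append(item[1])      # unaggregated: contributes to the key
        elif kind == "agg":
            pass                     # aggregate: computed per group
        else:
            problems.append(item[1]) # mixed: semantics unclear
    return key, problems

# RETURN n.name, count(n)  -> group by n.name
print(grouping_key([("expr", "n.name"), ("agg", "count", "n")]))
# RETURN n.x + count(*)    -> no well-defined grouping key
print(grouping_key([("mixed", "n.x + count(*)")]))
```

The paper's contribution is precisely a principled semantics for the "mixed" case that this naive classification cannot handle.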
Fluid Data Structures
Darshana Balakrishnan, Lukasz Ziarek, and Oliver Kennedy
(SUNY Buffalo, USA)
Functional (aka immutable) data structures are used extensively in data management systems. From distributed systems to data persistence, immutability makes complex programs significantly easier to reason about and implement. However, immutability also makes many runtime optimizations, like tree rebalancing or adaptive reorganization, unreasonably expensive. In this paper, we propose Fluid data structures, an approach to data structure design that allows limited physical changes that preserve logical equivalence. As we will show, this approach retains many of the desirable properties of functional data structures, while also allowing runtime adaptation. To illustrate Fluid data structures, we work through the design of a lazy-loading map that we call a Fluid Cog. A Fluid Cog is a lock-free data structure that incrementally organizes itself in the background by applying equivalence-preserving structural transformations. Our experimental analysis shows that the resulting map structure is flexible enough to adapt to a variety of performance goals, while remaining competitive with existing structures such as the C++ standard template library map.
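A minimal sketch of the core idea, equivalence-preserving structural transformation (this is an assumed illustration, not the paper's Fluid Cog): a tree rotation physically reorganizes a binary search tree while leaving its logical content, the in-order key sequence, unchanged.

```python
# A right rotation is a physical change that preserves logical equivalence:
# (B (A x y) z) -> (A x (B y z)) keeps the in-order sequence identical.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.key] + inorder(node.right)

def rotate_right(node):
    pivot = node.left
    node.left = pivot.right
    pivot.right = node
    return pivot

tree = Node(3, Node(1, None, Node(2)), Node(4))
before = inorder(tree)
tree = rotate_right(tree)   # physical change
after = inorder(tree)
print(before == after == [1, 2, 3, 4])  # logical content preserved
```

A Fluid structure can apply such transformations lazily and in the background precisely because observers cannot distinguish the before and after states logically.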
Language-Integrated Provenance by Trace Analysis
Stefan Fehrenbach and James Cheney
(University of Edinburgh, UK)

Language-integrated provenance builds on language-integrated query techniques to make provenance information explaining query results readily available to programmers. In previous work we have explored language-integrated approaches to provenance. However, implementing a new form of provenance in a language-integrated way is still a major challenge. We propose a self-tracing transformation and trace analysis features that, together with existing techniques for type-directed generic programming, make it possible to define different forms of provenance as user code. We present our design as an extension to a core language for Links called TLinks, give examples showing its capabilities, and outline its metatheory and key correctness properties.
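As a rough, assumed illustration of the kind of provenance the abstract wants user code to define (where-provenance; this is not TLinks and the names are invented): every value is paired with its source location, and a query propagates those annotations so each result explains where it came from.

```python
# where-provenance sketch: each cell carries (value, (table, row, column)).

def annotate(table_name, rows):
    """Pair each cell with a (table, row-index, column) source tag."""
    return [{col: (val, (table_name, i, col)) for col, val in row.items()}
            for i, row in enumerate(rows)]

def query(agencies):
    # SELECT name FROM agencies WHERE based = 'Edinburgh'
    return [row["name"] for row in agencies
            if row["based"][0] == "Edinburgh"]

agencies = annotate("agencies", [
    {"name": "EdinTours", "based": "Edinburgh"},
    {"name": "Burns",     "based": "Glasgow"},
])
print(query(agencies))  # [('EdinTours', ('agencies', 0, 'name'))]
```

The challenge the paper addresses is making such annotation-propagating versions of queries derivable generically from traces, rather than hand-written per query as here.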

Arc: An IR for Batch and Stream Programming
Lars Kroll, Klas Segeljakt, Paris Carbone, Christian Schulte, and Seif Haridi
(KTH, Sweden; RISE SICS, Sweden)
In big data analytics, there is currently a large number of data programming models and their respective frontends, such as relational tables, graphs, tensors, and streams. This has led to a plethora of runtimes that typically focus on the efficient execution of just a single frontend. This fragmentation manifests itself today in highly complex pipelines that bundle multiple runtimes to support the necessary models. Hence, joint optimization and execution of such pipelines across these frontend-bound runtimes is infeasible. We propose Arc as the first unified Intermediate Representation (IR) for data analytics that incorporates stream semantics based on a modern specification of streams, windows, and stream aggregation, to combine batch and stream computation models. Arc extends Weld, an IR for batch computation, and adds support for partitioned, out-of-order stream and window operators, which are the most fundamental building blocks in contemporary data streaming.
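A back-of-envelope sketch (assumed, not Arc's actual IR) of the kind of operator the abstract says Arc adds on top of Weld's batch model: tumbling-window aggregation over a timestamped stream, where events may arrive out of order and are assigned to the window containing their timestamp.

```python
from collections import defaultdict

def tumbling_sum(events, width):
    """events: iterable of (timestamp, value); returns {window_start: sum}."""
    windows = defaultdict(int)
    for ts, value in events:
        window_start = (ts // width) * width
        windows[window_start] += value
    return dict(windows)

# Out-of-order arrivals (3 after 7) still land in the right window.
stream = [(1, 10), (7, 5), (3, 2), (12, 1)]
print(tumbling_sum(stream, 5))  # {0: 12, 5: 5, 10: 1}
```

A batch computation is then the degenerate case of a single window covering the whole input, which is what lets one IR express both models.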
Towards Compiling Graph Queries in Relational Engines
Ruby Y. Tahboub, Xilun Wu, Grégory M. Essertel, and Tiark Rompf
(Purdue University, USA)
The increasing demand for graph query processing has prompted the addition of support for graph workloads on top of standard relational database management systems (RDBMS). Although this appears to be a good idea --- after all, graphs are just relations --- performance is typically suboptimal, since graph workloads are naturally iterative and rely extensively on efficient traversal of adjacency structures that are not typically implemented in an RDBMS. Adding such specialized adjacency structures is not at all straightforward due to the complexity of typical RDBMS implementations. The iterative nature of graph queries also practically requires a form of runtime compilation and native code generation, which adds another dimension of complexity to the RDBMS implementation and any potential extensions. In this paper, we demonstrate how the idea of the first Futamura projection, which links interpreted query engines and compilers through specialization, can be applied to compile graph workloads in an efficient way that simplifies the construction of relational engines that also support graph workloads. We extend the LB2 main-memory query compiler with graph adjacency structures and operators. We implement evaluation for a subset of the Datalog logical query language to process graph and recursive queries efficiently. The graph extension matches, and sometimes outperforms, best-of-breed low-level graph engines.
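For readers unfamiliar with the first Futamura projection, here is a hypothetical miniature of the idea (not LB2's implementation): specializing a tiny predicate "interpreter" with respect to a fixed query emits residual code, i.e., a compiled query, so the interpretive overhead disappears from the inner loop.

```python
# Predicates are trees: ("and", p, q), ("gt", field, const), ("eq", field, const).

def compile_pred(pred):
    """Specialize the interpreter to `pred`: emit a Python expression."""
    op = pred[0]
    if op == "and":
        return f"({compile_pred(pred[1])} and {compile_pred(pred[2])})"
    if op == "gt":
        return f"(row[{pred[1]!r}] > {pred[2]!r})"
    if op == "eq":
        return f"(row[{pred[1]!r}] == {pred[2]!r})"
    raise ValueError(op)

pred = ("and", ("gt", "age", 30), ("eq", "city", "Phoenix"))
src = f"lambda row: {compile_pred(pred)}"
query = eval(src)  # the residual program: no tree-walking at runtime

rows = [{"age": 41, "city": "Phoenix"}, {"age": 25, "city": "Phoenix"}]
print([r for r in rows if query(r)])  # [{'age': 41, 'city': 'Phoenix'}]
```

The paper applies this specialization discipline to graph adjacency traversal and recursive Datalog evaluation rather than to flat predicates.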
Streaming Saturation for Large RDF Graphs with Dynamic Schema Information
Mohammad Amin Farvardin, Dario Colazzo, Khalid Belhajjame, and Carlo Sartiani
(PSL Research University, France; Université Paris-Dauphine, France; LAMSADE, France; University of Basilicata, Italy)
In the Big Data era, RDF data are produced in high volumes. While there exist proposals for reasoning over large RDF graphs using big data platforms, there is a dearth of solutions that do so in environments where RDF data are dynamic, and where new instance and schema triples can arrive at any time. In this work, we present the first solution for reasoning over large streams of RDF data using big data platforms. In doing so, we focus on the saturation operation, which seeks to infer implicit RDF triples given RDF schema constraints. Indeed, unlike existing solutions which saturate RDF data in bulk, our solution carefully identifies the fragment of the existing (and already saturated) RDF dataset that needs to be considered given the fresh RDF statements delivered by the stream. Thereby, it performs the saturation in an incremental manner. Experimental analysis shows that our solution outperforms existing bulk-based saturation solutions.
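A simplified, assumed sketch of the incremental saturation the abstract describes, modeling only two RDFS inference rules (transitivity of rdfs:subClassOf, and propagation of rdf:type along rdfs:subClassOf): fresh triples are closed against the already-saturated set instead of re-saturating the whole dataset.

```python
SUB, TYPE = "rdfs:subClassOf", "rdf:type"

def saturate_increment(saturated, fresh):
    """Return the new triples entailed when `fresh` arrives."""
    pending, derived = list(fresh), set(fresh) - saturated
    while pending:
        s, p, o = pending.pop()
        pool = saturated | derived
        candidates = set()
        if p == SUB:
            candidates |= {(s, SUB, x) for (a, q, x) in pool if q == SUB and a == o}
            candidates |= {(a, SUB, o) for (a, q, x) in pool if q == SUB and x == s}
            candidates |= {(i, TYPE, o) for (i, q, c) in pool if q == TYPE and c == s}
        elif p == TYPE:
            candidates |= {(s, TYPE, x) for (a, q, x) in pool if q == SUB and a == o}
        for t in candidates:
            if t not in saturated and t not in derived:
                derived.add(t)
                pending.append(t)
    return derived

base = {("Car", SUB, "Vehicle")}           # already saturated
new = saturate_increment(base, {("tesla", TYPE, "Car")})
print(sorted(new))  # includes the inferred ('tesla', 'rdf:type', 'Vehicle')
```

The paper's contribution is doing this at scale on a big data platform, including the harder case where the fresh triples are schema (rdfs) statements that retroactively entail new facts about old instance data.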
Mixing Set and Bag Semantics
Wilmer Ricciotti and James Cheney
(University of Edinburgh, UK)
The conservativity theorem for nested relational calculus implies that query expressions can freely use nesting and unnesting, yet as long as the query result type is a flat relation, these capabilities do not lead to an increase in expressiveness over flat relational queries. Moreover, Wong showed how such queries can be translated to SQL via a constructive rewriting algorithm. While this result holds for queries over either set or multiset semantics, to the best of our knowledge, the questions of conservativity and normalization have not been studied for queries that mix set and bag collections, or provide duplicate-elimination operations such as SQL's SELECT DISTINCT. In this paper we formalize the problem, and present partial progress: specifically, we introduce a calculus with both set and multiset collection types, along with natural mappings from sets to bags and vice versa, present a set of valid rewrite rules for normalizing such queries, and give an inductive characterization of a set of queries whose normal forms can be translated to SQL. We also consider examples that do not appear straightforward to translate to SQL, illustrating that the relative expressiveness of flat and nested queries with mixed set and multiset semantics remains an open question.
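The two "natural mappings" the abstract mentions can be illustrated concretely (an assumed encoding, with bags as multiplicity maps): duplicate elimination maps a bag to a set, and the inclusion maps a set to a bag where every element has multiplicity one.

```python
from collections import Counter

def distinct(bag: Counter) -> frozenset:
    """Bag -> set: duplicate elimination, as in SQL's SELECT DISTINCT."""
    return frozenset(bag)

def promote(s: frozenset) -> Counter:
    """Set -> bag: natural inclusion with multiplicity 1."""
    return Counter({x: 1 for x in s})

bag = Counter({"a": 3, "b": 1})                 # bag semantics counts duplicates
print(distinct(bag) == frozenset({"a", "b"}))                # True
print(promote(distinct(bag)) == Counter({"a": 1, "b": 1}))   # True
s = frozenset({"x", "y"})
print(distinct(promote(s)) == s)                             # True
```

Note the asymmetry visible above: `distinct . promote` is the identity on sets, but `promote . distinct` collapses multiplicities, and it is the interaction of such operations inside queries that makes normalization and translation to SQL subtle.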
