6th ACM SIGPLAN International Symposium on Machine Programming (MAPS 2022)
June 13, 2022, San Diego, CA, USA
Frontmatter
Welcome from the Chairs
Welcome to the 2022 edition of the ACM SIGPLAN Machine Programming Symposium
(MAPS), formerly named MAPL, co-located with PLDI on June 13, 2022. The focus of MAPS is
to advance machine programming by leveraging interdisciplinary research across the fields of
machine learning (ML) and programming languages (PL). This year’s program consists of a
mix of technical research papers and invited talks by top researchers in the field of machine
programming.
Papers
A Systematic Evaluation of Large Language Models of Code
Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn
(Carnegie Mellon University, USA)
Large language models (LMs) of code have recently shown tremendous promise in completing code and synthesizing code from natural language descriptions. However, the current state-of-the-art code LMs (e.g., Codex) are not publicly available, leaving many questions about their model and data design decisions. We aim to fill in some of these blanks through a systematic evaluation of the largest existing models: Codex, GPT-J, GPT-Neo, GPT-NeoX-20B, and CodeParrot, across various programming languages. Although Codex itself is not open-source, we find that existing open-source models do achieve close results in some programming languages, although they are targeted mainly at natural language modeling. We further identify an important missing piece in the form of a large open-source model trained exclusively on a multi-lingual corpus of code. We release a new model, PolyCoder, with 2.7B parameters based on the GPT-2 architecture, which was trained on 249GB of code across 12 programming languages on a single machine. In the C programming language, PolyCoder outperforms all models, including Codex. Our trained models are open-source and publicly available at https://github.com/VHellendoorn/Code-LMs, which enables future research and application in this area. We have an online appendix at https://arxiv.org/abs/2202.13169.
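As an illustrative sketch (not part of the paper), the released checkpoints can be sampled with Hugging Face Transformers; the checkpoint identifier below is an assumption based on the project repository and may differ:

# Minimal sketch: sampling a completion from PolyCoder with Transformers.
# The checkpoint name is assumed from the project repository; verify before use.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B")

prompt = "int binary_search(int *arr, int n, int target) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0]))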
@InProceedings{MAPS22p1,
author = {Frank F. Xu and Uri Alon and Graham Neubig and Vincent Josua Hellendoorn},
title = {A Systematic Evaluation of Large Language Models of Code},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {1--10},
doi = {10.1145/3520312.3534862},
year = {2022},
}
Publisher's Version
A Graph Neural Network-Based Performance Model for Deep Learning Applications
Shikhar Singh, James Hegarty, Hugh Leather, and Benoit Steiner
(Meta, USA)
The unprecedented proliferation of machine-learning-based software brings an ever-increasing need to optimize the implementation of such applications. State-of-the-art compilers for neural networks, such as Halide and TVM, incorporate a machine-learning-based performance model to search the space of valid implementations of a given deep learning algorithm. For a given application, the model predicts the value of performance metrics such as the run time without executing the application on hardware. Such models speed up the compilation process by obviating the need to benchmark an enormous number of candidate implementations, referred to as schedules, on hardware. Existing performance models employ feed-forward networks, recurrent networks, or decision tree ensembles to estimate the performance of different implementations of a neural network. Graphs present a natural and intuitive way to model deep-learning networks, where each node represents a computational stage or operation. Incorporating the inherent graph structure of these workloads in the performance model can enable a better representation and learning of inter-stage interactions. The accuracy of the performance model has direct implications for the efficiency of the search strategy, making it a crucial component of this class of deep-learning compilers. In this work, we develop a novel performance model that adopts a graph representation. In our model, each stage of computation represents a node characterized by features that capture the operations performed by the stage. The interaction between nodes is achieved using graph convolutions. Experimental evaluation shows a 7.75× and 12× reduction in prediction error compared to the existing Halide and TVM models, respectively.
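As a minimal sketch of the idea (an assumed toy architecture, not the paper's implementation), a performance model over a schedule graph can mix per-stage features with one round of graph convolution and pool the result into a runtime estimate:

# Toy graph-convolution performance model; shapes and names are illustrative.
import torch
import torch.nn as nn

class StageGCN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, 1)  # scalar runtime prediction

    def forward(self, x, adj):
        # x: [num_stages, in_dim] per-stage features
        # adj: [num_stages, num_stages] normalized adjacency of the stage graph
        h = torch.relu(self.lin(adj @ x))  # one round of message passing
        return self.out(h.mean(dim=0))     # pool stages into one estimate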
@InProceedings{MAPS22p11,
author = {Shikhar Singh and James Hegarty and Hugh Leather and Benoit Steiner},
title = {A Graph Neural Network-Based Performance Model for Deep Learning Applications},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {11--20},
doi = {10.1145/3520312.3534863},
year = {2022},
}
Publisher's Version
Productivity Assessment of Neural Code Completion
Albert Ziegler, Eirini Kalliamvakou, X. Alice Li, Andrew Rice, Devon Rifkin, Shawn Simister, Ganesh Sittampalam, and Edward Aftandilian
(GitHub, USA)
Neural code synthesis has reached a point where snippet generation is accurate enough to be considered for integration into human software development workflows. Commercial products aim to increase programmers’ productivity without being able to measure it directly. In this case study, we asked users of GitHub Copilot about its impact on their productivity and sought to find a reflection of their perception in directly measurable user data. We find that the rate at which shown suggestions are accepted, rather than more specific metrics regarding the persistence of completions in the code over time, drives developers’ perception of productivity.
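As an illustrative sketch (the event schema below is hypothetical, not GitHub's telemetry format), the headline metric is simply accepted suggestions over shown suggestions:

# Hypothetical completion-event log; computes the acceptance rate the study
# finds most predictive of perceived productivity.
def acceptance_rate(events):
    shown = sum(1 for e in events if e["type"] == "shown")
    accepted = sum(1 for e in events if e["type"] == "accepted")
    return accepted / shown if shown else 0.0

events = [{"type": "shown"}, {"type": "accepted"}, {"type": "shown"}]
print(acceptance_rate(events))  # 0.5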
@InProceedings{MAPS22p21,
author = {Albert Ziegler and Eirini Kalliamvakou and X. Alice Li and Andrew Rice and Devon Rifkin and Shawn Simister and Ganesh Sittampalam and Edward Aftandilian},
title = {Productivity Assessment of Neural Code Completion},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {21--29},
doi = {10.1145/3520312.3534864},
year = {2022},
}
Publisher's Version
From Perception to Programs: Regularize, Overparameterize, and Amortize
Hao Tang and Kevin Ellis
(Cornell University, USA)
Toward combining inductive reasoning with perception abilities, we develop techniques for neurosymbolic program synthesis where perceptual input is first parsed by neural nets into a low-dimensional interpretable representation, which is then processed by a synthesized program. We explore several techniques for relaxing the problem and jointly learning all modules end-to-end with gradient descent: multitask learning; amortized inference; overparameterization; and a differentiable strategy for penalizing lengthy programs. Collectively, this toolbox improves the stability of gradient-guided program search and suggests ways of learning both how to perceive input as discrete abstractions and how to symbolically process those abstractions as programs.
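One ingredient can be sketched concretely (a toy relaxation, not the paper's system): soften the discrete choice among candidate programs with a softmax and penalize expected program length alongside the task loss, so that gradients flow to the program scores:

# Toy differentiable length penalty over three hypothetical candidate programs.
import torch

logits = torch.zeros(3, requires_grad=True)   # learnable scores per candidate
lengths = torch.tensor([2.0, 5.0, 9.0])       # program lengths (illustrative)
task_losses = torch.tensor([0.9, 0.1, 0.05])  # fit to data (illustrative)

weights = torch.softmax(logits, dim=0)
loss = (weights * task_losses).sum() + 0.01 * (weights * lengths).sum()
loss.backward()  # gradient descent can now trade accuracy against length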
@InProceedings{MAPS22p30,
author = {Hao Tang and Kevin Ellis},
title = {From Perception to Programs: Regularize, Overparameterize, and Amortize},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {30--39},
doi = {10.1145/3520312.3534865},
year = {2022},
}
Publisher's Version
Predictive Synthesis of API-Centric Code
Daye Nam, Baishakhi Ray, Seohyun Kim, Xianshan Qu, and Satish Chandra
(Carnegie Mellon University, USA; Columbia University, USA; Meta, USA)
Today’s programmers, especially data science practitioners, make heavy use of data-processing libraries (APIs) such as PyTorch, TensorFlow, NumPy, and the like. Program synthesizers can provide significant coding assistance to this community of users; however, program synthesis can also be slow due to enormous search spaces.
In this work, we examine ways in which machine learning can be used to accelerate enumerative program synthesis. We present a deep-learning-based model to predict the sequence of API functions that would be needed to go from a given input to a desired output, both being numeric vectors. Our work is based on two insights. First, it is possible to learn, based on a large number of input-output examples, to predict the likely API function needed in a given situation. Second, and importantly, it is also possible to learn to compose API functions into a sequence, given an input and the desired final output, without explicitly knowing the intermediate values.
We show that we can speed up an enumerative synthesizer by using predictions from our model variants. These speedups significantly outperform previous approaches (e.g., DeepCoder) to using ML models in enumerative synthesis.
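The guided enumeration can be sketched as best-first search over API sequences, ordered by the model's predicted probabilities; the model interface and API table below are hypothetical stand-ins:

# Best-first enumeration guided by a learned next-API distribution.
import heapq
import itertools
import numpy as np

def synthesize(inp, out, apis, model, max_len=4):
    tie = itertools.count()  # tie-breaker so the heap never compares arrays
    frontier = [(0.0, next(tie), [], inp)]
    while frontier:
        neg_logprob, _, seq, val = heapq.heappop(frontier)
        if np.allclose(val, out):
            return seq  # found a sequence mapping input to output
        if len(seq) >= max_len:
            continue
        for name, p in model.next_api_probs(inp, out, seq):  # hypothetical
            heapq.heappush(frontier,
                           (neg_logprob - np.log(p), next(tie),
                            seq + [name], apis[name](val)))
    return None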
@InProceedings{MAPS22p40,
author = {Daye Nam and Baishakhi Ray and Seohyun Kim and Xianshan Qu and Satish Chandra},
title = {Predictive Synthesis of API-Centric Code},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {40--49},
doi = {10.1145/3520312.3534866},
year = {2022},
}
Publisher's Version
ExeBench: An ML-Scale Dataset of Executable C Functions
Jordi Armengol-Estapé, Jackson Woodruff, Alexander Brauckmann, José Wesley de Souza Magalhães, and Michael F. P. O'Boyle
(University of Edinburgh, UK)
Machine learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of the runnable, real-world datasets required for tasks ranging from neural program synthesis to machine-learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions, and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics and show that it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine-learning-based programming tasks.
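As a sketch of consuming the dataset (the Hub identifier, split name, and schema access below are assumptions; check the ExeBench release for the actual layout):

# Iterating a few ExeBench entries via the Hugging Face datasets library.
from datasets import load_dataset

ds = load_dataset("jordiae/exebench", split="test_real")  # identifiers assumed
for row in ds.select(range(3)):
    print(row.keys())  # inspect the schema: function source, IO examples, etc.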
@InProceedings{MAPS22p50,
author = {Jordi Armengol-Estapé and Jackson Woodruff and Alexander Brauckmann and José Wesley de Souza Magalhães and Michael F. P. O'Boyle},
title = {ExeBench: An ML-Scale Dataset of Executable C Functions},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {50--59},
doi = {10.1145/3520312.3534867},
year = {2022},
}
Publisher's Version
Automatically Debugging AutoML Pipelines using Maro: ML Automated Remediation Oracle
Julian Dolby, Jason Tsay, and Martin Hirzel
(IBM Research, USA)
Machine learning in practice often involves complex pipelines for data cleansing, feature engineering, preprocessing, and prediction. These pipelines are composed of operators, which have to be correctly connected and whose hyperparameters must be correctly configured. Unfortunately, it is quite common for certain combinations of datasets, operators, or hyperparameters to cause failures. Diagnosing and fixing those failures is tedious and error-prone and can seriously derail a data scientist's workflow. This paper describes an approach for automatically debugging an ML pipeline, explaining the failures, and producing a remediation. We implemented our approach, which builds on a combination of AutoML and SMT, in a tool called Maro. Maro works seamlessly with the familiar data science ecosystem, including Python, Jupyter notebooks, scikit-learn, and AutoML tools such as Hyperopt. We empirically evaluate our tool and find that for most cases, a single remediation automatically fixes errors, produces no additional faults, and does not significantly impact optimal accuracy or time to convergence.
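As an illustrative example of the failure class targeted here (a generic scikit-learn sketch, not Maro's API): a pipeline whose operators are individually fine but incompatible as configured, plus the kind of one-line remediation the tool aims to produce:

# A pipeline failure and its remediation (requires scikit-learn >= 1.2 for
# the sparse_output parameter name).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.naive_bayes import GaussianNB

X = np.array([["a"], ["b"], ["a"]])
y = np.array([0, 1, 0])
try:
    make_pipeline(OneHotEncoder(), GaussianNB()).fit(X, y)
except TypeError as err:
    print(err)  # GaussianNB rejects the sparse one-hot output
# Remediation: reconfigure the encoder to emit dense arrays.
make_pipeline(OneHotEncoder(sparse_output=False), GaussianNB()).fit(X, y)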
@InProceedings{MAPS22p60,
author = {Julian Dolby and Jason Tsay and Martin Hirzel},
title = {Automatically Debugging AutoML Pipelines using Maro: ML Automated Remediation Oracle},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {60--69},
doi = {10.1145/3520312.3534868},
year = {2022},
}
Publisher's Version
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models
Md Rafiqul Islam Rabin, Aftab Hussain, and Mohammad Amin Alipour
(University of Houston, USA)
Neural code intelligence (CI) models are opaque black boxes and offer little insight into the features they use when making predictions. This opacity may lead to distrust in their predictions and hamper their wider adoption in safety-critical applications. Recently, input program reduction techniques have been proposed to identify key features in the input programs to improve the transparency of CI models. However, this approach is syntax-unaware and does not consider the grammar of the programming language.
In this paper, we apply a syntax-guided program reduction technique that considers the grammar of the input programs during reduction. Our experiments on multiple models across different types of input programs show that the syntax-guided program reduction technique is faster and provides smaller sets of key tokens in reduced programs. We also show that the key tokens could be used in generating adversarial examples for up to 65% of the input programs.
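The core loop can be sketched as grammar-safe delta debugging (helper names below are hypothetical stand-ins for a real parser and CI model): delete whole parse-tree subtrees, keeping a candidate only if the model's prediction is preserved:

# Syntax-guided reduction sketch; removable_subtrees, delete_subtree, and
# to_source are hypothetical helpers over a grammar-derived parse tree.
def reduce_program(tree, model, target_label):
    changed = True
    while changed:
        changed = False
        for node in removable_subtrees(tree):
            candidate = delete_subtree(tree, node)  # stays grammatical
            if model.predict(to_source(candidate)) == target_label:
                tree, changed = candidate, True
                break
    return tree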
@InProceedings{MAPS22p70,
author = {Md Rafiqul Islam Rabin and Aftab Hussain and Mohammad Amin Alipour},
title = {Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models},
booktitle = {Proc.\ MAPS},
publisher = {ACM},
pages = {70--79},
doi = {10.1145/3520312.3534869},
year = {2022},
}
Publisher's Version