JIT feedback – what experienced developers like about static analysis (2018)
Running static analysis tools in an integrated development environment is like having a peer sitting next to you who continuously reviews and criticises your code. Tymchuk, Ghafari, and Nierstrasz studied whether and why developers think these tools are helpful.
Micro-clones in evolving software (2018)
Codebases often contain code clones: code fragments that are very similar or even completely identical to each other. Until now, only larger clones have been studied thoroughly – not much is known about micro-clones, which are only 1–4 lines of code. Mondal, Roy, and Schneider show that these micro-clones are quite widespread.
Deep code comment generation (2018)
Machine learning models can be used to find relevant code snippets for a natural language description. Does that mean we can also do the opposite and predict natural language descriptions for code snippets that lack comments? Hu, Li, Xia, Lo, and Jin designed a model that does just that.
Deep code search (2018)
Text search is something that (mostly) “just works”, but the same can’t be said of code search. Gu, Zhang, and Kim present a deep neural network that can be used to retrieve code snippets based on natural language queries and a proof-of-concept application that demonstrates the feasibility of this approach.
Loud and interactive paper prototyping in requirements elicitation: What is it good for? (2018)
Paper prototyping can be used to elicit requirements for user-facing applications or evaluate user interface designs. There are several ways to do paper prototyping. Shakeri, Moazzam, Lo, Lan, Frroku, and Kim investigated how interactive and “loud” paper prototyping can be combined to achieve better results.
Learning from mistakes: An empirical study of elicitation interviews performed by novices (2018)
Interviewing is one of the most versatile tools for requirements elicitation. Sadly, it’s also notoriously hard to master. Bano, Zowghi, Ferrari, Spoletini, and Donati studied interviews conducted by postgraduate students, and categorised the different types of interviewing mistakes that one should avoid.
Are developers aware of the architectural impact of their changes? (2017)
Architecturally clean systems are easier to maintain. Changes to a system therefore shouldn’t degrade its architecture. Paixao, Krinke, Han, Ragkhitwetsagul, and Harman studied four large projects to better understand whether and how developers take the system’s architecture into account when making changes.
Are code examples on an online Q&A forum reliable? A study of API misuse on Stack Overflow (2018)
Many posts on Stack Overflow contain code snippets that show how a library can be used to achieve a certain task. Zhang, Upadhyaya, Reinhardt, Rajan, and Kim mined GitHub for API usage “best practices” and conclude that it’s probably not a good idea to reuse online code snippets verbatim.
Building a theory of job rotation in software engineering from an instrumental case study (2016)
Managers sometimes rotate employees to new projects, as it’s thought to benefit both them and the company. Santos, Da Silva, De Magalhães, and Monteiro conducted a case study at a Brazilian software company that systematically applies job rotation, and describe how this practice actually impacts workers.
Code smells for model-view-controller architectures (2018)
Code smells are poor design and implementation choices that hinder the comprehensibility and maintainability of code. Aniche, Bavota, Treude, Gerosa, and Van Deursen introduce a catalog of six code smells in Web applications that make use of the model-view-controller pattern.
Predicting estimated time of arrival for commercial flights (2018)
Arrival times for commercial flights are currently estimated using deterministic models that fail to account for many variables that affect flight time. Ayhan, Costas, and Samet created a model that leverages these traditionally overlooked variables to make more accurate arrival time predictions.
Fake news vs satire: A dataset and analysis (2018)
Social media platforms have a moral obligation to prevent the spread of fake news, while still allowing users to freely share satire. This can only be achieved using advanced content filters. Golbeck et (many, many) al. compiled a dataset of fake news and satire, and built a simple classifier that can tell the two types of stories apart.