Does goal-oriented requirements engineering achieve its goal? (2017)
Building things right is important, but so is building the right thing. There’s plenty of research that suggests that goal-based thinking helps create better requirements, but practitioners don’t appear to be entirely convinced yet. Mavin et al. tried to better understand this gap between academia and industrial practice.
A large-scale empirical study on linguistic antipatterns affecting APIs (2018)
Last week’s summary showed that it’s hard to quantify understandability of code. This week we look at a much simpler problem: the consequences of having bad method names in an API. Aghajani, Nagy, Bavota, and Lanza found that badly named methods have some impact on bugs, but it’s not yet clear why.
Automatically assessing code understandability: How far are we? (2017)
Programmers spend much of their time reading code, so it’s important that it’s easy to understand. It would be nice if we could automatically calculate the understandability of code – unfortunately, Scalabrino et al. discovered that existing metrics aren’t good at predicting code understandability.
Software development waste (2017)
It’s not always easy to eliminate waste in software development, not least because it can be hard to identify. Sedano, Ralph, and Péraire collected well over two years’ worth of data from interviews and observations at Pivotal, and discovered nine types of waste in software development.
Belief & evidence in empirical software engineering (2016)
Programmers can hold very strong beliefs about certain topics. Devanbu, Zimmermann, and Bird conducted a case study on such beliefs at Microsoft, and found that they are mostly based on personal experience rather than empirical evidence, and don’t necessarily correspond with what happens in reality.
How not to structure your database-backed web applications: A study of performance bugs in the wild (2018)
By abstracting away the details of querying databases, Object Relational Mapping (ORM) frameworks allow us to build web applications efficiently. Sadly, those same abstractions can also prevent them from running efficiently. Yang, Subramaniam, Lu, Yan, and Cheung identified nine performance anti-patterns and ways to mitigate them.
Do you remember this source code? (2018)
Developers occasionally get questions about code that they have written. Such questions are not always easy to answer, especially if that code was written a long time ago. Krüger, Wiemann, Fenske, Saake, and Leich used an online survey to study how developers lose familiarity with “their” source code over time.
Enhancing person-job fit for talent recruitment: An ability-aware neural network approach (2018)
Person-job fit is the extent to which a candidate is well-suited for a position. Determining this person-job fit is currently a laborious task. Qin et al. introduce a novel approach that uses neural networks to automatically estimate the person-job fit based on information extracted from job postings, resumes, and previously filled vacancies.
An industrial evaluation of unit test generation: Finding real faults in a financial application (2017)
Writing tests isn’t something that many developers enjoy, and clients generally don’t like spending money on testing either. Could we try to automate it? Almasi, Hemmati, Fraser, Arcuri, and Benefelds compared two unit test generation tools for Java, and conclude that while they do work, you’ll still have to write tests manually for now.
Online job search: Study of users’ search behavior using search engine query logs (2018)
This week’s paper by Mansouri, Zahedi, Campos, and Farhoodi shows how job seekers use search engines to find new jobs.
We don’t need another hero? The impact of “heroes” in software development (2018)
Hero developers are often thought to be bad for software projects, due to their tendency to write code on their own without collaborating with team members. This paper by Agrawal, Rahman, Krishna, Sobran, and Menzies attempts to debunk that myth, but also leaves the most important question unanswered.
Rethinking thinking aloud: A comparison of three think-aloud protocols (2018)
Think-aloud is one of the most popular methods used to evaluate usability of websites and other types of systems. Various think-aloud methods exist. Alhadreti and Mayhew compared three of them (concurrent, retrospective, and hybrid think-aloud) and found that one clearly outperforms the other two.