Code smells for model-view-controller architectures (2018)
Code smells are poor design and implementation choices that hinder the comprehensibility and maintainability of code. Aniche, Bavota, Treude, Gerosa, and Van Deursen introduce a catalog of six code smells in Web applications that use the model-view-controller pattern.
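To get a feel for the kind of smell such a catalog describes, here’s a minimal sketch in plain Java (all names are invented, and this illustrates the general idea rather than an example from the paper): a controller that does validation, business rules, persistence, and response formatting itself instead of delegating to the model and view layers.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical "smelly" controller: everything lives in one place.
class OrderController {
    // Persistence leaking into the controller (stand-in for a repository).
    private final Map<String, Double> orderStore = new HashMap<>();

    String placeOrder(String customerId, double amount) {
        if (customerId == null || customerId.isEmpty()) {
            return "error: missing customer";                // input validation
        }
        double charged = amount > 100 ? amount - 5 : amount; // business rule
        orderStore.put(customerId, charged);                 // persistence
        return "ok: " + charged;                             // view concern
    }
}

public class SmellDemo {
    public static void main(String[] args) {
        // In a well-factored MVC app the controller would only translate the
        // request and delegate; the rest belongs to the model and the view.
        System.out.println(new OrderController().placeOrder("c42", 120.0));
    }
}
```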
Predicting estimated time of arrival for commercial flights (2018)
Arrival times for commercial flights are currently estimated using deterministic models that fail to account for many variables that affect flight time. Ayhan, Costas, and Samet created a model that leverages these traditionally overlooked variables to make more accurate arrival time predictions.
Fake news vs satire: A dataset and analysis (2018)
Social media platforms have a moral obligation to prevent the spread of fake news, while still allowing users to freely share satire. At the scale these platforms operate, that is only feasible with automated content filters. Golbeck et (many, many) al. compiled a dataset of fake news and satire, and built a simple classifier that can tell the two types of stories apart.
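To give a sense of what a “simple classifier” can look like, here’s a minimal sketch of a bag-of-words multinomial naive Bayes classifier in Java. Everything in it is invented for illustration (class name, training snippets, tokenization), and the authors’ actual features and model details differ.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Minimal multinomial naive Bayes over bag-of-words; illustrative only. */
public class NaiveBayesSketch {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Map<String, Integer> totalWords = new HashMap<>();
    private final Set<String> vocabulary = new HashSet<>();
    private int totalDocs = 0;

    void train(String label, String text) {
        totalDocs++;
        docCounts.merge(label, 1, Integer::sum);
        Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String word : text.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            counts.merge(word, 1, Integer::sum);
            totalWords.merge(label, 1, Integer::sum);
            vocabulary.add(word);
        }
    }

    String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String label : docCounts.keySet()) {
            // Log prior plus per-word log likelihoods with Laplace smoothing.
            double score = Math.log(docCounts.get(label) / (double) totalDocs);
            for (String word : text.toLowerCase().split("\\W+")) {
                if (word.isEmpty()) continue;
                int count = wordCounts.get(label).getOrDefault(word, 0);
                score += Math.log((count + 1.0) / (totalWords.get(label) + vocabulary.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    public static void main(String[] args) {
        NaiveBayesSketch nb = new NaiveBayesSketch();
        nb.train("fake", "shocking secret cure doctors hide from you");
        nb.train("satire", "area man heroically refuses to read article");
        System.out.println(nb.classify("doctors hide this shocking secret"));
    }
}
```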
Does goal-oriented requirements engineering achieve its goal? (2017)
Building things right is important, but so is building the right thing. There’s plenty of research that suggests that goal-based thinking helps create better requirements, but practitioners don’t appear to be entirely convinced yet. Mavin et al. tried to better understand this gap between academia and industrial practice.
A large-scale empirical study on linguistic antipatterns affecting APIs (2018)
Last week’s summary showed that it’s hard to quantify the understandability of code. This week we look at a much simpler problem: the consequences of badly named methods in an API. Aghajani, Nagy, Bavota, and Lanza found that badly named methods have some impact on bugs, though it’s not yet clear why.
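Two invented Java examples show the flavor of these naming problems, borrowed in spirit from the linguistic-antipattern literature this study builds on (the class and methods below are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Each method name promises one thing; the body does another.
class UserRegistry {
    private final List<String> users = new ArrayList<>();

    // A "get" method that does not return: callers expect an accessor,
    // but this one silently mutates state instead.
    void getUsers() {
        users.add("default");
    }

    // An "is" method that doesn't answer yes/no: the name suggests a
    // boolean, the return type says otherwise.
    int isEmpty() {
        return users.size();
    }
}

public class NamingDemo {
    public static void main(String[] args) {
        UserRegistry registry = new UserRegistry();
        registry.getUsers();                    // surprise: adds a user
        System.out.println(registry.isEmpty()); // surprise: prints a count
    }
}
```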
Automatically assessing code understandability: How far are we? (2017)
Programmers spend much of their time reading code, so it’s important that code is easy to understand. It would be nice if we could automatically calculate how understandable a piece of code is – unfortunately, Scalabrino et al. discovered that existing metrics aren’t good at predicting it.
Software development waste (2017)
It’s not always easy to eliminate waste in software development, not least because it can be hard to identify. Sedano, Ralph, and Péraire collected well over two years’ worth of data from interviews and observations at Pivotal, and discovered nine types of waste in software development.
Belief & evidence in empirical software engineering (2016)
Programmers can hold very strong beliefs about certain topics. Devanbu, Zimmermann, and Bird conducted a case study on such beliefs at Microsoft, and found that they are mostly based on personal experience rather than empirical evidence, and don’t necessarily correspond with what happens in reality.
How not to structure your database-backed web applications: A study of performance bugs in the wild (2018)
By abstracting away the details of querying databases, Object Relational Mapping (ORM) frameworks allow us to build Web applications efficiently. Sadly, those same abstractions can also prevent those applications from running efficiently. Yang, Subramaniam, Lu, Yan, and Cheung identified nine such performance anti-patterns and ways to mitigate them.
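One classic specimen in this family is the “N+1 query” problem, where an innocent-looking loop silently issues one query per object. The sketch below simulates it in plain Java, with a counter standing in for database round trips (no real ORM or database involved; the names are invented, and this illustrates the general pattern rather than reproducing one of the paper’s anti-patterns).

```java
import java.util.List;

/**
 * Plain-Java simulation of the "N+1 query" ORM anti-pattern: the ORM makes
 * each relation access look like cheap memory access, but lazily loading a
 * relation inside a loop issues one database round trip per object.
 */
public class NPlusOneDemo {
    static int queriesIssued = 0;

    // Stand-in for an ORM query; in a real app this hits the database.
    static List<String> findAllAuthors() {
        queriesIssued++;
        return List.of("alice", "bob", "carol");
    }

    // Stand-in for lazily loading a relation (author -> posts).
    static int loadPostCount(String author) {
        queriesIssued++;        // one extra query per author: the hidden cost
        return author.length(); // dummy value
    }

    public static void main(String[] args) {
        int total = 0;
        for (String author : findAllAuthors()) { // 1 query
            total += loadPostCount(author);      // +N queries, one per author
        }
        System.out.println("posts: " + total + ", queries issued: " + queriesIssued);
        // The usual fix is to fetch the relation eagerly in a single query
        // (a join, or the ORM's fetch-join/"includes" feature), turning
        // N+1 round trips into one or two.
    }
}
```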
Do you remember this source code? (2018)
Developers occasionally get questions about code that they have written. Such questions are not always easy to answer, especially if that code was written a long time ago. Krüger, Wiemann, Fenske, Saake, and Leich used an online survey to study how developers lose familiarity with “their” source code over time.
Enhancing person-job fit for talent recruitment: An ability-aware neural network approach (2018)
Person-job fit is the extent to which a candidate is well-suited for a position. Determining it is currently a laborious, manual task. Qin et al. introduce a novel approach that uses neural networks to automatically predict person-job fit based on information extracted from job postings, resumes, and previously filled vacancies.
An industrial evaluation of unit test generation: Finding real faults in a financial application (2017)
Writing tests isn’t something that many developers enjoy, and clients generally don’t like spending money on testing either. Could we try to automate it? Almasi, Hemmati, Fraser, Arcuri, and Benefelds compared two unit test generation tools for Java, and concluded that while they do work, you’ll still have to write tests manually for now.
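Part of the reason generated tests still need human help: such tools generally assert the code’s current behaviour, not its intended behaviour. Here’s an invented sketch of what that looks like (plain Java instead of a test framework so it runs standalone; this is not actual tool output):

```java
/**
 * Invented sketch of a machine-generated, regression-style test. Tools in
 * this space call the code under test with generated inputs and assert
 * whatever it currently returns; if the current behaviour is buggy, the
 * generated assertion happily enshrines the bug.
 */
public class GeneratedTestSketch {
    // Code under test, buggy on purpose: the fee should be 1%, not 10%.
    static double withdrawalFee(double amount) {
        return amount * 0.10;
    }

    public static void main(String[] args) {
        // A human who knows the spec would assert 1.0 here and catch the bug.
        // A generator that only knows the code records the observed value:
        double observed = withdrawalFee(100.0);
        if (observed != 10.0) {
            throw new AssertionError("regression: behaviour changed");
        }
        System.out.println("generated test passes; the 10x fee bug survives");
    }
}
```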