Software engineering is a human-centered activity involving various stakeholders with different backgrounds who have to communicate and collaborate to reach shared objectives. The emergence of conflicts among stakeholders may lead to undesired effects on software maintainability, yet it is often unavoidable in the long run. Community smells, i.e., sub-optimal communication and collaboration practices, have been defined to map recurrent conflicts among developers. While some community smell detection tools have been proposed in the recent past, they remain confined mainly to research use because of their limited usability and user engagement. To facilitate a wider use of community smell-related information by practitioners, we present CADOCS, a client-server conversational agent that builds on top of the community smell detection tool previously proposed by Almarimi et al. to (1) make it usable within a well-established communication channel such as Slack and (2) augment it by providing initial support for software analytics instruments useful to diagnose and refactor community smells. We describe the features of the tool and the preliminary evaluation conducted to assess and improve its robustness and usability.
@inproceedings{9978263,
  author={Voria, Gianmario and Pentangelo, Viviana and Della Porta, Antonio and Lambiase, Stefano and Catolino, Gemma and Palomba, Fabio and Ferrucci, Filomena},
  title={Community Smell Detection and Refactoring in SLACK: The CADOCS Project},
  booktitle={2022 IEEE International Conference on Software Maintenance and Evolution (ICSME)},
  year={2022},
  pages={469-473},
  doi={10.1109/ICSME55016.2022.00061}
}
Large Language Models (LLMs) are revolutionizing the landscape of Artificial Intelligence (AI) due to recent technological breakthroughs. Their remarkable success in aiding various Software Engineering (SE) tasks through AI-powered tools and assistants has led to the integration of LLMs as active contributors within development teams, ushering in novel modes of communication and collaboration. However, with great power comes great responsibility: ensuring that these models meet fundamental ethical principles such as fairness is still an open challenge. In this light, our vision paper analyzes the existing body of knowledge to propose a conceptual model that frames the ethical, social, and cultural considerations researchers and practitioners should account for when defining, employing, and validating LLM-based approaches for software engineering tasks.