AI Summary • Published on Dec 13, 2025
Software Engineering (SE) research frequently lacks practical relevance, largely because research problems are poorly formulated and fail to reflect real-world industrial complexity. Existing technology transfer models often overlook the critical early stage of problem formulation, and most efforts focus on evaluating research results rather than establishing robust problem-definition processes from the outset. This gap reflects a persistent disconnect between academic contributions and industry needs, and it calls for structured approaches that ground research problems in practical realities.
This paper proposes the integration of artificial intelligence (AI) agents to enhance the early-stage formulation of research problems within Software Engineering, building upon the Lean Research Inception (LRI) framework. The methodology involves a descriptive evaluation through a practical scenario, referencing a published study on code maintainability in machine learning projects. The scenario illustrates how AI agents, when integrated into LRI, can support SE researchers across five key phases:

1. Problem Vision Outline: AI agents assist in pre-filling seven problem attributes (e.g., practical problem, context, implications) by synthesizing scientific literature and industry reports, reducing cognitive load.
2. Problem Vision Alignment: during collaborative workshops, AI agents mediate discussions by providing illustrative examples and simulating how different professional profiles perceive the problem, fostering shared understanding.
3. Research Problem Formulation: the multi-agent AI architecture refines the documented problem, identifying inconsistencies, suggesting complementary research questions, and comparing it with similar challenges.
4. Research Problem Assessment: AI-generated stakeholder simulations enrich the LRI's semantic differential scale by offering detailed justifications for value, feasibility, and applicability assessments, encouraging critical reflection.
5. Go/Pivot/Abort Decision: AI agents analyze risks, feasibility trends, and scenario-based projections to support evidence-based strategic decisions on whether to continue, adjust, or abandon the research.
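To make the five-phase workflow concrete, the sketch below models it as a minimal Python pipeline. It is purely illustrative: the paper describes the AI agents only at a conceptual level, so every name here (ProblemVision, literature_agent, stakeholder_agent, assess, decide, the 1-7 ratings, the decision threshold) is a hypothetical stand-in rather than the authors' implementation. Only three of the seven LRI problem attributes are named in the summary, so the remaining four are left as a free-form mapping.

```python
"""Illustrative sketch of an AI-agent-supported LRI flow (all names hypothetical)."""
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict


@dataclass
class ProblemVision:
    # Three of the seven LRI problem attributes are named in the summary
    # (practical problem, context, implications); the other four are not
    # enumerated there, so they are kept as a free-form mapping.
    practical_problem: str = ""
    context: str = ""
    implications: str = ""
    other_attributes: Dict[str, str] = field(default_factory=dict)


class Decision(Enum):
    GO = "go"
    PIVOT = "pivot"
    ABORT = "abort"


# An "agent" is modeled as a callable that refines the shared problem vision;
# a real system would wrap an LLM call behind this interface.
Agent = Callable[[ProblemVision], ProblemVision]


def literature_agent(vision: ProblemVision) -> ProblemVision:
    # Phase 1 (Problem Vision Outline): pre-fill attributes from literature
    # and industry reports. Stubbed here with the paper's reference scenario.
    vision.practical_problem = vision.practical_problem or (
        "Code maintainability issues in machine learning projects")
    vision.context = vision.context or "Industrial teams maintaining ML pipelines"
    return vision


def stakeholder_agent(vision: ProblemVision) -> ProblemVision:
    # Phases 2-3 (Alignment and Formulation): simulate how different
    # professional profiles read the problem and fill in missing implications.
    vision.implications = vision.implications or (
        "Higher maintenance effort and slower delivery of model updates")
    return vision


def assess(vision: ProblemVision) -> Dict[str, int]:
    # Phase 4 (Assessment): semantic-differential-style ratings (1-7) for
    # value, feasibility, and applicability; the constants stand in for
    # AI-justified ratings gathered from simulated stakeholders.
    return {"value": 6, "feasibility": 5, "applicability": 6}


def decide(scores: Dict[str, int], threshold: int = 4) -> Decision:
    # Phase 5 (Go/Pivot/Abort): a deliberately crude decision rule; the paper
    # envisions richer risk and feasibility analysis at this step.
    if all(score >= threshold for score in scores.values()):
        return Decision.GO
    if any(score >= threshold for score in scores.values()):
        return Decision.PIVOT
    return Decision.ABORT


if __name__ == "__main__":
    vision = ProblemVision()
    for agent in (literature_agent, stakeholder_agent):  # phases 1-3
        vision = agent(vision)
    scores = assess(vision)                              # phase 4
    print(vision.practical_problem, scores, decide(scores).value)
```

Modeling each agent as a plain callable keeps the sketch independent of any particular LLM framework; swapping real model calls into the function bodies would not change the overall flow.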
The descriptive evaluation of the scenario suggests that integrating AI agents can substantially enrich collaborative discussions and strengthen critical reflection among researchers and practitioners. This support improves the assessment of a research problem's value, feasibility, and applicability, leading to a more comprehensive problem definition. The scenario also points to promising opportunities for AI to serve as an intelligent support system for formulating contextualized, industry-aligned research problems.
Integrating AI agents into the LRI framework is envisioned as a promising step towards fostering SE studies with greater practical relevance and a more holistic understanding of complex phenomena. By leveraging AI's ability to synthesize knowledge, anticipate stakeholder perspectives, and support critical reflection, the LRI framework could become more robust and adaptable, ultimately helping to bridge the gap between academia and industry. However, the authors emphasize that this is a vision paper, and empirical validation is crucial to confirm, refine, and expand the practical application and effectiveness of AI agents in research problem formulation.