Conversational LLMs have been widely adopted by domain users with limited programming experience to solve domain problems. However, these users often face misalignment between their intent and the generated code, resulting in frustration and repeated rounds of clarification. This work first investigates the cause of this misalignment, which stems from bidirectional ambiguity: both user intents and coding tasks are inherently nonlinear, yet they must be expressed and interpreted through linear prompts and code sequences. To address this, we propose direct intent–task matching, a new human–LLM interaction paradigm that externalizes and enables direct manipulation of the LLM understanding, i.e., the coding tasks and their relationships inferred by the LLM prior to code generation. As a proof of concept, this paradigm is implemented in NeuroSync, which employs a knowledge distillation pipeline to extract LLM understanding, user intents, and their mappings, and enhances alignment by allowing users to intuitively inspect and edit them via visualizations. We evaluate the algorithmic components of NeuroSync through technical experiments, and assess its overall usability and effectiveness through a user study (N=12). The results show that NeuroSync enhances intent–task alignment, lowers cognitive effort, and improves coding efficiency.
Bidirectional ambiguity is a major reason why misalignment occurs, and graphs (node-link diagrams) provide an effective bridge between users' nonlinear intents and the LLM's nonlinear coding tasks.
Why misalignment?
Bidirectional ambiguity is a major cause of human–LLM misalignment in conversational coding. During a conversation with an LLM about a coding task, ambiguity flows in both directions:
(1) User-to-LLM: Users find it challenging to clearly express their needs, and the information the LLM requires, in their prompts. For example, converting the tree-like intent in Fig. into a linear prompt discards its structure and introduces ambiguity.
(2) LLM-to-User: Users struggle to understand the specific tasks and execution logic embedded in the code, making it difficult to issue precise modification requests. For example, in Fig., users must mentally reconstruct the code and its internal relationships, which is difficult; limited code-reading ability further compounds the ambiguity.
This bidirectional ambiguity compounds over turns, causing LLMs to produce code misaligned with user intent. As LLM capabilities grow and inference slows, the cost of these ineffective interactions increases.
How to cross the dual nonlinearity?
Task graphs were consistently viewed as more helpful. Participants highlighted two key benefits: (1) improved task comprehension through clear visualization of task dependencies and subgoals; (2) enhanced efficiency in locating key logic points and understanding the overall purpose of the code.
However, as interaction rounds increased, graph complexity grew and degraded interpretability. Systems therefore need dynamic graph simplification methods.
Just as humans develop their own interpretation of LLM outputs, we suggest that LLMs form a kind of understanding of user inputs. We call this LLM understanding: the tasks and their relationships implicitly encoded in the code that an LLM is expected to generate from user prompts.
We propose a new human–LLM interaction paradigm, direct intent–task matching, based on externalizing and modifying LLM understanding organized in graphs prior to code generation.
NeuroSync allows a user to directly manipulate a visual task graph on two levels via the user interface to correct an LLM's understanding before code generation. This interaction is kept responsive by a lightweight distillation pipeline, which fine-tunes a small model using data from a multi-agent system that simulates user behavior. To manage cognitive load, an intent-aware graph simplification algorithm dynamically collapses and highlights parts of the graph based on the user's focus.
Overview of NeuroSync, a proof-of-concept implementation of the direct intent–task matching paradigm. NeuroSync takes user prompts as input, extracts the LLM understanding, enables users to refine this understanding through graph-based visualizations, and feeds the refined understanding back to the LLM to generate code that more accurately aligns with user intents.
User interaction in NeuroSync supports two-level graph modification before code generation. Users can perform precise node-level edits (e.g., adding/modifying nodes) or use natural language commands for broad, graph-level structural changes.
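The node-level edits described above can be sketched as operations on a small task-graph structure. This is an illustrative data model, not the paper's actual implementation; the node and edge representations are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of node-level edits on an LLM "understanding"
# graph; the data model here is illustrative, not NeuroSync's own.

@dataclass
class TaskGraph:
    nodes: dict = field(default_factory=dict)   # id -> task description
    edges: set = field(default_factory=set)     # (src, dst) dependency pairs

    def upsert_node(self, node_id, description):
        # Node-level edit: add a new task, or modify an existing one.
        self.nodes[node_id] = description

    def remove_node(self, node_id):
        # Node-level edit: delete a task and its incident dependencies.
        self.nodes.pop(node_id, None)
        self.edges = {(s, d) for (s, d) in self.edges
                      if s != node_id and d != node_id}

    def add_edge(self, src, dst):
        # Only connect tasks that actually exist in the graph.
        if src in self.nodes and dst in self.nodes:
            self.edges.add((src, dst))

g = TaskGraph()
g.upsert_node("load", "Load the CSV dataset")
g.upsert_node("clean", "Drop rows with missing values")
g.add_edge("load", "clean")
g.upsert_node("clean", "Impute missing values instead")  # user correction
g.remove_node("load")
```

Graph-level changes via natural language would, in a full system, translate a command like "split the cleaning step into two tasks" into a batch of such node-level operations.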
Interface of NeuroSync. Users interact with the LLM through Panel A (LLM Conversation Panel). Before each LLM response, the system generates an LLM understanding graph in Panel B (Understanding Graph Manipulation Panel) and a simplified version in Panel C (Intent–Task Mapping View). Users can edit the task graph in Panel B and explore task structures and intent alignment via Panel C.
Triple Distillation Pipeline. It aligns the SLM in the student path with the two-stage extractor in the teacher path. The SLM can extract triples directly from prompts, bypassing intermediate code generation to speed up triple extraction.
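The teacher/student split can be sketched as follows. This is an assumed, deterministic stand-in: the stub functions replace real LLM and SLM calls, and "distillation" is reduced to memorizing teacher outputs in place of fine-tuning.

```python
# Illustrative sketch (not the paper's exact pipeline) of triple
# distillation: the teacher path extracts task triples via intermediate
# code, while a distilled student answers directly from the prompt.

def generate_code(prompt):
    # Teacher stage 1: prompt -> code. A real system calls an LLM here.
    return f"# code for: {prompt}"

def triples_from_code(code):
    # Teacher stage 2: code -> (task, relation, task) triples.
    task = code.removeprefix("# code for: ")
    return [(task, "depends_on", "setup")]

def teacher_extract(prompt):
    return triples_from_code(generate_code(prompt))

# Distillation stand-in: the student memorizes teacher outputs, so at
# inference time it maps prompts to triples without generating code.
train_prompts = ["plot sales", "merge tables"]
student = {p: teacher_extract(p) for p in train_prompts}

def student_extract(prompt):
    return student[prompt]   # fast path: bypasses code generation
```

The point of the structure is latency: the student path skips the expensive prompt-to-code stage, which is why triple extraction stays responsive enough for interactive graph editing.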
Multi-Agent Module Overview. This module comprises four agents that interact with each other to simulate, based on our findings about user behavior patterns, a domain user's experience of leveraging an LLM for code generation.
Intent-aware graph simplification algorithm. The left figure illustrates an intent tree, where each node corresponds to a sub-understanding graph. During simplification, nodes mapped to changes in the intent tree are transferred directly to the simplified graph (i.e., the red dashed box), while parts mapped to unchanged nodes are recursively collapsed or zoomed out (i.e., the blue and green boxes).
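The collapse step can be sketched in miniature. This is a simplified, assumed version of the idea, not the paper's algorithm: it works on a linear node sequence rather than a tree, keeps nodes mapped to changed intents, and collapses each maximal run of unchanged nodes into one summary node.

```python
# Minimal sketch (assumption, not the paper's exact algorithm) of
# intent-aware simplification on a sequence of task nodes.

def simplify(nodes, changed):
    """nodes: ordered task ids; changed: set of ids mapped to edited
    intents. Returns the simplified node sequence."""
    out, run = [], []
    for n in nodes:
        if n in changed:
            if run:
                # Collapse the preceding run of unchanged tasks.
                out.append(f"[{len(run)} tasks]")
                run = []
            out.append(n)        # changed nodes survive unmodified
        else:
            run.append(n)
    if run:
        out.append(f"[{len(run)} tasks]")
    return out
```

In the full algorithm, the same keep-or-collapse decision is applied recursively over the intent tree, so unchanged subtrees shrink to single summary nodes while edited regions stay fully visible.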
@article{zhang2025neurosync,
title={NeuroSync: Intent-Aware Code-Based Problem Solving via Direct LLM Understanding Modification},
author={Zhang, Wenshuo and Shen, Leixian and Xu, Shuchang and Wang, Jindu and Zhao, Jian and Qu, Huamin and Yuan, Linping},
journal={arXiv preprint arXiv:2508.02823},
year={2025}
}