3 Comments
Neural Foundry

Really smart approach to actually survey users on how they're using autocomplete. The finding that people lean on it for documentation and refactoring surprised me a bit: I always assumed those were tasks where you'd want more conscious control. But thinking about how I work, yeah, having suggestions for JSDoc blocks or renaming patterns would actually cut down on the boring bits while you focus on logic. One thing I'd be curious about is whether acceptance rates vary by task complexity, or if it's more of a habit thing once you trust the tool.

HyperH

Please add a custom system prompt feature for autocomplete.

The fact that autocomplete usage is concentrated primarily in dynamically typed and web-related languages is likely because most LLMs are stronger in those fields. Consequently, developers in those areas are more likely to feel that code completion already meets their needs.

Many evaluation organizations have run independent tests showing that most LLMs perform better on dynamically typed and web-related languages while remaining relatively weak in other domains. This general tendency may be related to the training data or to the logic inherent in the programming languages themselves.

However, system prompts can significantly influence an LLM's performance. In the current workflow, the prompt seems to be a generic one rather than being tailored to different languages.

If a custom prompt feature were implemented, autocomplete would become more effective across scenarios, and the range of applicable use cases would expand.

Mark IJbema

Another reason might be that a large part of our user base is on VS Code, which has traditionally been aimed more at dynamic languages (as opposed to full-fledged IDEs like Visual Studio).

Did you run into concrete problems with any code? I'd love to add some examples to our eval-suite if you noticed some languages working worse.

In general, additions to the autocomplete prompt should be made quite sparingly: we need low latency, which calls for a small prompt. At the moment we focus mostly on a FIM endpoint that does not support system prompts (though you can of course add some hints to the prefix), so I'd like to test with some examples first before allowing the prompt to be extended.
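To make the prefix trick concrete, here is a minimal sketch of what injecting a language-specific hint into a fill-in-the-middle request could look like. The sentinel tokens (`<fim_prefix>` etc.), the helper name, and the hint text are all illustrative assumptions, not the actual endpoint format any particular product uses.

```python
def build_fim_prompt(prefix: str, suffix: str, hint: str = "") -> str:
    """Assemble a fill-in-the-middle prompt.

    Since a FIM endpoint takes no system prompt, an optional hint is
    prepended to the prefix as an ordinary code comment, so the model
    sees it as part of the surrounding file.
    """
    if hint:
        prefix = f"# {hint}\n{prefix}"
    # Sentinel token names are hypothetical; real models each define their own.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


prompt = build_fim_prompt(
    prefix="def mean(xs):\n    return ",
    suffix="\n",
    hint="Prefer standard-library solutions; target Python 3.11",
)
```

The trade-off the comment describes is visible here: every character of hint text is extra prompt the model must process on each keystroke, which is why such additions need to stay small.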