2 Comments
HyperH

Please add a custom system prompt feature for autocomplete.

The fact that autocomplete usage is concentrated primarily in dynamically typed and web-related languages is likely because most LLMs are stronger in those fields. Consequently, developers in these areas are more likely to feel that code completion already meets their needs.

Many evaluation organizations have run independent tests showing that most LLMs perform better in dynamically typed and web-related languages while remaining relatively weak in other domains. This is a general tendency that may be related to the training data or to the logic inherent in the programming languages themselves.

However, system prompts can significantly influence an LLM's performance. In the current workflow, the prompt appears to be a generic one rather than one tailored to different languages.

If a custom prompt feature were implemented, the effectiveness of autocomplete would improve across various scenarios, and the range of applicable use cases would expand.

Mark IJbema

Another reason might be that a large part of our userbase consists of VS Code users, and VS Code is traditionally more aimed at dynamic languages (as opposed to full-fledged IDEs like Visual Studio).

Did you run into concrete problems with any code? I'd love to add some examples to our eval-suite if you noticed some languages working worse.

In general, adding to the prompt for autocomplete should be done quite sparingly: we need low latency, which calls for a small prompt. At the moment we focus mostly on using a FIM endpoint, which does not support system prompts (though you can of course add some suggestions to the prefix), so I'd like to test with some examples first before allowing the prompt to be extended.
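To make the "add suggestions to the prefix" idea concrete, here is a minimal sketch of assembling a fill-in-the-middle request where an optional language hint is prepended to the prefix rather than passed as a system prompt. The sentinel token names (`<|fim_prefix|>` etc.) follow a convention used by several open completion models but are illustrative here, as is the `build_fim_prompt` helper:

```python
def build_fim_prompt(prefix: str, suffix: str, hint: str = "") -> str:
    """Assemble a FIM prompt string.

    FIM endpoints take no separate system prompt, so any steering text
    (e.g. a language hint) has to travel inside the prefix itself.
    """
    if hint:
        # Prepend the hint as a comment so it reads as part of the file.
        prefix = f"# {hint}\n{prefix}"
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"


prompt = build_fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n\nprint(add(1, 2))",
    hint="Language: Python 3; prefer type hints",
)
print(prompt)
```

Note that every character of the hint is paid for on each keystroke-triggered request, which is why such additions need to stay small to preserve latency.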