Improve LLM response to provide a "multi-shot" answer

This extends #7 (closed) -- we want the LLM response to be smarter than the current "single-shot" response. This involves the following:

  • Log LLM requests to LangFuse (see the logging sketch after this list).
  • Use LangFuse to provide few-shot prompting (see the second sketch below).
  • Pass metadata from the front end to LangFuse (and possibly back) -- moved to issue #14.

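A rough sketch of the first bullet, assuming the Langfuse Python SDK (v2) and its low-level trace API. The trace/generation names, the model string, and `call_llm` are placeholders, not the app's actual code:

```python
from langfuse import Langfuse

# Client reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from env.
langfuse = Langfuse()


def call_llm(messages: list[dict]) -> str:
    """Stand-in for whatever LLM client the app already uses."""
    raise NotImplementedError


def answer_with_logging(messages: list[dict]) -> str:
    # One trace per user request; the trace name is a placeholder.
    trace = langfuse.trace(name="multi-shot-answer")

    # Record the outgoing prompt as a "generation" on the trace.
    generation = trace.generation(
        name="llm-call",
        model="gpt-4o",  # placeholder model name
        input=messages,
    )

    completion = call_llm(messages)

    # Attach the model output so the full request/response pair lands in LangFuse.
    generation.end(output=completion)
    langfuse.flush()  # make sure events are sent in short-lived processes
    return completion
```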
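For the few-shot bullet, one possible shape: pull curated question/answer pairs from a LangFuse dataset and prepend them to the prompt. The dataset name `few-shot-examples` and the assumption that items store plain strings in `input` / `expected_output` are both hypothetical:

```python
from langfuse import Langfuse

langfuse = Langfuse()


def build_few_shot_messages(question: str) -> list[dict]:
    # Hypothetical dataset of curated Q/A pairs maintained in LangFuse.
    dataset = langfuse.get_dataset("few-shot-examples")

    messages = [
        {"role": "system", "content": "Use the example exchanges below as a guide."}
    ]

    # Each dataset item becomes one example exchange.
    for item in dataset.items:
        messages.append({"role": "user", "content": item.input})
        messages.append({"role": "assistant", "content": item.expected_output})

    messages.append({"role": "user", "content": question})
    return messages
```

These messages could then be passed through `answer_with_logging` above, so the few-shot calls are traced as well.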
There may be other fine-tuning options to explore; I'll make a separate issue to capture that work.
