Better and even automatic feedback on LLM responses
A. For debugging and prompt iteration we need to know the Langfuse trace id when a user reports an issue. Imagine passing the trace id to the front-end and deliberately exposing it in the URL: any report that includes that URL can then be looked up directly in Langfuse. This is just a couple of lines of code (see it done here for an older experimental branch: https://code.librehq.com/ots/llm/llm-api/-/blob/experimental-draft-analysis-reflection/api/reflection.py?ref_type=heads#L108-L114 )
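A minimal sketch of the idea, assuming a response shape of our own choosing (the function name, field names, and trace id format below are illustrative, not the actual llm-api code):

```python
# Hypothetical sketch: attach the Langfuse trace id to the response
# metadata so the front-end can surface it (e.g. leak it in the URL)
# and a user-reported URL maps straight back to the trace.

def build_response(answer: str, trace_id: str) -> dict:
    """Wrap an LLM answer together with debugging metadata."""
    return {
        "answer": answer,
        "metadata": {
            # Exposed deliberately for debugging / issue reports.
            "langfuse_trace_id": trace_id,
        },
    }

response = build_response("Hello!", trace_id="tr-123")
print(response["metadata"]["langfuse_trace_id"])
```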
B. Return the Langfuse API endpoint for thumbs up/down in the response metadata; the front-end can then call it from JS with a library or a plain Fetch API request: https://langfuse.com/docs/scores/user-feedback#example-using-langfuseweb
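For reference, the same feedback call sketched server-side in Python (the `/api/public/scores` path and payload follow the Langfuse user-feedback docs linked above; the host, keys, trace id, and score name here are placeholder assumptions):

```python
# Hedged sketch: build (but do not send) the POST request that records
# a thumbs up/down score for a trace via the Langfuse public API.
import base64
import json
import urllib.request


def build_score_request(host: str, public_key: str, secret_key: str,
                        trace_id: str, value: int) -> urllib.request.Request:
    """Create a request scoring a trace (e.g. 1 = up, 0 = down)."""
    payload = json.dumps({
        "traceId": trace_id,
        "name": "user_feedback",  # placeholder score name
        "value": value,
    }).encode()
    # Langfuse public API uses Basic auth: public key as username,
    # secret key as password.
    auth = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return urllib.request.Request(
        url=f"{host}/api/public/scores",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
    )


req = build_score_request("https://cloud.langfuse.com", "pk-...", "sk-...",
                          "tr-123", 1)
print(req.full_url)
# Actually sending it would be: urllib.request.urlopen(req)
```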