Improve Semantic Search fault tolerance against LLM API failures
If the LLM API is down, we should return a descriptive error message and/or fall back to traditional keyword search (see the sketch below). We could also consider hosting the embedding model inside the Semantic Search app itself instead of relying on an external API.
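
A minimal sketch of the fallback idea, assuming hypothetical `semantic_search` and `keyword_search` callables and an `EmbeddingServiceError` raised when the external embedding/LLM API is unreachable (none of these names exist in the codebase yet):

```python
import logging
from typing import Callable, List

logger = logging.getLogger(__name__)

# Hypothetical result type: a ranked list of document IDs.
SearchResults = List[str]


class EmbeddingServiceError(Exception):
    """Raised when the external embedding/LLM API cannot be reached."""


def search_with_fallback(
    query: str,
    semantic_search: Callable[[str], SearchResults],
    keyword_search: Callable[[str], SearchResults],
) -> SearchResults:
    """Try semantic search first; if the embedding API is down, log a
    descriptive message and fall back to traditional keyword search."""
    try:
        return semantic_search(query)
    except EmbeddingServiceError as exc:
        logger.warning(
            "Embedding API unavailable (%s); falling back to keyword search for query %r",
            exc,
            query,
        )
        return keyword_search(query)
```

The same wrapper could surface the warning to the user as a descriptive error message instead of (or in addition to) logging it, so the caller knows results came from the degraded path.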