Improve indexing speed of Semantic Search
This was flagged by @frankduncan as a potential issue. In testing locally against 6,000 documents, indexing speed seemed slow but acceptable. What I didn't realize is that the combinatorial nature of search cache documents leads to something like 135,000 of them. And my local testing environment doesn't necessarily reflect the performance of generating embeddings against the remote LLM API machine in production.

There are a couple of different ways this could be addressed (one I've been tinkering with is batching inserts and parallelizing embedding calls; see the sketch below), and it really depends on which of those will have the most impact, so I'm going to keep the implementation specifics open-ended for now. This is therefore more of a spike ticket: the first technical task is to identify the most effective approach.
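
For reference, a minimal sketch of the batching + parallelization idea, assuming a Python codebase. The `embed_batch` wrapper, the `search_cache` table, and the batch sizes are all hypothetical placeholders, not names from the actual code:

```python
import concurrent.futures
import sqlite3

BATCH_SIZE = 100   # documents per embedding request (tune against API limits)
MAX_WORKERS = 4    # concurrent requests to the embedding API

def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_batch(texts):
    """Hypothetical wrapper: one API call returning an embedding per input
    text. Most embedding endpoints accept a list of inputs per request."""
    raise NotImplementedError

def index_documents(conn: sqlite3.Connection, docs):
    """Index docs, given as (doc_id, text) pairs, into the search cache."""
    batches = list(chunks(docs, BATCH_SIZE))
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        # Parallelize the network-bound embedding calls across batches...
        results = pool.map(lambda b: embed_batch([t for _, t in b]), batches)
        for batch, embeddings in zip(batches, results):
            # ...and insert each batch in one statement instead of row-by-row.
            conn.executemany(
                "INSERT INTO search_cache (doc_id, embedding) VALUES (?, ?)",
                [(doc_id, emb) for (doc_id, _), emb in zip(batch, embeddings)],
            )
    conn.commit()
```

Part of the spike would be measuring which half of this actually dominates: if the embedding calls are the bottleneck, batching and parallelizing them wins; if it's the inserts, batching those (or wrapping them in fewer transactions) matters more.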