Support multiple AI backends at once
We need llm-api to be able to send a request to one of several backends, with the backend chosen programmatically.
For this task, we're going to enable specifying multiple backends (e.g. url, key, etc.) and then choosing between them in Python code.
By the time this issue is done, it should be possible for two different endpoints to hit two different backends. Actually implementing a second endpoint is out of scope (we can file a follow-up issue for that if we like).
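As a rough sketch of what "specifying multiple backends and choosing in Python" could look like — names like `Backend`, `BACKENDS`, and `get_backend` are hypothetical, not existing llm-api code, and the real config source (env vars, YAML, etc.) is still to be decided:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Backend:
    """Connection details for one AI backend (hypothetical shape)."""
    url: str
    api_key: str


# Hypothetical registry keyed by backend name; in practice this
# would be loaded from configuration rather than hard-coded.
BACKENDS: dict[str, Backend] = {
    "primary": Backend(url="https://api.example.com/v1", api_key="sk-..."),
    "local": Backend(url="http://localhost:8000/v1", api_key="unused"),
}


def get_backend(name: str) -> Backend:
    """Choose a backend programmatically by name."""
    try:
        return BACKENDS[name]
    except KeyError:
        raise ValueError(f"unknown backend: {name!r}") from None
```

Each endpoint handler could then call `get_backend(...)` with whatever selection logic it wants, which keeps "which backends exist" (config) separate from "which backend this request uses" (code).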