OpenSearch specifically has an edge over Elasticsearch here because it supports vectors of up to 10k dimensions, whereas ES maxes out at indexing 1024 dimensions [0], which isn't enough for OpenAI's 1536-dimension vectors.
And then there's the benefit of it being well documented and widely Q&A'd, and able to handle regular search, faceting, etc. as well.
You need to do dimensionality reduction before indexing. If you don't want anything fancy, it's basically fine to just keep the first n components (e.g. from a PCA).
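As a rough sketch of that approach, assuming 1536-dimension OpenAI embeddings squeezed under a 1024-dimension index limit with scikit-learn's PCA (the array sizes and variable names here are illustrative, not from the post):

    # Reduce 1536-d embeddings to 1024 dims so they fit under the index limit.
    import numpy as np
    from sklearn.decomposition import PCA

    embeddings = np.random.rand(10_000, 1536)  # stand-in for your real vectors

    pca = PCA(n_components=1024)               # keep the first 1024 principal components
    reduced = pca.fit_transform(embeddings)    # shape: (10_000, 1024)

    # Reuse the same fitted PCA on query vectors so they land in the same space.
    query = np.random.rand(1, 1536)
    reduced_query = pca.transform(query)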
Given how well OpenSearch works and scales, I would find it hard to justify a specialized vector-specific database unless it brought A LOT of new benefits to the table. And I'm not currently aware of how any of them actually would.
Also, OpenSearch provides all of that out-of-the-box. You just configure a vector field mapping and start inserting your data. No need for an add-on plugin/extension. It just works.
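For illustration, a minimal sketch of what that looks like with the opensearch-py client (the index name, field name, and HNSW/faiss settings are assumptions on my part, not anything from the post):

    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

    # Create an index with a knn_vector field; "knn": True enables vector search on it.
    client.indices.create(
        index="docs",
        body={
            "settings": {"index": {"knn": True}},
            "mappings": {
                "properties": {
                    "embedding": {
                        "type": "knn_vector",
                        "dimension": 1536,
                        "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
                    }
                }
            },
        },
    )

    # Insert a document with its vector, then run an approximate k-NN query.
    client.index(index="docs", body={"embedding": [0.1] * 1536, "title": "example"})
    client.search(
        index="docs",
        body={"size": 5, "query": {"knn": {"embedding": {"vector": [0.1] * 1536, "k": 5}}}},
    )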
This may not matter if you don't have high-throughput or tight-latency requirements, but in my case it did. Of course you should weigh that against the convenience of preexisting ES/OS clusters and so on. You can also use ES/OS together with a separate vector DB. (These tradeoffs are, of course, what make a static benchmarking post like this one so hard to think about.)
I find vector search more convincing as a feature of an existing database than as a justification for designing an entirely new database; it's basically a new type of index.
Having some benchmarks to compare performance would also help.
[0] https://www.elastic.co/guide/en/elasticsearch/reference/curr...