An embedding is a way to map words (or other objects) into a high-dimensional "concept space" so they can be processed by ML algorithms. One of the best-known models for producing word embeddings is word2vec.
A vector database is used for queries like "I have this image; give me the 10 closest images, and a metric of how similar each one is."
You use a machine learning model (word2vec, an OpenAI embedding model, etc.) to produce an "embedding" that describes the image, text, video, etc. That embedding is your "vector".
For all of the other images in your database, you also run them through the same model, and store their embedding vectors in the vector database.
Then, you ask the database: "I have this vector; what are the most similar vectors, and what are their primary keys, so I can see what content they refer to?"
Think: you want to implement Google's "search by image". This is the basic idea of how you'd do that.
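The steps above can be sketched in a few lines. This is a toy illustration, not a real vector database: the embeddings are made-up 3-dimensional vectors (real models produce hundreds or thousands of dimensions), and the "database" is just a dict from primary key to vector.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": primary key -> embedding produced by some model.
# The keys and vector values here are invented for illustration.
db = {
    "cat.jpg": [0.9, 0.1, 0.0],
    "dog.jpg": [0.8, 0.2, 0.1],
    "car.jpg": [0.0, 0.1, 0.9],
}

def query(vector, k=2):
    """Return the k primary keys whose embeddings are closest to `vector`."""
    scored = sorted(db.items(),
                    key=lambda kv: cosine_similarity(vector, kv[1]),
                    reverse=True)
    return [(key, round(cosine_similarity(vector, vec), 3))
            for key, vec in scored[:k]]

# A query embedding close to "cat.jpg" ranks the animal photos first.
print(query([0.85, 0.15, 0.05]))
```

A production system replaces the dict with an index that avoids comparing against every stored vector, but the interface is the same: embed the query, return the nearest keys and their similarity scores.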
Why use the word "embedding" if there are already much more familiar words for it (isn't this the same as feature vector)?
I want to convince myself that this isn't like blockchain, in the sense that blockchain renamed an old, simple idea and advertised it as something complex and groundbreaking...
Also, relational databases and graph databases have a rich theory that yields many sub-problems, each interesting in its own right. Contrast this with "document databases", which have no theory and nothing interesting behind them. So, if I were to invest my time learning about one without a financial incentive to do so, I wouldn't want to concentrate on some accidental concept that just happened to solve an immediate problem but isn't applicable or transferable to other problems.
For example, graph databases and relational databases create interesting storage problems with respect to the optimal layout of various database components. If a hash table is all there is to a vector database, then it poses no interesting storage problem.
Similarly with querying the database: if a key lookup in a hash table is all there is, then it's not an interesting problem.
Okay, "mapping into concept space" is at least compatible with my theory of meaning, but by itself it doesn't say much, since in principle anything can be mapped to anything.
https://jalammar.github.io/illustrated-word2vec/