The gte-base embedding model encodes English sentences and paragraphs into a 768-dimensional dense vector space, delivering efficient and effective semantic embeddings optimized for textual similarity, semantic search, and clustering applications.
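Embeddings in a dense vector space like this are typically compared with cosine similarity to rank documents for semantic search. A minimal sketch of that ranking step, using random 768-dimensional vectors as stand-ins for real model output (in practice the vectors would come from the gte-base model itself, e.g. via an embeddings library; the helper names below are illustrative, not part of any API):

```python
import numpy as np

DIM = 768  # gte-base output dimensionality


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def semantic_search(query_vec: np.ndarray, doc_vecs: np.ndarray, top_k: int = 3):
    """Rank document embeddings by cosine similarity to the query embedding."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), scores[i]) for i in order]


# Stand-in vectors; real usage would encode sentences with the model first.
rng = np.random.default_rng(0)
docs = rng.standard_normal((5, DIM))
query = docs[2] + 0.1 * rng.standard_normal(DIM)  # a query close to doc 2

results = semantic_search(query, docs)
```

Because random high-dimensional vectors are nearly orthogonal, the perturbed copy of document 2 scores close to 1.0 against its source and near 0.0 against the rest, so it ranks first.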
Recent activity on GTE-Base (total usage per day on OpenRouter):
- Prompt tokens: 14M
- Completion tokens: 0