Pretrained Models — Sentence-Transformers documentation
https://www.sbert.net/docs/pretrained_models.html
We provide various pre-trained models. Using these models is easy. Multi-Lingual Models: the following models generate aligned vector spaces, i.e., similar inputs in different languages are mapped close in vector space.
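As a quick check of that aligned multilingual vector space, here is a minimal sketch; the choice of paraphrase-multilingual-MiniLM-L12-v2 and the example sentences are assumptions, any multilingual model from the pretrained list should behave similarly.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed multilingual checkpoint from the pretrained-models list.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "A man is eating food.",        # English
    "한 남자가 음식을 먹고 있다.",      # Korean translation of the same sentence
    "The weather is nice today.",   # unrelated sentence
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Translations of the same sentence should score much higher than unrelated pairs.
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))
```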
LLM models that can be used together with LangChain
Semantic Search
With the models above, you can extract embedding vectors locally and build a retriever QA. - Multi-QA (multilingual support); see the sketch below.
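A minimal sketch of that idea, assuming the sentence-transformers/multi-qa-MiniLM-L6-cos-v1 checkpoint, LangChain's HuggingFaceEmbeddings, and a FAISS vector store; the documents and query are made-up placeholders.

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Embeddings are computed locally with a sentence-transformers model,
# so no OpenAI / external embedding API is needed.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/multi-qa-MiniLM-L6-cos-v1"
)

# Placeholder documents; in practice these come from your own corpus.
texts = [
    "Sentence-Transformers provides pretrained embedding models.",
    "FAISS stores vectors and supports fast similarity search.",
    "LangChain wires retrievers and LLMs into QA chains.",
]

db = FAISS.from_texts(texts, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 2})

docs = retriever.get_relevant_documents("How do I search vectors quickly?")
for d in docs:
    print(d.page_content)
```

To turn this into a full QA chain, the retriever can then be passed to RetrievalQA.from_chain_type(llm=..., retriever=retriever) with whatever (local) LLM you prefer.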
Semantic Textual Similarity
Semantic Textual Similarity — Sentence-Transformers documentation
https://www.sbert.net/examples/training/sts/README.html
Semantic Textual Similarity (STS) assigns a score on the similarity of two texts. In this example, we use the STSbenchmark as training data to fine-tune our network.
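Following the linked example, a minimal fine-tuning sketch on STS-style data could look like the following; the two training pairs and their 0–1 similarity labels are made-up stand-ins for STSbenchmark rows, and all-MiniLM-L6-v2 is just an assumed base model.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base model; any pretrained checkpoint from the list above works.
model = SentenceTransformer("all-MiniLM-L6-v2")

# STS training data: sentence pairs with a similarity label scaled to [0, 1].
train_examples = [
    InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=1.0),
    InputExample(texts=["A man is playing a flute.", "A man is eating pasta."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# CosineSimilarityLoss pushes cos(u, v) toward the gold similarity score.
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
```

After training, model.encode plus util.cos_sim gives an STS score for any new sentence pair.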