
https://www.sbert.net/docs/pretrained_models.html

 

Pretrained Models - Sentence-Transformers documentation

We provide various pre-trained models. Using these models is easy. Multi-Lingual Models: the following models generate aligned vector spaces, i.e., similar inputs in different languages are mapped close in vector space. You do not need to specify the input language.

www.sbert.net
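
For example, one of the multilingual models listed on that page can be loaded locally and used to compare sentences across languages. Below is a minimal sketch, assuming sentence-transformers is installed; the model name (paraphrase-multilingual-MiniLM-L12-v2) and the sample sentences are placeholders chosen for illustration, not part of the original post.

# Minimal sketch: multilingual models embed all languages into one aligned
# vector space, so translations of the same sentence end up close together.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Same meaning in English and Korean (placeholder sentences)
emb_en = model.encode("The weather is nice today.", convert_to_tensor=True)
emb_ko = model.encode("오늘 날씨가 좋다.", convert_to_tensor=True)

print(util.cos_sim(emb_en, emb_ko))  # high score, without specifying the input language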

 

LLM models that can be used together with LangChain

  • For Semantic Search, the embedding vectors can be extracted locally and used to build a retriever QA (see the sketch after this list).
  • Multi-QA (multilingual support)
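
Here is a minimal sketch of the first bullet: a sentence-transformers model wrapped by LangChain so that the embedding vectors are computed locally and no OpenAI API key is needed. It assumes langchain, sentence-transformers, and faiss-cpu are installed; the model name, sample texts, and query are placeholders.

# Local embeddings + FAISS vector store + retriever (no OpenAI calls)
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# The sentence-transformers model is downloaded once and then runs locally
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

texts = [
    "SBERT provides pretrained sentence embedding models.",
    "LangChain can wrap a local embedding model for retrieval.",
    "Multi-QA models are trained for semantic search.",
]

# Build an in-memory index and expose it as a retriever
vectorstore = FAISS.from_texts(texts, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

for doc in retriever.get_relevant_documents("How can I run semantic search locally?"):
    print(doc.page_content)

The retriever can then be passed to RetrievalQA (or any chain that accepts a retriever) together with whichever local LLM is being used.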

 

Semantic Textual Similarity 

https://www.sbert.net/examples/training/sts/README.html 

 

Semantic Textual Similarity - Sentence-Transformers documentation

Semantic Textual Similarity (STS) assigns a score on the similarity of two texts. In this example, we use the STSbenchmark as training data to fine-tune our network. See the following example scripts for how to tune SentenceTransformers on STS data.

www.sbert.net
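
Following the fine-tuning setup described on that page, here is a minimal sketch, assuming sentence-transformers (and PyTorch) are installed; the base model, the two hand-written sentence pairs, and the hyperparameters are placeholders rather than the actual STSbenchmark script.

# Minimal STS fine-tuning sketch: sentence pairs with gold similarity labels in [0, 1]
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

train_examples = [
    InputExample(texts=["A man is playing a guitar.", "Someone plays an instrument."], label=0.8),
    InputExample(texts=["A man is playing a guitar.", "A child eats an apple."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# CosineSimilarityLoss pushes the cosine similarity of each pair toward its label
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)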

 

๋ฐ˜์‘ํ˜•

'๐Ÿ—ฃ๏ธ Natural Language Processing' ์นดํ…Œ๊ณ ๋ฆฌ์˜ ๋‹ค๋ฅธ ๊ธ€

paper-translator test (LIMA: Less Is More for Alignment)  (0) 2023.06.08
[Langchain] Paper-Translator  (0) 2023.06.05
[OpenAI API] OpenAI Token  (0) 2023.05.30
[LangChain] No using OpenAI API RetrievalQA  (0) 2023.05.28
[Mac] Transformer model downloaded path  (0) 2023.05.28