LlamaIndex advanced use

Custom Prompt

Two important templates

When the response mode is compact, the text_qa_template is used for the first chunk of retrieved context, and the refine_template is then used to refine the answer with each subsequent chunk.
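The interaction between the two templates can be sketched in plain Python. This is a simplified illustration of the QA-then-refine flow, not LlamaIndex's actual implementation; the template strings and the fake_llm helper are stand-ins (the real defaults live in llama_index.core.prompts):

```python
# Stand-in templates; the real defaults are in llama_index.core.prompts.
TEXT_QA_TEMPLATE = "Context: {context}\nAnswer the question: {query}"
REFINE_TEMPLATE = (
    "Existing answer: {existing_answer}\n"
    "New context: {context}\n"
    "Refine the answer to: {query}"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; just echoes the prompt length.
    return f"answer({len(prompt)} chars)"

def synthesize(chunks, query: str) -> str:
    # The first chunk goes through text_qa_template...
    answer = fake_llm(TEXT_QA_TEMPLATE.format(context=chunks[0], query=query))
    # ...and every following chunk refines the running answer via refine_template.
    for chunk in chunks[1:]:
        answer = fake_llm(REFINE_TEMPLATE.format(
            existing_answer=answer, context=chunk, query=query))
    return answer
```

Customizing either template changes the corresponding step of this loop.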

LLM Window Size

The per-model context window sizes are defined in:

/Users/xxx/anaconda3/envs/LI311-h/lib/python3.11/site-packages/llama_index/llms/openai/utils.py

AZURE_TURBO_MODELS: Dict[str, int] = {
    "gpt-35-turbo-16k": 16384,
    "gpt-35-turbo": 4096,
    # 0125 (2024) model (JSON mode)
    "gpt-35-turbo-0125": 16385,
    # 1106 model (JSON mode)
    "gpt-35-turbo-1106": 16384,
    # 0613 models (function calling):
    "gpt-35-turbo-0613": 4096,
    "gpt-35-turbo-16k-0613": 16384,
}
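A mapping like this can be queried with an exact match first and a prefix-match fallback for newer model versions; below is a minimal stdlib-only sketch (the dict is copied from the file above, and the context_window function name is illustrative, not part of llama_index's API):

```python
from typing import Dict

# Copied from llama_index.llms.openai.utils (Azure naming: "gpt-35-turbo", no dot).
AZURE_TURBO_MODELS: Dict[str, int] = {
    "gpt-35-turbo-16k": 16384,
    "gpt-35-turbo": 4096,
    "gpt-35-turbo-0125": 16385,   # 0125 (2024) model (JSON mode)
    "gpt-35-turbo-1106": 16384,   # 1106 model (JSON mode)
    "gpt-35-turbo-0613": 4096,    # 0613 models (function calling)
    "gpt-35-turbo-16k-0613": 16384,
}

def context_window(model: str) -> int:
    """Illustrative lookup: exact match first, then fall back to a prefix match."""
    if model in AZURE_TURBO_MODELS:
        return AZURE_TURBO_MODELS[model]
    for name, size in AZURE_TURBO_MODELS.items():
        if model.startswith(name):
            return size
    raise ValueError(f"Unknown model: {model}")

print(context_window("gpt-35-turbo-0125"))  # 16385
```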

LLM using LangChain

Document

import os
from langchain_community.chat_models.moonshot import MoonshotChat
from langchain_core.messages import HumanMessage, SystemMessage
from llama_index.llms.langchain import LangChainLLM

os.environ["MOONSHOT_API_KEY"] = "sk-xxx"

llm = LangChainLLM(llm=MoonshotChat(model_name="moonshot-v1-128k"))
response_gen = llm.stream_complete("你是谁?")  # "Who are you?"
for delta in response_gen:
    print(delta.delta, end="")

When using MoonshotChat, this error is prone to occur:

  File "/Users/yanghaibin/anaconda3/envs/LI311-h/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 581, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/yanghaibin/anaconda3/envs/LI311-h/lib/python3.11/site-packages/openai/_base_client.py", line 1233, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yanghaibin/anaconda3/envs/LI311-h/lib/python3.11/site-packages/openai/_base_client.py", line 922, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/yanghaibin/anaconda3/envs/LI311-h/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Invalid request: Your request exceeded model token limit: 8192', 'type': 'invalid_request_error'}}
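Note that the reported limit is 8192 even though the code requested moonshot-v1-128k, which suggests the effective model is a smaller-window one. A generic defensive measure is to cap the prompt to a token budget before calling the LLM. Below is a rough stdlib-only sketch using a character-based heuristic; the 8192 limit and the chars-per-token ratio are assumptions, and a real implementation should use the provider's tokenizer:

```python
def truncate_to_budget(text: str, max_tokens: int = 8192, chars_per_token: int = 3) -> str:
    """Crude guard against token-limit errors: cap prompt length by an
    assumed chars-per-token ratio (a real tokenizer is more accurate)."""
    budget = max_tokens * chars_per_token
    return text if len(text) <= budget else text[:budget]

prompt = "x" * 50_000
safe_prompt = truncate_to_budget(prompt)
print(len(safe_prompt))  # 24576
```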

Custom Retriever

link