Target: build your own RAG knowledge base with LlamaIndex, and find a question A that InternLM2-Chat-1.8B cannot answer before using LlamaIndex but can answer after.

1 Background

Retrieval Augmented Generation (RAG): a way to inject new knowledge into a model. Broadly, there are two approaches: an internal one, which updates the model's weights, and an external one, which supplies extra context (external information) to the model without changing its weights. By analogy with programming: the first approach is like memorizing how a function works; the second is like reading the function's documentation and temporarily remembering its usage.
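To make the external approach concrete, here is a toy sketch of the RAG pattern: a naive keyword "retriever" (standing in for a real vector store, illustrative only) selects a chunk from a small hand-written knowledge list, and the new knowledge reaches the model only through the prompt.

# Toy sketch of the external approach; KNOWLEDGE and retrieve() are
# illustrative stand-ins for a real document store and vector retriever.
KNOWLEDGE = [
    "XTuner is a toolkit for fine-tuning large language models.",
    "RAG feeds retrieved documents to the model as extra context.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank chunks by naive word overlap with the question
    words = set(question.lower().split())
    return sorted(KNOWLEDGE, key=lambda c: -len(words & set(c.lower().split())))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    # The model's weights stay fixed; only the prompt carries the new knowledge
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is XTuner?"))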


2 Set up the environment

!pip install einops
!pip install protobuf

!pip install llama-index==0.10.38 llama-index-llms-huggingface==0.2.0 "transformers[torch]==4.41.1" "huggingface_hub[inference]==0.23.1" huggingface_hub==0.23.1 sentence-transformers==2.7.0 sentencepiece==0.2.0

!pip install modelscope

!pip install llama-index-embeddings-huggingface llama-index-embeddings-instructor

!pip install streamlit==1.36.0
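The version pins above matter: the llama-index 0.10 line split the package into llama_index.core plus plugin packages, so mismatched versions break the imports used below. A quick sanity check that the environment resolved correctly:

# Verify the pinned packages import cleanly and report their versions
import llama_index.core, transformers, sentence_transformers
print(llama_index.core.__version__)       # expect 0.10.x
print(transformers.__version__)           # expect 4.41.1
print(sentence_transformers.__version__)  # expect 2.7.0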

3 Download the models

# InternLM2-Chat-1.8B model
from modelscope import snapshot_download

model_dir = snapshot_download('jayhust/internlm2-chat-1_8b', cache_dir='/data/coding/demo/')
model_dir
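snapshot_download returns the local path of the downloaded checkpoint (echoed by the bare `model_dir` above). A quick check that the weights actually landed there:

import os
# List the downloaded files (config, tokenizer, weight shards)
print(sorted(os.listdir(model_dir)))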

Test the InternLM2-Chat-1.8B model:

from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core.llms import ChatMessage

llm = HuggingFaceLLM(
    model_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
    tokenizer_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
    model_kwargs={"trust_remote_code": True},
    tokenizer_kwargs={"trust_remote_code": True}
)

rsp = llm.chat(messages=[ChatMessage(content="xtuner是什么?")])  # "What is XTuner?"
print(rsp)

Output (without RAG, the model cannot answer the question correctly): ![[书生大模型/基础岛/pic/Pasted image 20240802100850.png]]

# sentence-transformers model
import os

# Point the Hugging Face endpoint at a mirror (useful when huggingface.co is unreachable)
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

# Download the embedding model
os.system('huggingface-cli download sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 --local-dir /data/coding/demo/sentence-transformer')
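Optionally verify that the embedding model loads through the same LlamaIndex wrapper used later; paraphrase-multilingual-MiniLM-L12-v2 produces 384-dimensional sentence vectors:

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="/data/coding/demo/sentence-transformer")
vec = embed_model.get_text_embedding("xtuner是什么?")
print(len(vec))  # 384 for this MiniLM model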

4 Download NLTK resources

Some NLTK resources (tokenizers and taggers) are required when the embedding model is used to process documents; fetch them from a Gitee mirror:

git clone https://gitee.com/yzy0612/nltk_data.git  --branch gh-pages
cd nltk_data
mv packages/*  ./
cd tokenizers
unzip punkt.zip
cd ../taggers
unzip averaged_perceptron_tagger.zip
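If the cloned nltk_data directory is not in one of NLTK's default search paths (e.g., ~/nltk_data), point NLTK at it explicitly. The path below assumes the clone above was made under /data/coding/demo; adjust it to wherever you ran `git clone`:

import nltk
# Assumed clone location; adjust to your actual path
nltk.data.path.append("/data/coding/demo/nltk_data")
print(nltk.word_tokenize("XTuner is a fine-tuning toolkit."))  # exercises punkt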

5 LlamaIndex RAG

Fetch the knowledge base:

mkdir data
cd data
git clone https://github.com/InternLM/xtuner.git
mv xtuner/README_zh-CN.md ./
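Note that the whole xtuner repository now also sits inside data/. SimpleDirectoryReader is non-recursive by default, so only the top-level README_zh-CN.md will be loaded. If you want to make that explicit, or restrict the reader to Markdown files, a sketch:

from llama_index.core import SimpleDirectoryReader

# Load only top-level Markdown files from the knowledge-base directory
reader = SimpleDirectoryReader(
    input_dir="/data/coding/demo/data",
    required_exts=[".md"],
)
documents = reader.load_data()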

Contents of xtuner/README_zh-CN.md: ![[书生大模型/基础岛/pic/Pasted image 20240802101053.png]]

Create llamaindex_RAG.py and paste in the following code:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# Initialize a HuggingFaceEmbedding object that converts text into vectors,
# pointing at the locally downloaded sentence-transformer model
embed_model = HuggingFaceEmbedding(
    model_name="/data/coding/demo/sentence-transformer"
)
# Register the embedding model in the global Settings so it is used
# during index construction
Settings.embed_model = embed_model

llm = HuggingFaceLLM(
    model_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
    tokenizer_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
    model_kwargs={"trust_remote_code": True},
    tokenizer_kwargs={"trust_remote_code": True}
)
# Register the LLM globally so index queries use it
Settings.llm = llm

# Read all documents from the directory into memory
documents = SimpleDirectoryReader("/data/coding/demo/data").load_data()
# Build a VectorStoreIndex from the loaded documents: the documents are
# embedded into vectors, which are stored for fast retrieval
index = VectorStoreIndex.from_documents(documents)
# Create a query engine that retrieves relevant chunks and returns a
# response grounded in them
query_engine = index.as_query_engine()
response = query_engine.query("xtuner是什么?")  # "What is XTuner?"

print(response)
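To see which chunks the retriever actually fed to the model, the response object exposes its source nodes; a quick inspection:

# Print the similarity score and a preview of each retrieved chunk
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:80])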

6 Run InternLM + RAG with Streamlit

Create app.py and paste in the following code:

import streamlit as st
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

st.set_page_config(page_title="llama_index_demo", page_icon="🦜🔗")
st.title("llama_index_demo")

# Initialize the models (cached so they load only once per process)
@st.cache_resource
def init_models():
    embed_model = HuggingFaceEmbedding(
        model_name="/data/coding/demo/sentence-transformer"
    )
    Settings.embed_model = embed_model

    llm = HuggingFaceLLM(
        model_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
        tokenizer_name="/data/coding/demo/jayhust/internlm2-chat-1_8b",
        model_kwargs={"trust_remote_code": True},
        tokenizer_kwargs={"trust_remote_code": True}
    )
    Settings.llm = llm

    documents = SimpleDirectoryReader("/data/coding/demo/data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()

    return query_engine

# Initialize the models on first run
if 'query_engine' not in st.session_state:
    st.session_state['query_engine'] = init_models()

def greet2(question):
    response = st.session_state['query_engine'].query(question)
    return response

# Store LLM generated responses
if "messages" not in st.session_state.keys():
    # Greeting: "Hello, I am your assistant. How can I help you?"
    st.session_state.messages = [{"role": "assistant", "content": "你好,我是你的助手,有什么我可以帮助你的吗?"}]

# Display chat messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])

def clear_chat_history():
    st.session_state.messages = [{"role": "assistant", "content": "你好,我是你的助手,有什么我可以帮助你的吗?"}]

st.sidebar.button('Clear Chat History', on_click=clear_chat_history)

# Function for generating a llama_index response
def generate_llama_index_response(prompt_input):
    return greet2(prompt_input)

# User-provided prompt
if prompt := st.chat_input():
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

# Generate a new response if the last message is not from the assistant
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = generate_llama_index_response(prompt)
            placeholder = st.empty()
            placeholder.markdown(response)
    message = {"role": "assistant", "content": response}
    st.session_state.messages.append(message)

Run:

streamlit run app.py
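If the script runs on a remote development machine, bind to all interfaces and pick a port so the app is reachable from your browser (these are standard Streamlit server flags):

streamlit run app.py --server.address 0.0.0.0 --server.port 8501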

The result (leaves much to be desired \[dog]):

(screenshot: InternLM + LlamaIndex RAG demo result)