Nick Elwick


Building a LangChain Application with Phoenix and OpenAI

The rise of LLMs in the software development community has been rapid. New applications are being built with and around this technology, and in its wake more software is being built for LLMs themselves.

In this tutorial I am going to use Python to show you how to integrate an LLM tracer named Phoenix (LINK HERE) with a basic RAG pipeline built with LangChain and queried from the CLI.

The code is hosted here (LINK HERE) if you would like to skip ahead.

Table of Contents

  1. Setting up your development environment
  2. Launch Phoenix
  3. OpenAI Configuration
  4. Build LangChain Application
  5. User Interaction
  6. Conclusion
  7. Code

Pre-Requisites

  • Python 3.7+ installed on your machine
  • An OpenAI API key
  • A basic understanding of LLMs, RAG, and prompting

1. Setting up your development environment

Before building the application, you need to set up your development environment. Start with creating a new virtual environment:

mkdir phoenix-langchain
cd phoenix-langchain
python3 -m venv venv
. venv/bin/activate
pip install --upgrade pip

Create a file named requirements.txt that includes the following:

arize-phoenix
chromadb
langchain
openai
tiktoken
langchainhub
python-dotenv
unstructured[pdf]

Breakdown of packages

  • LangChain (langchain): A toolkit for building applications using large language models.

  • LangChain Hub (langchainhub): A hub containing user-submitted prompts.

  • Phoenix (arize-phoenix): For observability in LangChain applications.

  • ChromaDB (chromadb): An open-source vector database, used here for efficient storage and retrieval of document embeddings.

  • OpenAI (openai): Python library provided by OpenAI, enabling easy integration with OpenAI's GPT models and other AI services.

  • TikToken (tiktoken): OpenAI's fast BPE tokenizer, used by LangChain to count and split tokens for OpenAI models.

  • python-dotenv (python-dotenv): A Python module that loads environment variables from a .env file (imported in code as dotenv).

  • Unstructured [PDF] (unstructured[pdf]): A library for parsing unstructured documents; the [pdf] extra pulls in the dependencies needed to read PDF files. (A matching [md] extra exists if you would rather ingest Markdown.)

Now you can install these dependencies: pip install -r requirements.txt

Environment Variables

Create a file named .env and add your OpenAI API key in this format:

export OPENAI_API_KEY=your-api-key
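
If you want to sanity-check that the key is picked up, here is a minimal sketch using python-dotenv (the library behind the dotenv import used later in this tutorial):

from dotenv import load_dotenv
import os

load_dotenv()  # reads key=value pairs from .env into the process environment
print("OPENAI_API_KEY set:", bool(os.environ.get("OPENAI_API_KEY")))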

Store Documents

In this tutorial we will be using a PDF file for RAG with LangChain. There are plenty of other options here; I encourage you to experiment.

In your main phoenix-langchain directory, create a folder for your documents named docs.

Your file system should look like this:

/phoenix-langchain
    /docs

Place a PDF document in here to ingest into LangChain; you can use LINK HERE if you want to.

2. Launch Phoenix

Now, let's get started with Phoenix.

Go ahead and create a file named main.py in your phoenix-langchain directory and put the following code in there:

import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor, OpenInferenceTracer

px.launch_app()

tracer = OpenInferenceTracer()
LangChainInstrumentor(tracer=tracer).instrument()

This code:

  • Imports Phoenix and its LangChain instrumentation tools
  • Launches the Phoenix app with px.launch_app() (see the snippet after this list)
  • Instantiates an OpenInferenceTracer and instruments the application with it via LangChainInstrumentor. The tracer records your trace data in the OpenInference format, an open-source format that lets production LLM app servers interface easily with LLM observability solutions like Phoenix.
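
If you prefer to grab the UI address programmatically rather than copying it from the terminal, px.launch_app() returns a session object. A minimal sketch; the exact attributes can vary between Phoenix versions:

import phoenix as px

session = px.launch_app()
print(session.url)  # local address of the Phoenix UI, typically http://127.0.0.1:6006/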

3. OpenAI Configuration

Next we need to configure our OpenAI settings. Add them to your main.py file like this:

# 1 Import necessary dependencies
import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor, OpenInferenceTracer

# 2 Launch Phoenix
px.launch_app()

# 3 OpenAI configuration
OPENAI_MODEL = "gpt-3.5-turbo"
OPENAI_TEMPERATURE = 0.5

tracer = OpenInferenceTracer()
LangChainInstrumentor(tracer=tracer).instrument()

4. Build LangChain Application

To give the tracer something to capture, you need to build the LangChain pipeline itself.

Add these lines of code to main.py:

Document Loader

Load the documents we placed in the /docs folder:

loader = DirectoryLoader('docs/', glob="**/*.*")
docs = loader.load()
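
It is worth confirming the loader actually found your files before moving on. A quick check (the metadata keys shown are typical for LangChain loaders, not guaranteed):

print(f"Loaded {len(docs)} document(s)")
print(docs[0].metadata)  # usually includes a 'source' key, e.g. {'source': 'docs/my-file.pdf'}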

Text Splitter

Split your documents into manageable chunks for the LLM to "read":

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
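
The chunk_size and chunk_overlap values above are sensible defaults rather than magic numbers; previewing a chunk or two tells you whether they suit your document:

print(f"Split {len(docs)} document(s) into {len(splits)} chunks")
print(splits[0].page_content[:200])  # preview the first 200 characters of the first chunk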

Build Vectorstore

Create a vectorstore to index your chunks, and expose it as a retriever:

vectorstore = Chroma.from_documents(
    documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
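
You can query the retriever directly to check that similarity search surfaces sensible passages before wiring it into a chain; a sketch using the retriever API from the LangChain versions this tutorial targets:

matches = retriever.get_relevant_documents("What is this document about?")
for doc in matches:
    print(doc.page_content[:100])  # first 100 characters of each retrieved chunk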

Prompt from LangChain Prompt Hub

Retrieve a prompt from the LangChain Prompt Hub:

prompt = hub.pull("rlm/rag-prompt")
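
If you are curious what you just pulled, print it; rlm/rag-prompt is a short RAG instruction template:

print(prompt)  # shows the underlying template; expect input variables 'context' and 'question'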

Define LLM

Set up your language model with the OpenAI settings defined earlier:

llm = ChatOpenAI(
    model_name=OPENAI_MODEL, temperature=OPENAI_TEMPERATURE)

Format Documents Function

Define a helper that joins the retrieved documents into a single context string:

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)
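
To make the behaviour concrete, here is a small hypothetical illustration (the Document objects below are made up for the example):

from langchain.schema import Document

sample = [Document(page_content="First chunk."), Document(page_content="Second chunk.")]
print(format_docs(sample))
# First chunk.
#
# Second chunk.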

Define the RAG Chain

Set up the retrieval-augmented generation chain:

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
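
One call now exercises the whole pipeline: the question is passed both to the retriever (whose hits are joined by format_docs to form the context) and straight through to the prompt's question slot, then on through the LLM and the output parser:

answer = rag_chain.invoke("Summarise the document in one sentence.")
print(answer)  # a plain string, thanks to StrOutputParser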

That was a lot of code to add... your main.py file should now look something like this:

# 1 Import necessary dependencies
import phoenix as px
from langchain.schema.runnable import RunnablePassthrough
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import Chroma
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import StrOutputParser
from langchain import hub
from phoenix.trace.langchain import LangChainInstrumentor, OpenInferenceTracer
from dotenv import load_dotenv

load_dotenv()

# 2 Launch Phoenix
px.launch_app()

# 3 OpenAI configuration
OPENAI_MODEL = "gpt-3.5-turbo"
OPENAI_TEMPERATURE = 0.5

# 4 Build LangChain application

# Document loader
loader = DirectoryLoader('docs/', glob="**/*.*")
docs = loader.load()

# Text splitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)

# Build vectorstore
vectorstore = Chroma.from_documents(
    documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Pull prompt from LangChain Prompt Hub
prompt = hub.pull("rlm/rag-prompt")

# Define LLM
llm = ChatOpenAI(
    model_name=OPENAI_MODEL, temperature=OPENAI_TEMPERATURE)

# Format documents function
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Define the chain to run queries on
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

tracer = OpenInferenceTracer()
LangChainInstrumentor(tracer=tracer).instrument()

5. User Interaction

Let's wrap up the code and set up a loop to process user queries:

while True:
    user_query = input("Enter your query (or type 'exit' to quit): ")
    if user_query.lower() == 'exit':
        break

    response = rag_chain.invoke(user_query)
    print(response)

Your main.py should now look like this:

# 1 Import necessary dependencies
import phoenix as px
from langchain.schema.runnable import RunnablePassthrough
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import Chroma
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import StrOutputParser
from langchain import hub
from phoenix.trace.langchain import LangChainInstrumentor, OpenInferenceTracer
from dotenv import load_dotenv

load_dotenv()

# 2 Launch Phoenix
px.launch_app()

# 3 OpenAI configuration
OPENAI_MODEL = "gpt-3.5-turbo"
OPENAI_TEMPERATURE = 0.5

# 4 Build LangChain application

# Document loader
loader = DirectoryLoader('docs/', glob="**/*.*")
docs = loader.load()

# Text splitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)

# Build vectorstore
vectorstore = Chroma.from_documents(
    documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Pull prompt from LangChain Prompt Hub
prompt = hub.pull("rlm/rag-prompt")

# Define LLM
llm = ChatOpenAI(
    model_name=OPENAI_MODEL, temperature=OPENAI_TEMPERATURE)

# Format documents function
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Define the chain to run queries on
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Instantiate an OpenInferenceTracer to store your data in OpenInference format
tracer = OpenInferenceTracer()
LangChainInstrumentor(tracer=tracer).instrument()

# Prompt the user for input
while True:
    user_query = input("Enter your query (or type 'exit' to quit): ")
    if user_query.lower() == 'exit':
        break

    # Process the user input query
    response = rag_chain.invoke(user_query)
    print(response)

Let's run the application

That's it for the code. Now let's run the application and see Phoenix in action.

In your phoenix-langchain directory, run python main.py

You should see this output in your terminal:

[Screenshot: terminal output showing Phoenix launching]

Go ahead and ask your document a question like so...

[Screenshot: querying the document from the terminal]

Head on over to http://127.0.0.1:6006/tracing to see Phoenix's outputs.

[Screenshots: the Phoenix tracing UI showing the captured traces]

Phoenix offers a broad feature set, which I encourage you to experiment with. Here is an excerpt from their GitHub:

Phoenix provides MLOps and LLMOps insights at lightning speed with zero-config observability. Phoenix provides a notebook-first experience for monitoring your models and LLM Applications by providing:

  • LLM Traces - Trace through the execution of your LLM Application to understand the internals of your LLM Application and to troubleshoot problems related to things like retrieval and tool execution.
  • LLM Evals - Leverage the power of large language models to evaluate your generative model or application's relevance, toxicity, and more.
  • Embedding Analysis - Explore embedding point-clouds and identify clusters of high drift and performance degradation.
  • RAG Analysis - Visualize your generative application's search and retrieval process to improve your retrieval-augmented generation.
  • Structured Data Analysis - Statistically analyze your structured data by performing A/B analysis, temporal drift analysis, and more.
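
As one example of that feature set, newer Phoenix releases let you pull the collected traces into a pandas DataFrame for offline analysis. A hedged sketch, assuming your installed version exposes the Client API:

import phoenix as px

# Assumes the Phoenix app launched by this tutorial is still running
spans_df = px.Client().get_spans_dataframe()
print(spans_df.head())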

6. Conclusion

Thanks for reading this tutorial; I hope it provided you with some value.

7. Code

LINK THE GITHUB PAGE HERE