This repository contains a Google Colab notebook that implements a Retrieval-Augmented Generation (RAG) system for portfolio support. The system integrates document retrieval, dynamic entity extraction, and external API calls to generate informed, context-aware responses using a Hugging Face language model served via Ollama. The application runs entirely in Google Colab and also works in local Linux and Windows environments.
The repository also contains a Flask app that serves as an API endpoint for the RAG system. The app lets users interact with the RAG system via HTTP POST requests, enabling seamless integration with other applications and services.
It also includes a sample backend Express API. The RAG system can (and should) query this API for additional data to generate better-informed responses.
This project was done as part of the PeakSpan Capital technical assessment & portfolio support project. The Google Colab notebook is at RAG_LangChain_AI_System.ipynb.
Note: The live sample Express API is hosted on Render.com at https://rag-langchain-ai-system.onrender.com. It provides endpoints for team profiles, investments, sectors, consultations, and more, which you can use to retrieve data and enrich the responses generated by the RAG system. Note, however, that the service spins down after 15 minutes of inactivity, so it may take a moment to spin back up if it has been idle.
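As a quick smoke test, you can hit the live API directly. Here is a minimal sketch using Python's requests library (the endpoint path matches the list above; the long timeout allows for the Render spin-up delay):

```python
import requests

# Base URL of the live sample backend API on Render.com.
BASE_URL = "https://rag-langchain-ai-system.onrender.com"

# The first request after a period of inactivity may be slow while
# the Render instance spins back up, hence the generous timeout.
response = requests.get(f"{BASE_URL}/api/team", timeout=120)
response.raise_for_status()
print(response.json())
```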
Embedding Model:
We use the Hugging Face model all-MiniLM-L6-v2 to encode text chunks extracted from MasterClass documents. This model generates vector representations that enable fast similarity searches using a FAISS vector store.
Language Model:
The system utilizes the Ollama integration with the llama2 model to generate natural language responses based on the retrieved context and API data.
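For reference, wiring the llama2 model into LangChain looks roughly like this (a minimal sketch using the langchain_community Ollama wrapper; it assumes the Ollama server from the setup section below is already running):

```python
from langchain_community.llms import Ollama

# Connect to the local Ollama server (started with `ollama serve`).
llm = Ollama(model="llama2")

# Generate a response from a plain-text prompt.
print(llm.invoke("Summarize what a RAG system does in one sentence."))
```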
FAISS Vector Store:
The FAISS vector store is used to efficiently retrieve relevant document content based on user queries. The vector store is built from the embedded text chunks using the all-MiniLM-L6-v2 model.
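A minimal sketch of building such a store with LangChain (the chunks below are placeholders; the notebook builds them from the downloaded MasterClass documents):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Placeholder chunks; the notebook derives these from the split documents.
chunks = [
    "PeakSpan MasterClass: pricing strategy for growth-stage software.",
    "PeakSpan MasterClass: building a repeatable sales motion.",
]

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_texts(chunks, embeddings)

# Retrieve the chunks most relevant to a user query.
docs = vector_store.similarity_search("pricing strategy", k=2)
```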
Flask API:
The Flask app serves as an API endpoint for the RAG system, allowing users to interact with the system via HTTP POST requests. The app provides a simple and intuitive interface for querying the RAG system and receiving context-aware responses.
External APIs & API Chaining:
The system integrates with external APIs to retrieve additional data related to team profiles, investments, sectors, consultations, and more. These APIs provide valuable insights that enhance the system’s responses and support portfolio management activities.
Note: In this project, the “external APIs” are simulated using a sample backend Express API. The actual APIs can be substituted with real endpoints to access live data of your choice. Visit the Sample Backend Express API for more details.
Document Processing:
Documents are downloaded via the /api/documents/download endpoint, split into chunks with CharacterTextSplitter, embedded with the all-MiniLM-L6-v2 model, and stored in an in-memory FAISS vector store, which allows for efficient retrieval of relevant content based on user queries.
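A sketch of the download-and-split steps of that pipeline (the Authorization header is an assumption; the notebook reads the real values from API_BASE_URL and API_TOKEN):

```python
import requests
from langchain.text_splitter import CharacterTextSplitter

API_BASE_URL = "https://rag-langchain-ai-system.onrender.com"
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

# Download the raw document text from the sample backend.
resp = requests.get(
    f"{API_BASE_URL}/api/documents/download",
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # assumed auth scheme
    timeout=60,
)
resp.raise_for_status()

# Split the text into overlapping chunks ready for embedding.
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(resp.text)
```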
Persistent Memory:
Conversation history is maintained in a global variable (or via session data) so that context is preserved across multiple user queries. This persistent memory enables the system to handle follow-up questions accurately and generate coherent, context-aware responses.
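A minimal sketch of that pattern (the notebook's actual variable and function names may differ):

```python
# Global conversation history, preserved across queries.
conversation_history = []

def record_turn(user_query: str, assistant_reply: str) -> None:
    """Append one exchange so follow-up questions retain context."""
    conversation_history.append({"user": user_query, "assistant": assistant_reply})

def history_as_prompt() -> str:
    """Flatten the history into text prepended to the next prompt."""
    return "\n".join(
        f"User: {t['user']}\nAssistant: {t['assistant']}"
        for t in conversation_history
    )
```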
Dynamic Entity Extraction:
Regular expressions are used to extract entities (e.g., person names, company names, sectors, URLs) from the user’s query or conversation history. Based on keywords such as “consult”, “profile”, “investment”, “sector”, or “scrape”, the corresponding API endpoint is called. The retrieved data (or friendly messages if no data is found) is then appended to the prompt used to generate the final response.
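A simplified sketch of the keyword-to-endpoint routing (the exact patterns and mapping in the notebook may differ):

```python
import re

# Map trigger keywords to the API endpoints they should invoke.
KEYWORD_ENDPOINTS = {
    "consult": "/api/consultations",
    "profile": "/api/team",
    "investment": "/api/investments",
    "sector": "/api/sectors",
    "scrape": "/api/scrape",
}

def route_query(query: str) -> list[str]:
    """Return the endpoints triggered by keywords in the user's query."""
    lowered = query.lower()
    return [
        endpoint
        for keyword, endpoint in KEYWORD_ENDPOINTS.items()
        if keyword in lowered
    ]

def extract_urls(query: str) -> list[str]:
    """Pull any URLs out of the query, e.g. for the /api/scrape endpoint."""
    return re.findall(r"https?://\S+", query)
```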
Error Handling and Robustness:
The system includes robust error handling mechanisms to gracefully handle failures when retrieving documents, querying external APIs, or processing user queries. By implementing try/except blocks and error checks, the system ensures a smooth user experience and provides informative messages in case of errors.
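For example, each API call is wrapped so that a network failure degrades to a friendly message instead of crashing the chat loop (a sketch; the notebook's exact messages may differ):

```python
import requests

def safe_api_call(url: str) -> str:
    """Fetch JSON from an endpoint, returning a friendly message on failure."""
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return str(response.json())
    except requests.RequestException as exc:
        # Surface a readable message rather than raising inside the chat loop.
        return f"Sorry, I couldn't reach {url} right now ({exc})."
```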
API Tool Integration Methodology:
The system integrates with multiple API endpoints, including /ping, /api/documents/download, /api/team, /api/investments, /api/sectors, /api/consultations, and /api/scrape. Based on the user query or conversation history, the system dynamically extracts entities and calls the appropriate API endpoint to retrieve relevant data. The retrieved information is then used to generate context-aware responses that address the user’s query effectively.
Note: Please use a Google Colab instance with a GPU (e.g., a T4) for better performance. All code is tested and optimized for Google Colab only!
a. Install the Colab XTerm extension (for command-line support):
!pip install colab-xterm
%load_ext colabxterm
b. Launch an XTerm terminal within Colab:
%xterm
This opens a full-screen terminal window within your notebook.
c. Install and serve Ollama:
In the XTerm terminal, run:
curl https://ollama.ai/install.sh | sh
ollama serve &
(Check the server.log file for startup messages.)
d. Pull an AI Model (example using llama2):
ollama pull llama2
This downloads the model for use.
e. Verify the Ollama Installation:
!ollama --version
If you see the version number, your Ollama server is running correctly.
Run the following cell in Colab:
!pip install langchain_community faiss-cpu sentence-transformers requests flask pyngrok
Copy the full RAG system script (provided in the notebook) into a new cell and run it. The script will load and index the documents, then start an interactive chat session.
Note: Replace API_TOKEN and API_BASE_URL with your credentials. Type exit or quit to end the session.
Set Up ngrok for Colab:
!pip install flask pyngrok
"YOUR_NGROK_AUTH_TOKEN"
with your actual token):
from pyngrok import ngrok
ngrok.set_auth_token("YOUR_NGROK_AUTH_TOKEN")
Run the Flask App Cell:
Execute the cell containing the Flask app code. Once the documents are loaded and indexed, the app will start on port 5000 and ngrok will create a public tunnel. The output will display a public URL (e.g., https://your-ngrok-url.ngrok-free.app).
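For orientation, that cell has roughly this shape (a stripped-down sketch; generate_response is a stand-in for the notebook's full RAG pipeline):

```python
from flask import Flask, jsonify, request
from pyngrok import ngrok

app = Flask(__name__)

def generate_response(query: str) -> str:
    # Placeholder: the real app runs retrieval + the Ollama LLM here.
    return f"You asked: {query}"

@app.route("/chat", methods=["POST"])
def chat():
    query = request.get_json(force=True).get("query", "")
    return jsonify({"response": generate_response(query)})

# Open a public tunnel to port 5000, then start the app.
public_url = ngrok.connect(5000)
print(f"Public URL: {public_url}")
app.run(port=5000)
```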
Test the Flask Endpoint:
Send a POST request to the /chat endpoint using the public URL. For example, in a new Colab cell:
!curl -X POST "https://your-ngrok-url.ngrok-free.app/chat" -H "Content-Type: application/json" -d '{"query": "hello"}'
Replace https://your-ngrok-url.ngrok-free.app with the actual URL printed by ngrok.
Install Required Packages:
Navigate to the backend directory and install the required packages:
cd backend
npm install
Also, don’t forget to set up the .env file with the following content:
MONGO_URI=<your-mongo-uri>
PORT=3456
Start the Express Server:
From the backend directory, run:
npm start
The server will run on http://localhost:3456. Visiting it in your browser should show the API documentation in Swagger UI.
Test the API Endpoints:
You can now test the API endpoints using tools like Postman or cURL. For example:
curl http://localhost:3456/api/team
Integrate with the RAG System:
Update the Flask app to query the sample backend API endpoints for additional data. You can modify the /chat endpoint in the Flask app to call the sample backend API and enrich the responses with relevant information. Feel free to change the API as needed if you want it to return different data or support more operations.
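As a sketch of that enrichment step (the endpoint matches the sample backend; the keyword check and prompt format are illustrative):

```python
import requests

BACKEND_URL = "http://localhost:3456"

def enrich_with_team_data(query: str) -> str:
    """Append sample-backend team data to the prompt when relevant."""
    if "team" in query.lower() or "profile" in query.lower():
        team = requests.get(f"{BACKEND_URL}/api/team", timeout=30).json()
        return f"{query}\n\nRelevant team data: {team}"
    return query
```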
Below are some example interactions from the notebook (you can also verify these in the notebook’s console output, under the “RAG System” code section).
Hello
Hello! I'm your assistant here to help with information about PeakSpan MasterClasses, team profiles, investments, sectors, and more. How can I assist you today?
Tell me about PeakSpan
I consulted with James Isaacs recently. I forgot, which PeakSpan portfolio companies did James consult with recently?
My name is Charlie, I work for a company named Vizzo. What are you and what can you do?
I am an intelligent assistant designed to provide you with up-to-date information about PeakSpan MasterClasses, team profiles, investments, sectors, and related insights. I retrieve document-based context and external API data to help answer your questions accurately. How may I assist you today?
Can you tell me more about Scott Varner?
Assistant: Scott Varner is a Managing Partner at PeakSpan Capital, where he focuses on investments in the technology and software sectors. He brings over 20 years of experience in the industry to his role, having held various leadership positions at companies such as Microsoft, IBM, and Oracle.
Varner has a track record of success in building and scaling high-growth businesses, and he is known for his ability to identify and support promising startups and entrepreneurs. At PeakSpan Capital, he leads the firm’s investments in companies such as Calendly, Cameo, Doolittle, Flock Safety, GrowTech, Hive, Intricelabs, Jobber, and LinguaSnap, among others.
Varner is also recognized for his commitment to diversity and inclusion in the tech industry. He has been featured in numerous publications, including Forbes, Fortune, and TechCrunch, and he regularly speaks at industry events and conferences.
In addition to his investment work, Varner is also involved in various philanthropic initiatives, focusing on education and workforce development programs. He serves on the boards of several non-profit organizations and is a mentor to several startup founders and entrepreneurs.
And many more features can be tested interactively in the notebook!
Google Colab provides a free, cloud-based Jupyter notebook environment with GPU support, making it an ideal platform for running AI models, training neural networks, and executing computations that would be slow on most local machines.
I therefore chose Google Colab for this project to leverage its GPU capabilities, easy setup, and seamless integration with external APIs and services.
I have also tested the code on my macOS and Windows machines, with minor adjustments. However, performance was quite poor compared to Google Colab, so I recommend Google Colab for the best experience.
This repository also contains additional resources that you can use to teach yourself AI/ML! Feel free to explore the resources directory for more information.
These resources cover a wide range of topics, from textual analysis and data science pipelines to deep learning, neural networks, and representation learning for recommender systems. You can use these resources to enhance your knowledge and skills in AI/ML and apply them to real-world projects and applications.
Feel free to also check out my other GitHub projects for more AI/ML resources, tutorials, and code samples, or read my blog posts for insights on AI, machine learning, and SWE topics. I hope you find these resources helpful and informative as you continue your learning journey in AI/ML! 🚀
This RAG system for portfolio support in Google Colab demonstrates the integration of document retrieval, dynamic entity extraction, and external API calls to generate context-aware responses using a Hugging Face language model via Ollama. The system is designed to provide accurate and informative responses based on user queries and conversation history. By leveraging the power of AI models and external data sources, the system can assist users in accessing relevant information about PeakSpan MasterClasses, team profiles, investments, sectors, and more.
The system’s ability to maintain persistent memory, handle follow-up questions, and enrich responses with external API data makes it a valuable tool for portfolio management and information retrieval tasks. By combining document context, dynamic entity extraction, and API chaining, the system can generate comprehensive and context-aware responses that address user queries effectively.
Thank you for checking out this project today! 🙏 Happy coding! 🚀