In one of our projects, we combined Bubble and Make.com to iterate quickly on ChatGPT-powered scenarios. Our goal was to create a tool capable of searching across thousands of files spread over multiple folders and returning accurate, AI-powered answers.
The Challenge
We needed to:
- Allow users to define specific use-case scenarios
- Compare different RAG (Retrieval-Augmented Generation) strategies using ChatGPT and Pinecone
- Iterate quickly on prompts and responses
- Provide results via an intuitive interface
To solve this, we created a modular and flexible setup that supported various configurations and workflows.
The Solution
We built a simple user interface in Bubble with:
- A login system
- A scenario selector (6–7 different ChatGPT logic variations)
- A question input field
Once the user enters their query and selects a scenario, the process unfolds as follows:
- A webhook captures the data from Bubble.
- Make routes the request to:
  - ChatGPT, for response generation
  - Pinecone, a vector database, for retrieving contextually relevant content via embeddings
- The response is refined and post-processed.
- We store intermediate results in a data store.
- The flow iterates between ChatGPT and Pinecone until a final, optimized answer is produced (a plain-Python sketch of this loop follows the list).
- The final result is returned to Bubble and shown to the user.
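The production flow is wired together from Make.com modules rather than code, but the equivalent logic looks roughly like the sketch below. Everything specific in it is an assumption made for illustration: the index name `docs`, the `metadata["text"]` field, the model names, the webhook payload shape, and the `SEARCH:` convention the model uses to request another retrieval round (the real stopping rule lived inside the Make scenario).

```python
import os

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("docs")  # hypothetical index holding the file chunks

MAX_ROUNDS = 3  # iteration cap; the real flow's stopping rule is our assumption


def retrieve(query: str, top_k: int = 5) -> str:
    """Embed the query and pull the closest document chunks from Pinecone."""
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small",  # placeholder embedding model
        input=query,
    )
    res = index.query(vector=emb.data[0].embedding, top_k=top_k, include_metadata=True)
    # Assumes each vector was upserted with its source text under metadata["text"].
    return "\n---\n".join(m.metadata["text"] for m in res.matches)


def answer(question: str) -> str:
    """Loop between Pinecone and ChatGPT until the model stops asking for context."""
    query = question
    for _ in range(MAX_ROUNDS):
        context = retrieve(query)
        resp = openai_client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder chat model
            messages=[{
                "role": "user",
                "content": (
                    "Answer the question using only the excerpts below. If they are "
                    "insufficient, reply with exactly 'SEARCH: <a better search query>'.\n\n"
                    f"Excerpts:\n{context}\n\nQuestion: {question}"
                ),
            }],
        )
        text = resp.choices[0].message.content.strip()
        if not text.startswith("SEARCH:"):
            return text  # final answer; Make would hand this back to Bubble
        query = text.removeprefix("SEARCH:").strip()  # refine the retrieval query
    return "No confident answer after several retrieval rounds."


def handle_webhook(payload: dict) -> str:
    """Stand-in for the Make.com webhook trigger; Bubble posts question + scenario."""
    return answer(payload["question"])
```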
Why Make.com Was Critical
Make.com allowed us to:
- Run multiple iteration loops between Pinecone and ChatGPT
- Build alternative logic paths (different prompt strategies or retrieval techniques)
- Easily switch between scenarios and compare outputs
- Deploy new workflows quickly, without heavy custom coding
End users could simply choose a scenario, ask the same question, and compare how the system responds using different logic setups.
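To give a concrete picture of what a "scenario" amounts to, here is one way the logic variations could be parameterised: a prompt template plus retrieval and sampling settings. The scenario names, templates, and values below are invented for illustration; in the real setup each variation was a separate route inside Make.

```python
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@dataclass(frozen=True)
class Scenario:
    """One selectable 'logic variation' (names and values are illustrative)."""
    prompt_template: str  # how retrieved context and the question are framed
    top_k: int            # how many Pinecone matches the retrieval step would request
    temperature: float    # sampling temperature for ChatGPT


SCENARIOS = {
    "strict-grounded": Scenario(
        prompt_template=(
            "Answer using ONLY the excerpts below; say so if they are insufficient.\n\n"
            "Excerpts:\n{context}\n\nQuestion: {question}"
        ),
        top_k=3,
        temperature=0.0,
    ),
    "broad-summary": Scenario(
        prompt_template=(
            "Using the excerpts below as background, give a concise, synthesised "
            "answer.\n\nExcerpts:\n{context}\n\nQuestion: {question}"
        ),
        top_k=8,
        temperature=0.7,
    ),
}


def run_scenario(name: str, question: str, context: str) -> str:
    """Run one scenario; `context` would come from a Pinecone query using its top_k."""
    s = SCENARIOS[name]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=s.temperature,
        messages=[{"role": "user",
                   "content": s.prompt_template.format(context=context,
                                                       question=question)}],
    )
    return resp.choices[0].message.content


def compare(question: str, context: str) -> dict[str, str]:
    """Ask every scenario the same question, side by side, as users did from Bubble."""
    return {name: run_scenario(name, question, context) for name in SCENARIOS}
```

In the real setup, switching scenarios was a router branch in Make rather than a dictionary lookup, but the comparison idea is the same.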
This setup helped us benchmark retrieval strategies, evaluate prompt variations, and deliver a fast and modular tool for internal testing or client use.