The Accountability Partner You Can Eat With
Foodbuddy aimed to solve the hardest part of dieting: consistency. Most trackers require tedious manual entry; Foodbuddy used early multimodal AI models to let users simply "snap and track". But beyond tracking, it was about accountability.
"People don't need another calculator. They need a friend who gently judges their late-night pizza."
Core Features
- Snap-to-Track: Leveraging GPT-4 Vision (and later Gemini Pro Vision), users could upload a photo of their meal. The AI estimated calories, macros, and ingredients with surprising accuracy.
- The AI Coach: A personality-driven chatbot that would comment on your meals, supportive ("Great salad!") or sassy ("Is that your third cookie?"); a prompt sketch follows this list.
- Social Circles: Small, private groups where friends could see each other's logs—not the specific calories, but a "Health Score" for the day, fostering positive peer pressure.
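The coach's reactions map naturally onto a single chat-completion call that wraps the meal analysis in a personality prompt. The sketch below is illustrative rather than the original implementation: it assumes the OpenAI Node SDK, the coachAgent and reactToMeal names mirror the backend code shown later, and the prompt wording and model choice are assumptions.

// coachAgent.js — illustrative sketch, not the original implementation.
// Assumes the OpenAI Node SDK (v4); model name and prompt are placeholders.
import OpenAI from "openai";

const openai = new OpenAI();

const SYSTEM_PROMPT =
  "You are Foodbuddy, a friendly accountability coach. " +
  "React to the user's meal in one or two sentences. " +
  "Be supportive about healthy choices and gently sassy about junk food.";

// Generate a short, personality-driven reaction to an analyzed meal.
const reactToMeal = async (userId, analysis) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4", // placeholder; any chat model works for text-only reactions
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: `Meal: ${analysis.ingredients.join(", ")}. ` +
          `Calories: ${analysis.calories}. Macros: ${JSON.stringify(analysis.macros)}.`
      }
    ],
    max_tokens: 80
  });

  const reaction = completion.choices[0].message.content;
  // Delivery (push notification, in-app message, etc.) is out of scope here.
  return { userId, reaction };
};

export { reactToMeal };

Keeping the personality in a system prompt means "supportive" versus "sassy" is a one-line configuration change rather than new code.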
Backend Architecture
We hosted the core services on a GCP e2-micro instance to keep costs near zero. The challenge was handling the heavy lifting of image analysis, which a machine that small could only orchestrate by calling out to hosted vision models.
// Handling the image upload and analysis queue
// (aiService, db, and coachAgent are local modules defined elsewhere in the codebase)
const processImageQueue = async (imageUrl, userId) => {
  // Analyze the meal photo with the vision model (the expensive step)
  const analysis = await aiService.analyzeFood(imageUrl);

  // Store nutritional data for the user's daily log
  await db.saveLog({
    userId,
    calories: analysis.calories,
    macros: analysis.macros,
    image: imageUrl
  });

  // Trigger the accountability agent so the coach can react to the meal
  await coachAgent.reactToMeal(userId, analysis);
};
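For reference, aiService.analyzeFood was the expensive step in that queue. Below is a minimal sketch of what such a call could look like, assuming the OpenAI Node SDK and the 2023-era gpt-4-vision-preview model; the prompt, the expected JSON shape, and the parsing are illustrative assumptions, not the original code.

// aiService.js — minimal sketch of the vision call, assuming the OpenAI Node SDK (v4)
// and the 2023-era gpt-4-vision-preview model; prompt and JSON shape are illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

// Ask the vision model to estimate calories, macros, and ingredients from a meal photo.
const analyzeFood = async (imageUrl) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-vision-preview",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              "Estimate this meal's nutrition. Reply with JSON only: " +
              '{"calories": number, "macros": {"protein_g": number, "carbs_g": number, "fat_g": number}, "ingredients": [string]}'
          },
          { type: "image_url", image_url: { url: imageUrl } }
        ]
      }
    ],
    max_tokens: 300
  });

  // The model is prompted to return JSON; parsing can still fail, so callers should handle errors.
  return JSON.parse(completion.choices[0].message.content);
};

export { analyzeFood };

Prompting the model to reply with JSON only keeps the downstream saveLog call simple, though in practice the parse can fail and needs a fallback.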
PART III: POST-MORTEM
Why It's Unmaintained
While the prototype was functional and the "snap-to-track" feature felt magical, the unit economics of sending every meal image through a high-end LLM were hard to sustain for a free side project. Additionally, the latency of vision models in 2023 was a real friction point.
However, the lessons learned here directly influenced the "Context Engineering" work in later projects like FileOps.