Vue 3 + AI: Building a Content Recommender
Imagine your Vue 3 blog suggesting personalized articles based on what users read—powered by AI. In this post, we'll build a simple content recommender with Vue 3 and OpenAI's GPT (Axios is installed up front so you can call your own backend later). You'll learn how to track what users read, generate suggestions, and surface them beautifully in your Vue app.
🧠 Why Use AI for Content Recommendations?
- Improves engagement by serving relevant content
- Helps users discover hidden or older content
- Boosts time-on-site and repeat visits
⚙️ Tech Stack & Setup
- Vue 3 with Composition API
- OpenAI GPT-3.5/4 for generating recommendations
- Axios for API calls
- Basic backend (optional) for API key security
Step 1: Create a Vue 3 Project
```bash
npm create vue@latest   # name the project "my-recommender" when prompted
cd my-recommender
npm install axios openai
npm run dev
```
Step 2: Track User Content
Capture the ID or title of the last-read article:
```js
import { ref } from 'vue'

const lastRead = ref(null)

function markRead(article) {
  lastRead.value = article.id // or article.title
}
Step 3: Generate AI Recommendations
```js
import { OpenAI } from 'openai'

// For quick local testing only — in production, keep the key on a backend.
// (With Vite, client-side env vars must be prefixed VITE_ and are read via
// import.meta.env, not process.env.)
const openai = new OpenAI({
  apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true, // required to run the SDK in the browser
})

async function getSuggestions(promptText) {
  const res = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: promptText }],
  })
  // One suggestion per line; drop empty lines
  return res.choices[0].message.content.split('\n').filter(Boolean)
}
```
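In practice the model often replies with a numbered list ("1. Title…"), so splitting on newlines alone leaves list markers in your UI. Here's a small normalizing parser—`parseSuggestions` is my own sketch, not part of the OpenAI SDK—that strips markers and surrounding quotes:

```javascript
// Illustrative helper: turn a model reply like "1. Foo\n2. \"Bar\"" into
// clean titles. Strips leading "1." / "-" / "*" markers, outer quotes,
// and empty lines.
function parseSuggestions(text) {
  return text
    .split('\n')
    .map(line =>
      line
        .replace(/^\s*(?:\d+[.)]|[-*])\s*/, '') // drop list markers
        .replace(/^["']|["']$/g, '')            // drop surrounding quotes
        .trim()
    )
    .filter(Boolean)
}
```

You could call it as `parseSuggestions(res.choices[0].message.content)` in place of the raw `split('\n')`.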
Step 4: Wire it Up in Vue
```vue
<template>
  <div>
    <h3>You might also like:</h3>
    <ul>
      <li v-for="item in suggestions" :key="item">{{ item }}</li>
    </ul>
  </div>
</template>

<script setup>
import { ref, watch } from 'vue'
import { getSuggestions } from '@/utils/openai'

const lastRead = ref('Getting started with Vue 3')
const suggestions = ref([])

// immediate: true also fetches suggestions for the initial value,
// not just on subsequent changes
watch(lastRead, async (val) => {
  suggestions.value = await getSuggestions(`Suggest 5 related blog posts based on "${val}"`)
}, { immediate: true })
</script>
```
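One more defensive touch: if the API call fails (rate limit, network), an unhandled rejection inside the watcher leaves `suggestions` stale. A tiny wrapper—`withFallback` is my name for this sketch, not from any library—lets the UI degrade gracefully:

```javascript
// Illustrative sketch: resolve to a fallback value instead of throwing,
// so a failed recommendation fetch never breaks the component.
async function withFallback(promise, fallback = []) {
  try {
    return await promise
  } catch (err) {
    console.warn('Recommendation fetch failed:', err.message)
    return fallback
  }
}
```

In the watcher you'd then write `suggestions.value = await withFallback(getSuggestions(...))`.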
🖌️ Styling the Recommendations
```css
ul { list-style: none; padding: 0; }
li { background: #f1f3f4; margin: 0.5em 0; padding: 0.75em; border-radius: 4px; }
```
Best Practices & Tips
- 🛡️ **Cache results** to reduce API calls and speed up UX
- 🔐 **Secure your API key** via a backend or serverless function
- 📏 **Control prompt quality** by specifying expected format (e.g., JSON, bullet list)
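The caching tip above can be sketched in a few lines. This in-memory memoizer is an assumption of mine (the fetcher signature and the 10-minute TTL are illustrative, not from the post), but it shows the idea—identical prompts within the TTL never hit the API twice:

```javascript
// Illustrative sketch: wrap an async fetcher with an in-memory TTL cache.
// Repeated calls with the same key reuse the stored result until it expires.
function cached(fetcher, ttlMs = 10 * 60 * 1000) {
  const store = new Map()
  return async (key) => {
    const hit = store.get(key)
    if (hit && Date.now() - hit.at < ttlMs) return hit.value // cache hit
    const value = await fetcher(key)
    store.set(key, { value, at: Date.now() })
    return value
  }
}
```

Usage might look like `const getCachedSuggestions = cached(getSuggestions)`. For persistence across page loads, you could back the `Map` with `localStorage` instead.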
🚀 Try it yourself! Drop the title of your latest article in the comments and I'll suggest improvements or usage examples.
Frequently Asked Questions
Q: Can I use this in production?
A: Yes—ideal for blogs, knowledge bases, or documentation sites. Just secure your API key and handle rate limits.
Q: What about API costs?
A: GPT-3.5 has low per-request cost. Cache and batch calls to save money.
Q: Do I need a backend?
A: You can fetch GPT directly from the client for testing, but use a backend or serverless endpoint for production key security.