feat: add global support chat #1836
base: main
Conversation
Please sign your commits.
📝 Walkthrough

This PR introduces a complete Global Support Chat demo application showcasing Lingo.dev's translation capabilities. The Next.js 16 project features a real-time multilingual chat interface with multiple concurrent support sessions, message translation workflows, optional Ollama-based AI response generation, and language-specific UI components. The application demonstrates both translation and AI integration patterns.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant UI as React UI
    participant Handler as page.tsx
    participant TranslateAPI as /api/translate
    participant OllamaAPI as /api/ollama
    participant Display as ChatWindow

    User->>UI: Type message in English
    UI->>Handler: handleSendMessage(text)
    Handler->>Handler: Create agent message
    Handler->>TranslateAPI: Translate to user locale
    TranslateAPI-->>Handler: Return translated text
    Handler->>Display: Update with translated agent msg
    Handler->>Handler: simulateUserReply()
    Handler->>Handler: Show typing indicator
    alt AI Enabled
        Handler->>OllamaAPI: Send history + locale
        OllamaAPI-->>Handler: Generate reply in locale
    else Hardcoded
        Handler->>Handler: Select mock response
    end
    Handler->>TranslateAPI: Translate reply to English
    TranslateAPI-->>Handler: Return English translation
    Handler->>Display: Show user message with original
```
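For orientation, here is a rough TypeScript sketch of the data shapes this flow implies, pieced together from the snippets quoted later in the review (`m.sender`, `m.text`, `originalText`, `session.userName`, `session.avatar`); the actual interfaces in the PR may differ.

```typescript
// Assumed shapes only, reconstructed from the review snippets rather than the PR source.
type Sender = 'user' | 'agent';

interface Message {
  id: string;
  sender: Sender;
  text: string;           // English text shown on the agent side
  originalText?: string;  // foreign-language counterpart (translation or original)
}

interface ChatSession {
  id: string;
  userName: string;
  avatar?: string;
  locale: string;         // customer's locale, e.g. 'es' or 'ja'
  messages: Message[];
}
```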
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@community/global-support-chat/app/page.tsx`:
- Around line 159-179: The closure captures a stale sessions array so the
history sent to Ollama may miss the just-added agent message; fix by passing the
up-to-date messages into the AI-calling flow instead of reading sessions inside
the async closure—update the call site that invokes simulateUserReply (or
whatever triggers the Ollama branch) to pass the latest session.messages (or use
the callback form of setState in updateSessionMessages to obtain the updated
messages) and then use that passed messages array to build history (replace
sessions.find(...) usage) before calling fetch('/api/ollama'); this ensures
sessionId, the updated messages, and locale are used directly rather than
relying on the stale sessions variable.
- Around line 128-146: The fetch handling for the translation call doesn't
verify HTTP status before parsing JSON; update the try block that calls
fetch('/api/translate') (the res variable) so you check res.ok and handle
non-2xx responses (log the status/error, avoid calling res.json() or parse error
body safely) before calling updateMessageContent(currentSessionId, agentMsgId,
...); apply the same res.ok/status-check + safe parsing pattern to the other
fetch usages that currently call res.json() directly (the other
translation/fetch blocks that update message content).
In `@community/global-support-chat/README.md`:
- Line 28: Replace the bare URL in the README sentence "Open
http://localhost:3000." with a markdown link by changing it to use link syntax
(e.g., "Open [http://localhost:3000](http://localhost:3000)." or a descriptive
text like "Open [http://localhost:3000](http://localhost:3000).") so
markdownlint no longer flags the bare URL; update the line containing "Open
http://localhost:3000." accordingly.
- Around line 40-42: The fenced code block that shows environment variables
currently lacks a language tag; update the triple-backtick fence containing
"LINGO_API_KEY=your_key_here" to include the "env" language (i.e., change ``` to
```env) so the block is properly marked as an environment file and avoids MD040
and improves rendering in the README.md.
🧹 Nitpick comments (9)
community/global-support-chat/components/AIToggle.tsx (1)
12-33: Add `aria-pressed` to the toggle buttons for accessibility. These buttons act like a segmented control; exposing pressed state improves screen reader UX.
♻️ Suggested change
```diff
-      <button
+      <button
         onClick={() => onToggle(false)}
+        aria-pressed={!enabled}
         className={`flex items-center gap-2 px-3 py-1.5 rounded-md text-sm font-medium transition-all ${
           !enabled
             ? 'bg-white dark:bg-zinc-700 shadow-sm text-gray-900 dark:text-white'
             : 'text-gray-500 hover:text-gray-700 dark:text-gray-400'
         }`}
       >
         <Quote className="w-4 h-4" />
         Hardcoded
       </button>
       <button
         onClick={() => onToggle(true)}
+        aria-pressed={enabled}
         className={`flex items-center gap-2 px-3 py-1.5 rounded-md text-sm font-medium transition-all ${
           enabled
             ? 'bg-blue-600 shadow-sm text-white'
             : 'text-gray-500 hover:text-gray-700 dark:text-gray-400'
         }`}
       >
         <Bot className="w-4 h-4" />
         AI Responses (Ollama)
       </button>
```

community/global-support-chat/app/api/translate/route.ts (1)
38-56: Consider adding input validation for robustness. The request body is destructured without validation. If `text`, `sourceLocale`, or `targetLocale` are missing or malformed, the SDK call or fallback logic may behave unexpectedly.
💡 Suggested validation
```diff
 export async function POST(request: Request) {
   const { text, sourceLocale, targetLocale } = await request.json();

+  if (!text || !sourceLocale || !targetLocale) {
+    return NextResponse.json(
+      { error: 'Missing required fields: text, sourceLocale, targetLocale' },
+      { status: 400 }
+    );
+  }
+
   const apiKey = process.env.LINGO_API_KEY;
```

community/global-support-chat/components/ChatSidebar.tsx (1)
36-44: Consider using `next/image` for avatar optimization. Using the native `<img>` tag works, but Next.js's `<Image>` component provides automatic optimization, lazy loading, and prevents layout shift.
💡 Optional improvement with next/image
```diff
+import Image from 'next/image';
 import { User, MessageSquare } from 'lucide-react';
```

```diff
       {session.avatar ? (
-        <img
+        <Image
           src={session.avatar}
           alt={session.userName}
+          width={40}
+          height={40}
           className="w-full h-full rounded-full object-cover"
         />
       ) : (
```

community/global-support-chat/app/api/ollama/route.ts (3)
29-42: Add a timeout to prevent indefinite hangs. The fetch to Ollama has no timeout. If Ollama is slow or unresponsive, this request will hang indefinitely, potentially exhausting server resources.
💡 Add AbortController timeout
```diff
   try {
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
+
     const res = await fetch('http://localhost:11434/api/generate', {
       method: 'POST',
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({ model: 'qwen3:4b', prompt, stream: false }),
+      signal: controller.signal,
     });
+    clearTimeout(timeoutId);

     if (!res.ok) {
       throw new Error(`Ollama status: ${res.status}`);
     }
```
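If the project only needs to run on Node 18.17+ (the usual floor for a recent Next.js app), `AbortSignal.timeout()` achieves the same effect without managing a controller manually. A minimal variant of the suggestion above; `callOllama` is a hypothetical wrapper name, and `prompt` is the same string the route already builds:

```typescript
// Alternative timeout using AbortSignal.timeout (Node 18.17+ / modern browsers).
// The fetch rejects with a TimeoutError after 30 seconds.
async function callOllama(prompt: string): Promise<Response> {
  return fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'qwen3:4b', prompt, stream: false }),
    signal: AbortSignal.timeout(30_000),
  });
}
```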
47-52: Error response returns HTTP 200, which may mask failures. Returning a mock response with status 200 when Ollama fails could confuse client-side error handling. Consider returning a non-2xx status or including an `error` field that clients can check.
💡 Alternative: return error status or flag
```diff
   } catch (error) {
     console.error('Ollama Error:', error);
-    return NextResponse.json({
-      reply: `(AI Error: Ensure Ollama is running. Mock response in ${locale})`
-    });
+    return NextResponse.json(
+      {
+        reply: `(AI Error: Ensure Ollama is running. Mock response in ${locale})`,
+        error: true
+      },
+      { status: 503 }
+    );
   }
```
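If the route is changed to return a 503 plus an `error` flag as suggested, the client branch that calls `/api/ollama` (quoted further down in this review) could check for it explicitly. A hedged sketch of that try-block body, reusing the `history`, `locale`, and `replyTextForeign` names from that snippet:

```typescript
// Client-side counterpart to the 503 + error-flag suggestion above.
const res = await fetch('/api/ollama', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ history, locale }),
});
const data = await res.json();
if (!res.ok || data.error) {
  // Surface the failure instead of treating the mock reply as a normal answer.
  console.error('Ollama request failed', res.status);
  replyTextForeign = 'Error calling AI. (Check console)';
} else {
  replyTextForeign = data.reply;
}
```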
25-25: Minor: Use proper typing instead of `any`. The `any` type on the history mapping reduces type safety.
💡 Add type annotation
```diff
-${history.map((m: any) => `${m.role === 'agent' ? 'Support Agent' : 'Customer'}: ${m.content}`).join('\n')}
+${history.map((m: { role: 'user' | 'agent'; content: string }) => `${m.role === 'agent' ? 'Support Agent' : 'Customer'}: ${m.content}`).join('\n')}
```

community/global-support-chat/components/ChatInput.tsx (1)
30-37: Consider adding an accessible label for screen readers. The input relies solely on `placeholder` for context. Screen readers may not announce placeholders consistently. Adding a visually-hidden label improves accessibility.
💡 Add sr-only label
```diff
     <div className="flex gap-2 relative">
+      <label htmlFor="chat-input" className="sr-only">
+        Type message in English
+      </label>
       <input
+        id="chat-input"
         type="text"
         value={input}
```

community/global-support-chat/components/ChatWindow.tsx (1)
16-18: Consider using `session?.messages.length` in the dependency array. The current dependency `session?.messages` relies on reference equality. If the messages array is mutated in place (rather than replaced), the effect won't trigger. Using `.length` as a proxy is more reliable.
💡 More robust dependency
```diff
   useEffect(() => {
     scrollToBottom();
-  }, [session?.messages, isTyping]);
+  }, [session?.messages.length, isTyping]);
```

community/global-support-chat/app/page.tsx (1)
117-125: Potential message ID collision with `Date.now()`. Using `Date.now().toString()` for message IDs (lines 117, 221) can cause collisions if two messages are created within the same millisecond. Consider using a UUID or combining the timestamp with a random suffix.
🔧 Suggested improvement
```diff
-  const agentMsgId = Date.now().toString();
+  const agentMsgId = `${Date.now()}-${Math.random().toString(36).substring(2, 9)}`;
```

Apply the same pattern at line 221 for user message IDs.
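Another option, if the target runtimes allow it, is `crypto.randomUUID()`, which is available in modern browsers (secure contexts) and recent Node versions; a one-line sketch:

```typescript
// Collision-free message ID without a timestamp suffix (assumes a modern runtime / secure context).
const agentMsgId = crypto.randomUUID();
```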
```tsx
    try {
      const res = await fetch('/api/translate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text,
          sourceLocale: 'en',
          targetLocale,
        }),
      });
      const data = await res.json();
      if (data.translatedText) {
        updateMessageContent(currentSessionId, agentMsgId, {
          originalText: data.translatedText, // Foreign translation stored here
        });
      }
    } catch (error) {
      console.error('Translation failed', error);
    }
```
Missing response status check before parsing JSON.
The code calls res.json() without first checking res.ok. If the API returns a non-2xx status, res.json() might throw or return unexpected data.
🔧 Suggested improvement
```diff
       const res = await fetch('/api/translate', {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
         body: JSON.stringify({
           text,
           sourceLocale: 'en',
           targetLocale,
         }),
       });
+      if (!res.ok) {
+        throw new Error(`Translation failed: ${res.status}`);
+      }
       const data = await res.json();
```

Apply similar checks at lines 169-174 and 202-211.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
    try {
      const res = await fetch('/api/translate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text,
          sourceLocale: 'en',
          targetLocale,
        }),
      });
      if (!res.ok) {
        throw new Error(`Translation failed: ${res.status}`);
      }
      const data = await res.json();
      if (data.translatedText) {
        updateMessageContent(currentSessionId, agentMsgId, {
          originalText: data.translatedText, // Foreign translation stored here
        });
      }
    } catch (error) {
      console.error('Translation failed', error);
    }
```
🤖 Prompt for AI Agents
In `@community/global-support-chat/app/page.tsx` around lines 128 - 146, The fetch
handling for the translation call doesn't verify HTTP status before parsing
JSON; update the try block that calls fetch('/api/translate') (the res variable)
so you check res.ok and handle non-2xx responses (log the status/error, avoid
calling res.json() or parse error body safely) before calling
updateMessageContent(currentSessionId, agentMsgId, ...); apply the same
res.ok/status-check + safe parsing pattern to the other fetch usages that
currently call res.json() directly (the other translation/fetch blocks that
update message content).
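Since the same `res.ok` check and JSON parsing are needed at several call sites (the prompt above lists them), one way to apply the pattern consistently is a small shared helper. A minimal sketch; `postJson` is a hypothetical name, not something in the PR, and it assumes all of these routes speak JSON:

```typescript
// Hypothetical shared helper that centralizes the res.ok check and JSON parsing
// the review asks for at each fetch call site.
async function postJson<T>(url: string, body: unknown): Promise<T> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    // Read the body as text so a non-JSON error page doesn't throw during parsing.
    const detail = await res.text().catch(() => '');
    throw new Error(`${url} failed with status ${res.status}: ${detail}`);
  }
  return res.json() as Promise<T>;
}

// Example usage at the translation call site shown above:
// const data = await postJson<{ translatedText: string }>('/api/translate', {
//   text, sourceLocale: 'en', targetLocale,
// });
```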
```tsx
    if (useAI) {
      // Use Ollama
      // We should ideally send context, but for demo just the last message or so
      // Getting context from current state might be tricky inside async specific closure if state updates.
      // But 'sessions' in closure is stale. We need to fetch fresh or assume for now.
      // Ideally we'd pass history.
      const session = sessions.find((s) => s.id === sessionId);
      const history = session?.messages.map(m => ({ role: m.sender, content: m.text })).slice(-5) || [];

      try {
        const res = await fetch('/api/ollama', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ history, locale }),
        });
        const data = await res.json();
        replyTextForeign = data.reply;
      } catch (e) {
        console.error('Ollama failed', e);
        replyTextForeign = "Error calling AI. (Check console)";
      }
```
Stale closure: sessions may not reflect the latest agent message.
The sessions variable captured in this closure won't include the agent message just added via updateSessionMessages. The comment on lines 162-164 acknowledges this. The history sent to Ollama will be missing the most recent agent message context.
For a demo app this is acceptable, but for production consider using a ref or passing the updated session data as a parameter.
🔧 Potential fix using callback form of setState
One approach is to pass the latest messages when calling simulateUserReply:
```diff
-      // 3. Trigger simulated user reply
-      simulateUserReply(currentSessionId, targetLocale);
+      // 3. Trigger simulated user reply with current messages for context
+      simulateUserReply(currentSessionId, targetLocale, [...(activeSession.messages), newMsg]);
```

Then update the function signature and use the passed messages for history.
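Following that approach, the function body could build `history` from the messages argument instead of the stale `sessions` state. A sketch that reuses the mapping already shown in the quoted snippet; the `Message` shape is the assumed one from earlier in this review, and only the extra parameter is new:

```typescript
// Sketch: history is built from the messages passed by the caller,
// not from the stale `sessions` array captured by the closure.
const simulateUserReply = async (
  sessionId: string,
  locale: string,
  latestMessages: Message[], // includes the just-added agent message
) => {
  const history = latestMessages
    .map((m) => ({ role: m.sender, content: m.text }))
    .slice(-5);

  const res = await fetch('/api/ollama', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ history, locale }),
  });
  // ...the rest of the reply handling stays as in the PR.
};
```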
🤖 Prompt for AI Agents
In `@community/global-support-chat/app/page.tsx` around lines 159 - 179, The
closure captures a stale sessions array so the history sent to Ollama may miss
the just-added agent message; fix by passing the up-to-date messages into the
AI-calling flow instead of reading sessions inside the async closure—update the
call site that invokes simulateUserReply (or whatever triggers the Ollama
branch) to pass the latest session.messages (or use the callback form of
setState in updateSessionMessages to obtain the updated messages) and then use
that passed messages array to build history (replace sessions.find(...) usage)
before calling fetch('/api/ollama'); this ensures sessionId, the updated
messages, and locale are used directly rather than relying on the stale sessions
variable.
```bash
pnpm dev
```

Open http://localhost:3000.
Replace bare URL with a markdown link.
Markdownlint flags bare URLs; wrapping improves readability.
✏️ Suggested change
```diff
- Open http://localhost:3000.
+ Open [http://localhost:3000](http://localhost:3000).
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Open [http://localhost:3000](http://localhost:3000).
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
28-28: Bare URL used
(MD034, no-bare-urls)
🤖 Prompt for AI Agents
In `@community/global-support-chat/README.md` at line 28, Replace the bare URL in
the README sentence "Open http://localhost:3000." with a markdown link by
changing it to use link syntax (e.g., "Open
[http://localhost:3000](http://localhost:3000)." or a descriptive text like
"Open [http://localhost:3000](http://localhost:3000).") so markdownlint no
longer flags the bare URL; update the line containing "Open
http://localhost:3000." accordingly.
```
LINGO_API_KEY=your_key_here
```
Add a language to the fenced code block.
Specifying a language avoids MD040 and improves rendering.
✏️ Suggested change
````diff
-```
+```env
 LINGO_API_KEY=your_key_here
 ```
````

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```env
LINGO_API_KEY=your_key_here
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
40-40: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@community/global-support-chat/README.md` around lines 40 - 42, The fenced
code block that shows environment variables currently lacks a language tag;
update the triple-backtick fence containing "LINGO_API_KEY=your_key_here" to
include the "env" language (i.e., change ``` to ```env) so the block is properly
marked as an environment file and avoids MD040 and improves rendering in the
README.md.
Summary
Added "Global Support Chat", a full-stack Next.js demo application that showcases real-time multilingual customer support simulation using the Lingo.dev SDK and AI-powered responses via Ollama.
Changes
- `community/global-support-chat` using Next.js 16 and Tailwind CSS v4.
- `app/api/translate` using `LingoDotDevEngine` for real-time text translation.
- `app/api/ollama` to simulate realistic customer roleplay in multiple foreign languages.

Testing
Business logic tests added:
Visuals
Checklist
- … (`community/` directory)

Closes #1761
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.