
Conversation


@PraveenRepswal PraveenRepswal commented Jan 22, 2026

Summary

Added "Global Support Chat", a full-stack Next.js demo application that showcases real-time multilingual customer support simulation using the Lingo.dev SDK and AI-powered responses via Ollama.

Changes

  • New Project: Created community/global-support-chat using Next.js 16 and Tailwind CSS v4.
  • Lingo Integration: Implemented app/api/translate using LingoDotDevEngine for real-time text translation.
  • AI Simulation: Integrated local LLM (Ollama) support via app/api/ollama to simulate realistic customer roleplay in multiple foreign languages.
  • UI/UX: Built a modern, dark-mode capable support dashboard with a dynamic language selector (supporting 10+ languages) and automated "typing" simulation.

Testing

Business logic tests added:

  • [Describe test 1 - what behavior it validates]
  • [Describe test 2 - what edge case it covers]
  • All tests pass locally

Visuals

Attached: Screenshot 2026-01-23 024709 and Screenshot 2026-01-23 022858.

Checklist

  • Changeset added (if version bump needed)
  • Tests cover business logic (manual verification completed)
  • No breaking changes (Isolated in community/ directory)

Closes #1761

Summary by CodeRabbit

  • New Features
    • Global Support Chat Demo: real-time multilingual support dashboard with multiple concurrent chat sessions
    • Real-time translation between agent and customer communications
    • Optional AI-powered customer responses with ability to toggle between AI and hardcoded reply modes
    • Responsive chat interface with session management, language switching, and message history


@sumitsaurabh927 (Contributor) commented

Hi @PraveenRepswal

Please sign your commits.


coderabbitai bot commented Jan 24, 2026

📝 Walkthrough


This PR introduces a complete Global Support Chat demo application showcasing Lingo.dev's translation capabilities. The Next.js 16 project features a real-time multilingual chat interface with multiple concurrent support sessions, message translation workflows, optional Ollama-based AI response generation, and language-specific UI components. The application demonstrates both translation and AI integration patterns.

Changes

| Cohort / File(s) | Change Summary |
|---|---|
| **Project Configuration**<br>`.changeset/better-readers-tell.md`, `community/global-support-chat/next.config.ts`, `community/global-support-chat/tsconfig.json`, `community/global-support-chat/postcss.config.mjs`, `community/global-support-chat/package.json`, `community/global-support-chat/.gitignore` | Boilerplate configuration for a Next.js 16 project with TypeScript, Tailwind CSS v4, PostCSS, and standard ignore patterns. Declares dependencies on the Lingo.dev SDK, React, Tailwind, lucide-react, and jsdom. |
| **Type Definitions & Layout**<br>`community/global-support-chat/types.ts`, `community/global-support-chat/app/layout.tsx`, `community/global-support-chat/app/globals.css` | Exports Message and ChatSession TypeScript types for structured chat data. Root layout with metadata, plus global CSS defining theme variables for light/dark mode with Tailwind integration. |
| **Core Application Page**<br>`community/global-support-chat/app/page.tsx` | Main chat interface (~291 lines) managing session state, message handling, typing indicators, a per-session AI toggle, and translation workflows. Integrates /api/translate for message translation and /api/ollama (or hardcoded responses) for simulated user replies. |
| **API Routes**<br>`community/global-support-chat/app/api/translate/route.ts`, `community/global-support-chat/app/api/ollama/route.ts` | POST endpoint for Lingo.dev SDK translation with fallback to a hardcoded FALLBACKS dictionary. Ollama integration route with local model inference and an error-based mock-reply fallback. Both include error handling and logging. |
| **React Components**<br>`community/global-support-chat/components/ChatSidebar.tsx`, `community/global-support-chat/components/ChatWindow.tsx`, `community/global-support-chat/components/ChatInput.tsx`, `community/global-support-chat/components/AIToggle.tsx` | Modular UI components: sidebar session list with unread counts and locale badges; message window with auto-scroll and bilingual text display; input field with send validation; toggle switch between hardcoded and Ollama response modes. |
| **Documentation**<br>`community/global-support-chat/README.md` | Comprehensive guide covering the demo's purpose, architecture (Next.js, Tailwind, Lingo.dev, Ollama), getting-started instructions (setup, local run, API key configuration), and tech stack overview. |
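
Based on the walkthrough above, the Message and ChatSession types in types.ts might look roughly like the following. This is an illustrative sketch only; the field names are assumptions inferred from the described UI (bilingual display, unread counts, locale badges), not the PR's actual definitions:

```typescript
// Hypothetical sketch of the chat data types described in the walkthrough.
// Field names are guesses for illustration, not copied from the PR's types.ts.
export type Message = {
  id: string;
  sender: 'agent' | 'user';
  text: string;            // text as displayed (agent: English, customer: their locale)
  originalText?: string;   // the other-language counterpart, filled in after translation
  timestamp: number;
};

export type ChatSession = {
  id: string;
  userName: string;
  locale: string;          // e.g. 'es', 'fr' — the customer's language
  avatar?: string;         // optional avatar URL shown in the sidebar
  unreadCount: number;
  messages: Message[];
};
```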

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant UI as React UI
    participant Handler as page.tsx
    participant TranslateAPI as /api/translate
    participant OllamaAPI as /api/ollama
    participant Display as ChatWindow

    User->>UI: Type message in English
    UI->>Handler: handleSendMessage(text)
    Handler->>Handler: Create agent message
    Handler->>TranslateAPI: Translate to user locale
    TranslateAPI-->>Handler: Return translated text
    Handler->>Display: Update with translated agent msg
    Handler->>Handler: simulateUserReply()
    Handler->>Handler: Show typing indicator
    alt AI Enabled
        Handler->>OllamaAPI: Send history + locale
        OllamaAPI-->>Handler: Generate reply in locale
    else Hardcoded
        Handler->>Handler: Select mock response
    end
    Handler->>TranslateAPI: Translate reply to English
    TranslateAPI-->>Handler: Return English translation
    Handler->>Display: Show user message with original

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 A chat that speaks in many tongues,
With Ollama's wit and Lingo's songs,
Sessions multiplied, translations flow,
From English replies to locales below—
The Global Support Chat steals the show! 🌍✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title 'feat: add global support chat' clearly summarizes the main change: a new global support chat demo application. |
| Description check | ✅ Passed | The description includes all key required sections: summary, changes, testing, visuals, and checklist. Though the test descriptions are placeholder text, the overall structure is complete and informative. |
| Linked Issues check | ✅ Passed | The PR fulfills all coding objectives from issue #1761: it creates a demo app in /community/[app-name], includes a README documenting functionality and local-run instructions, and highlights Lingo.dev features (SDK integration, translation API, AI simulation). |
| Out of Scope Changes check | ✅ Passed | All changes are in scope: the new community/global-support-chat directory with demo application files, configuration, and dependencies. No modifications to core repository files or unrelated areas detected. |




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@community/global-support-chat/app/page.tsx`:
- Around line 159-179: The closure captures a stale sessions array so the
history sent to Ollama may miss the just-added agent message; fix by passing the
up-to-date messages into the AI-calling flow instead of reading sessions inside
the async closure—update the call site that invokes simulateUserReply (or
whatever triggers the Ollama branch) to pass the latest session.messages (or use
the callback form of setState in updateSessionMessages to obtain the updated
messages) and then use that passed messages array to build history (replace
sessions.find(...) usage) before calling fetch('/api/ollama'); this ensures
sessionId, the updated messages, and locale are used directly rather than
relying on the stale sessions variable.
- Around line 128-146: The fetch handling for the translation call doesn't
verify HTTP status before parsing JSON; update the try block that calls
fetch('/api/translate') (the res variable) so you check res.ok and handle
non-2xx responses (log the status/error, avoid calling res.json() or parse error
body safely) before calling updateMessageContent(currentSessionId, agentMsgId,
...); apply the same res.ok/status-check + safe parsing pattern to the other
fetch usages that currently call res.json() directly (the other
translation/fetch blocks that update message content).

In `@community/global-support-chat/README.md`:
- Line 28: Replace the bare URL in the README sentence "Open
http://localhost:3000." with a markdown link by changing it to use link syntax
(e.g., "Open [http://localhost:3000](http://localhost:3000)." or a descriptive
text like "Open [http://localhost:3000](http://localhost:3000).") so
markdownlint no longer flags the bare URL; update the line containing "Open
http://localhost:3000." accordingly.
- Around line 40-42: The fenced code block that shows environment variables
currently lacks a language tag; update the triple-backtick fence containing
"LINGO_API_KEY=your_key_here" to include the "env" language (i.e., change ``` to
```env) so the block is properly marked as an environment file and avoids MD040
and improves rendering in the README.md.
🧹 Nitpick comments (9)
community/global-support-chat/components/AIToggle.tsx (1)

12-33: Add aria-pressed to the toggle buttons for accessibility.

These buttons act like a segmented control; exposing pressed state improves screen reader UX.

♻️ Suggested change
-      <button
+      <button
         onClick={() => onToggle(false)}
+        aria-pressed={!enabled}
         className={`flex items-center gap-2 px-3 py-1.5 rounded-md text-sm font-medium transition-all ${
           !enabled
             ? 'bg-white dark:bg-zinc-700 shadow-sm text-gray-900 dark:text-white'
             : 'text-gray-500 hover:text-gray-700 dark:text-gray-400'
         }`}
       >
         <Quote className="w-4 h-4" />
         Hardcoded
       </button>
       <button
         onClick={() => onToggle(true)}
+        aria-pressed={enabled}
         className={`flex items-center gap-2 px-3 py-1.5 rounded-md text-sm font-medium transition-all ${
           enabled
             ? 'bg-blue-600 shadow-sm text-white'
             : 'text-gray-500 hover:text-gray-700 dark:text-gray-400'
         }`}
       >
         <Bot className="w-4 h-4" />
         AI Responses (Ollama)
       </button>
community/global-support-chat/app/api/translate/route.ts (1)

38-56: Consider adding input validation for robustness.

The request body is destructured without validation. If text, sourceLocale, or targetLocale are missing or malformed, the SDK call or fallback logic may behave unexpectedly.

💡 Suggested validation
 export async function POST(request: Request) {
   const { text, sourceLocale, targetLocale } = await request.json();
 
+  if (!text || !sourceLocale || !targetLocale) {
+    return NextResponse.json(
+      { error: 'Missing required fields: text, sourceLocale, targetLocale' },
+      { status: 400 }
+    );
+  }
+
   const apiKey = process.env.LINGO_API_KEY;
community/global-support-chat/components/ChatSidebar.tsx (1)

36-44: Consider using next/image for avatar optimization.

Using the native <img> tag works, but Next.js's <Image> component provides automatic optimization, lazy loading, and prevents layout shift.

💡 Optional improvement with next/image
+import Image from 'next/image';
 import { User, MessageSquare } from 'lucide-react';
               {session.avatar ? (
-                <img
+                <Image
                   src={session.avatar}
                   alt={session.userName}
+                  width={40}
+                  height={40}
                   className="w-full h-full rounded-full object-cover"
                 />
               ) : (
community/global-support-chat/app/api/ollama/route.ts (3)

29-42: Add a timeout to prevent indefinite hangs.

The fetch to Ollama has no timeout. If Ollama is slow or unresponsive, this request will hang indefinitely, potentially exhausting server resources.

💡 Add AbortController timeout
   try {
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
+
     const res = await fetch('http://localhost:11434/api/generate', {
       method: 'POST',
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({
         model: 'qwen3:4b',
         prompt,
         stream: false
       }),
+      signal: controller.signal,
     });
+    clearTimeout(timeoutId);

     if (!res.ok) {
         throw new Error(`Ollama status: ${res.status}`);
     }

47-52: Error response returns HTTP 200, which may mask failures.

Returning a mock response with status 200 when Ollama fails could confuse client-side error handling. Consider returning a non-2xx status or including an error field that clients can check.

💡 Alternative: return error status or flag
   } catch (error) {
     console.error('Ollama Error:', error);
-    return NextResponse.json({ 
-        reply: `(AI Error: Ensure Ollama is running. Mock response in ${locale})` 
-    });
+    return NextResponse.json(
+      { 
+        reply: `(AI Error: Ensure Ollama is running. Mock response in ${locale})`,
+        error: true 
+      },
+      { status: 503 }
+    );
   }

25-25: Minor: Use proper typing instead of any.

The any type on the history mapping reduces type safety.

💡 Add type annotation
-${history.map((m: any) => `${m.role === 'agent' ? 'Support Agent' : 'Customer'}: ${m.content}`).join('\n')}
+${history.map((m: { role: 'user' | 'agent'; content: string }) => `${m.role === 'agent' ? 'Support Agent' : 'Customer'}: ${m.content}`).join('\n')}
community/global-support-chat/components/ChatInput.tsx (1)

30-37: Consider adding an accessible label for screen readers.

The input relies solely on placeholder for context. Screen readers may not announce placeholders consistently. Adding a visually-hidden label improves accessibility.

💡 Add sr-only label
       <div className="flex gap-2 relative">
+        <label htmlFor="chat-input" className="sr-only">
+          Type message in English
+        </label>
         <input
+          id="chat-input"
           type="text"
           value={input}
community/global-support-chat/components/ChatWindow.tsx (1)

16-18: Consider using session?.messages.length in dependency array.

The current dependency session?.messages relies on reference equality. If the messages array is mutated in place (rather than replaced), the effect won't trigger. Using .length as a proxy is more reliable.

💡 More robust dependency
   useEffect(() => {
     scrollToBottom();
-  }, [session?.messages, isTyping]);
+  }, [session?.messages.length, isTyping]);
community/global-support-chat/app/page.tsx (1)

117-125: Potential message ID collision with Date.now().

Using Date.now().toString() for message IDs (lines 117, 221) can cause collisions if two messages are created within the same millisecond. Consider using a UUID or combining timestamp with a random suffix.

🔧 Suggested improvement
-    const agentMsgId = Date.now().toString();
+    const agentMsgId = `${Date.now()}-${Math.random().toString(36).substring(2, 9)}`;

Apply the same pattern at line 221 for user message IDs.

Comment on lines +128 to +146
```tsx
    try {
      const res = await fetch('/api/translate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text,
          sourceLocale: 'en',
          targetLocale,
        }),
      });
      const data = await res.json();
      if (data.translatedText) {
        updateMessageContent(currentSessionId, agentMsgId, {
          originalText: data.translatedText, // Foreign translation stored here
        });
      }
    } catch (error) {
      console.error('Translation failed', error);
    }
```


⚠️ Potential issue | 🟡 Minor

Missing response status check before parsing JSON.

The code calls res.json() without first checking res.ok. If the API returns a non-2xx status, res.json() might throw or return unexpected data.

🔧 Suggested improvement
      const res = await fetch('/api/translate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text,
          sourceLocale: 'en',
          targetLocale,
        }),
      });
+     if (!res.ok) {
+       throw new Error(`Translation failed: ${res.status}`);
+     }
      const data = await res.json();

Apply similar checks at lines 169-174 and 202-211.


Comment on lines +159 to +179
```tsx
    if (useAI) {
      // Use Ollama
      // We should ideally send context, but for demo just the last message or so
      // Getting context from current state might be tricky inside async specific closure if state updates.
      // But 'sessions' in closure is stale. We need to fetch fresh or assume for now.
      // Ideally we'd pass history.
      const session = sessions.find((s) => s.id === sessionId);
      const history = session?.messages.map(m => ({ role: m.sender, content: m.text })).slice(-5) || [];

      try {
        const res = await fetch('/api/ollama', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ history, locale }),
        });
        const data = await res.json();
        replyTextForeign = data.reply;
      } catch (e) {
        console.error('Ollama failed', e);
        replyTextForeign = "Error calling AI. (Check console)";
      }
```


⚠️ Potential issue | 🟡 Minor

Stale closure: sessions may not reflect the latest agent message.

The sessions variable captured in this closure won't include the agent message just added via updateSessionMessages. The comment on lines 162-164 acknowledges this. The history sent to Ollama will be missing the most recent agent message context.

For a demo app this is acceptable, but for production consider using a ref or passing the updated session data as a parameter.

🔧 Potential fix using callback form of setState

One approach is to pass the latest messages when calling simulateUserReply:

-    // 3. Trigger simulated user reply
-    simulateUserReply(currentSessionId, targetLocale);
+    // 3. Trigger simulated user reply with current messages for context
+    simulateUserReply(currentSessionId, targetLocale, [...(activeSession.messages), newMsg]);

Then update the function signature and use the passed messages for history.


Comment on line +28 in community/global-support-chat/README.md

```bash
pnpm dev
```
Open http://localhost:3000.


⚠️ Potential issue | 🟡 Minor

Replace bare URL with a markdown link.

Markdownlint flags bare URLs; wrapping improves readability.

✏️ Suggested change
-    Open http://localhost:3000.
+    Open [http://localhost:3000](http://localhost:3000).
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

28-28: Bare URL used

(MD034, no-bare-urls)


Comment on lines +40 to +42
```
LINGO_API_KEY=your_key_here
```


⚠️ Potential issue | 🟡 Minor

Add a language to the fenced code block.

Specifying a language avoids MD040 and improves rendering.

✏️ Suggested change
-    ```
+    ```env
     LINGO_API_KEY=your_key_here
     ```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

40-40: Fenced code blocks should have a language specified

(MD040, fenced-code-language)




Development

Successfully merging this pull request may close these issues.

Build Demo Apps, Integrations etc & Win Exclusive Lingo.dev Swag!
