
Technical Track Record

Case 05: DNA Clotilde – AI-Powered Sales Enablement

Note: Source code is private due to enterprise compliance and non-disclosure agreements. Detailed system architecture and DevOps documentation are available for review.
React (Vite) Ā· Node.js (Express) Ā· OpenAI API Ā· AssemblyAI Ā· SSE

Briefing

Sales teams often lose momentum when switching between lead context and structured sales methodologies (SDR/Closer). DNA Clotilde was conceived as a copilot assistant that reduces cognitive load by delivering scripts, objection handling, and real-time meeting audio summaries, while preserving brand voice and tone.

Technical Deep Dive

I implemented an SSE (Server-Sent Events) streaming architecture to deliver a fluid, responsive chat experience with token cancellation for cost optimization. The AI engine uses dynamic prompt engineering to adjust tone, formality, and objectives based on operating mode (SDR or Closer). I also integrated an asynchronous transcription pipeline with AssemblyAI and Vercel Blob, enabling analysis of real conversations and extraction of insights with structured citations (T#) to ensure factual traceability.
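The mode-dependent prompt assembly can be sketched roughly as follows. The type names and prompt strings here are hypothetical stand-ins, since the production prompts are private:

```typescript
// Hypothetical sketch of mode-aware prompt assembly; the real prompt
// templates and parameter names are not public.
type Mode = "SDR" | "Closer";

interface PromptConfig {
  mode: Mode;
  tone: "brief" | "detailed";
  formality: "informal" | "formal";
  objective: string;
}

function buildSystemPrompt(cfg: PromptConfig): string {
  const roleByMode: Record<Mode, string> = {
    SDR: "You qualify leads and book meetings.",
    Closer: "You handle objections and drive the deal to signature.",
  };
  return [
    `You are a sales copilot in ${cfg.mode} mode. ${roleByMode[cfg.mode]}`,
    `Tone: ${cfg.tone}. Formality: ${cfg.formality}.`,
    `Current objective: ${cfg.objective}.`,
    "Preserve the brand voice; cite transcript excerpts as T# when used.",
  ].join("\n");
}
```

The point is that tone, formality, and objective are independent axes composed into one system prompt per request, rather than a fixed prompt per mode.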

Sales Copilot

Insight & Objection Matrix

  • Streaming: SSE
  • Transcription: AssemblyAI
  • Modes: SDR/Closer
  • Citations: T# refs

Documentation

Monorepo with frontend (Vite + React + TS) and backend (Express + SSE) for a commercial assistant (SDR/Closer), featuring response streaming, next-action suggestions, and audio transcription support.

Features

  • SSE streaming chat with response cancellation
  • SDR/Closer mode, tone, formality, and objective configurable
  • Automatic suggestions by objective (action chips)
  • Quick templates per objective and mode
  • Image and text attachments to enrich context
  • Audio transcription via URL or upload (AssemblyAI + Vercel Blob)
  • Embeddable widget via query param ?widget=commercial (client-simulated responses)
  • Per-message feedback (šŸ‘/šŸ‘Ž) and session export
  • UI with design system tokens and microinteractions (GSAP)

Summary

  • Overview
  • Architecture
  • Core Flows
  • Technical Stack
  • Repository Structure
  • Running Locally
  • Environment Configuration
  • API
  • Deploy
  • Observability and Logs
  • Security and Privacy
  • Limits and Quotas
  • Troubleshooting
  • Tests and Quality
  • Maintenance
  • Internal References

Overview

The project delivers a chat experience focused on commercial activities (SDR/Closer), with LLM-generated answers via SSE streaming. The frontend maintains local conversation state and provides operational features (templates, suggestions, export, transcription), while the backend concentrates streaming and integrations with OpenAI and AssemblyAI.

The monorepo also includes serverless functions for Vercel deployment (frontend/api), enabling operation without the Express backend in production when desired.

Architecture

code
ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
│                        CLIENT (Browser)                       │
│  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”  │
│  │ Frontend (Vite + React)                                 │  │
│  │ - UI, state (Zustand), templates, suggestions           │  │
│  │ - SSE client, cancellation, export                      │  │
│  │ - Transcription (URL / upload)                           │  │
│  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜  │
│               │                                     │          │
│               │ /api (prod)                         │          │
│               ā–¼                                     ā–¼          │
│  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”      ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │
│  │ Vercel Functions          │      │ Express Backend         │ │
│  │ frontend/api/*            │      │ backend/src             │ │
│  │ - /api/chat/stream (SSE)  │      │ - /chat/stream (SSE)     │ │
│  │ - /api/transcriptions     │      │ - /transcriptions        │ │
│  │ - /api/blob/upload        │      │ - /blob/upload           │ │
│  │ - /api/analyze             │      │ - /diagnostics/llm       │ │
│  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜      ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │
│               │                                     │          │
│               ā–¼                                     ā–¼          │
│         OpenAI / LLM                         AssemblyAI         │
│               │                                     │          │
│               └─────────────── Vercel Blob ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜          │
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜

Core Flows

1) SSE streaming chat

  • Endpoint: POST /chat/stream (backend) or POST /api/chat/stream (Vercel)
  • SSE format with events open, ping, error, suggestions, and end
  • Frontend consumes the stream, aggregates chunks, and reports UX metrics (time to first token and total duration)

Example payload:

json
{
  "message": "I need an outreach script",
  "mode": "SDR",
  "tone": "brief",
  "formality": "informal",
  "objective": "qualify",
  "attachments": [
    { "kind": "text", "content": "Lead context...", "name": "context.txt" },
    { "kind": "image", "content": "data:image/png;base64,..." }
  ]
}

Returned events (example):

code
event: open
data: {}

data: {"chunk":"Hello!"}

event: suggestions
data: {"suggestions":["Ask about budget", "..."]}

event: end
data: {}
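On the client, a stream like this can be decoded with a small frame parser before aggregating chunks. A minimal sketch (the helper name is illustrative, not the repo's code):

```typescript
// Illustrative SSE frame parser: splits buffered stream text into events
// of the shape shown above (open, ping, suggestions, end, plus default
// "message" frames carrying {"chunk": ...}).
interface SseEvent {
  event: string; // defaults to "message" per the SSE spec
  data: string;
}

function parseSseChunk(buffer: string): SseEvent[] {
  return buffer
    .split("\n\n") // frames are separated by a blank line
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = "message";
      const dataLines: string[] = [];
      for (const line of frame.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
      }
      return { event, data: dataLines.join("\n") };
    });
}
```

The UI would then append `JSON.parse(data).chunk` from each default-event frame to the current assistant message, and treat `suggestions` and `end` frames as control signals.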

2) Templates and suggestions

  • Predefined templates by mode: GET /templates?mode=SDR|Closer
  • Automatic suggestions emitted by the backend when a response is valid

3) Audio transcription

URL flow:

  1. Frontend calls POST /transcriptions (backend) or POST /api/transcriptions (Vercel) with audio_url
  2. Backend creates a transcription on AssemblyAI and returns { id, status }
  3. Frontend polls GET /transcriptions/:id or GET /api/transcriptions/:id

File flow:

  1. Frontend uploads to Vercel Blob via POST /api/blob/upload
  2. Receives a public blob URL
  3. Creates transcription via POST /api/transcriptions with audio_url pointing to the blob
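In both flows the frontend polls the transcription endpoint with exponential backoff (initial delay, growth factor, capped maximum). A sketch of the resulting schedule, with illustrative names and values:

```typescript
// Sketch of an exponential-backoff polling schedule for
// GET /transcriptions/:id. Names and defaults are illustrative.
function pollDelays(
  initialMs: number, // e.g. 1000
  maxMs: number,     // e.g. 5000
  factor: number,    // e.g. 1.5
  attempts: number
): number[] {
  const delays: number[] = [];
  let d = initialMs;
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(d, maxMs)); // never exceed the cap
    d *= factor;
  }
  return delays;
}
```

The caller would sleep for each delay in turn and stop as soon as the transcription status reaches a terminal state (completed or error), or when the overall timeout elapses.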

4) Analysis with citations (serverless)

  • Endpoint: POST /api/analyze
  • Objective: transcript analysis with fixed structure and citations (T# for transcript, W# for Web)
  • Optional Web support via Tavily (enableWeb=true)
  • Truncates transcripts at 20k characters and applies chunking (800-character chunks with 120-character overlap)

Minimum payload:

json
{
  "query": "Summary with risks and next steps",
  "transcriptText": "..."
}
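The truncation-and-chunking step described above can be sketched as follows (helper name illustrative, not the repo's code):

```typescript
// Sketch of fixed-size chunking with overlap, as described for
// /api/analyze: 800-character windows advancing 680 characters at a time,
// so consecutive chunks share 120 characters of context.
function chunkTranscript(text: string, size = 800, overlap = 120): string[] {
  const chunks: string[] = [];
  const step = size - overlap; // advance 680 chars per window
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // final window reached the end
  }
  return chunks;
}
```

The overlap keeps sentences that straddle a boundary visible in both chunks, which matters when the model must anchor T# citations to specific passages.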

5) LLM provider

LLM_PROVIDER controls the engine used in the Express backend:

  • openai_chat (default): real streaming via Chat Completions, with model fallback when streaming is restricted.
  • openai_assistants: uses Assistants API v2 with ephemeral threads and polling; streaming is simulated by chunking the final response.

Technical Stack

| Layer | Technology | Version | Notes |
| --- | --- | --- | --- |
| Frontend | React | 18.3.1 | SPA with SSE streaming and interactive UX |
| Frontend | Vite | 5.4.x | Build and dev server with /api proxy |
| Frontend | TypeScript | 5.9.x | Static typing |
| Frontend | Tailwind CSS | 4.1.x | Design system tokens and utilities |
| Frontend | Zustand | 4.5.x | State store for chat and UI |
| Frontend | GSAP | 3.13.x | Microinteractions and motion |
| Backend | Node.js | 20.x | Defined in .nvmrc |
| Backend | Express | 4.19.x | API and SSE |
| Backend | Zod | 3.23.x | Payload validation |
| Backend | pino | 9.x | Structured logging |
| Integrations | OpenAI | Chat/Assistants API | Streaming and model fallback |
| Integrations | AssemblyAI | v2 | Audio transcription |
| Storage | Vercel Blob | — | Direct client upload |
| Web Search | Tavily | — | Optional in /api/analyze |

Repository Structure

code
.
ā”œā”€ā”€ backend
│   ā”œā”€ā”€ src
│   │   ā”œā”€ā”€ index.js            # Express API, SSE, integrations
│   │   ā”œā”€ā”€ config.js           # Environment variables
│   │   ā”œā”€ā”€ llm                 # OpenAI Chat + Assistants adapter
│   │   └── utils               # errors, retries, SSE parser
│   └── test                    # Tests (Vitest + Supertest)
ā”œā”€ā”€ frontend
│   ā”œā”€ā”€ api                     # Serverless functions (Vercel)
│   ā”œā”€ā”€ src
│   │   ā”œā”€ā”€ ui                  # Components and hooks
│   │   ā”œā”€ā”€ store               # Zustand slices
│   │   └── utils               # SSE, transcription, logging
│   └── e2e                     # Playwright
ā”œā”€ā”€ docs
│   └── design-system.md        # Tokens and guidelines
└── README.md

Running Locally

Prerequisites:

  • Node.js 20.x (.nvmrc)
  • npm 8+

Steps:

  1. Install dependencies at the repo root

bash
npm install

  2. Start frontend and backend in dev mode

bash
npm run dev

Default ports:

  • Backend: http://localhost:3001
  • Frontend: http://localhost:5174

Build and run backend:

bash
npm run build
npm run start

Environment Configuration

Express Backend (backend/src/config.js)

| Variable | Default | Usage |
| --- | --- | --- |
| PORT | 3001 | Backend port |
| CORS_ORIGIN | * | Express CORS |
| OPENAI_API_KEY | — | OpenAI key (required) |
| OPENAI_BASE_URL | https://api.openai.com/v1 | OpenAI endpoint |
| OPENAI_MODEL_PREFERRED | gpt-4o-mini | Preferred model (streaming) |
| OPENAI_MODEL_FALLBACK | gpt-4o-mini | Streaming fallback model |
| OPENAI_MODEL | gpt-4o-mini | Legacy config compatibility |
| OPENAI_FALLBACK_MODEL | gpt-4o-mini | Legacy config compatibility |
| OPENAI_TEMPERATURE | 0.7 | Model temperature |
| OPENAI_MAX_TOKENS | 800 (chat) / 1800 (analyze) | Max tokens |
| OPENAI_ASSISTANT_ID | — | Assistants API (if LLM_PROVIDER=openai_assistants) |
| LLM_PROVIDER | openai_chat | openai_chat or openai_assistants |
| ASSEMBLYAI_API_KEY | — | AssemblyAI key (transcription) |
| ASSEMBLYAI_BASE_URL | https://api.assemblyai.com/v2 | AssemblyAI base |
| LOG_LEVEL | info | pino log level |

Frontend (Vite)

| Variable | Default | Usage |
| --- | --- | --- |
| VITE_BACKEND_URL | http://localhost:3001 | Backend base in dev |
| VITE_TRANSCRIBE_TIMEOUT_MS | 120000 | Total transcription timeout |
| VITE_TRANSCRIBE_INITIAL_DELAY_MS | 1000 | Initial polling delay |
| VITE_TRANSCRIBE_MAX_DELAY_MS | 5000 | Max polling delay |
| VITE_TRANSCRIBE_BACKOFF_FACTOR | 1.5 | Polling backoff |
| VITE_MAX_UPLOAD_MB | 500 | Preventive upload limit |
| VITE_DEBUG_UI | false | Visual debug for metrics |

Vercel Functions (frontend/api)

| Variable | Default | Usage |
| --- | --- | --- |
| OPENAI_API_KEY | — | OpenAI key |
| OPENAI_BASE_URL | https://api.openai.com/v1 | OpenAI base |
| OPENAI_MODEL | gpt-4.1 | Model for /api/chat/stream and /api/analyze |
| OPENAI_TEMPERATURE | 0.7 | Temperature |
| OPENAI_MAX_TOKENS | 800 | Max tokens |
| ASSEMBLYAI_API_KEY | — | Transcription |
| ASSEMBLYAI_BASE_URL | https://api.assemblyai.com/v2 | AssemblyAI base |
| TAVILY_API_KEY | — | Optional web search for /api/analyze |
| CHAT_STREAM_TIMEOUT_MS | 90000 | Serverless stream timeout |

Notes:

  • In production, the frontend ignores VITE_BACKEND_URL when pointing to localhost and uses /api (Vercel).
  • Do not commit .env.
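The localhost-ignoring fallback described in the first note can be sketched as (a hypothetical helper, not the repo's code):

```typescript
// Sketch of base-URL resolution: in production builds, a VITE_BACKEND_URL
// that is unset or points at localhost is ignored in favor of the relative
// /api path served by the Vercel functions. Name is illustrative.
function resolveApiBase(isProd: boolean, configured?: string): string {
  const isLocalhost =
    !!configured && /^https?:\/\/localhost(:\d+)?/.test(configured);
  if (isProd && (!configured || isLocalhost)) return "/api"; // Vercel functions
  return configured ?? "http://localhost:3001"; // dev default backend
}
```

This keeps a checked-in dev `.env` from accidentally pointing a production bundle at a developer's machine.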

API

Express Backend

GET /health

  • Returns { status: "ok" }.

POST /chat/stream (SSE)

  • Accepts: { message, mode, tone, formality, objective, attachments }
  • Returns text/event-stream, heartbeat every 15s.
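The event framing and 15-second heartbeat can be sketched generically as follows; `StreamLike` stands in for Express's response object, and all names are illustrative rather than the repo's code:

```typescript
// Generic sketch of SSE framing with a periodic heartbeat. The framework
// is assumed to have already sent the "Content-Type: text/event-stream"
// headers before startSse is called.
interface StreamLike {
  write(chunk: string): void;
  end(): void;
}

function sseFrame(event: string | null, data: unknown): string {
  const head = event ? `event: ${event}\n` : ""; // omit for default "message"
  return `${head}data: ${JSON.stringify(data)}\n\n`;
}

function startSse(res: StreamLike): () => void {
  res.write(sseFrame("open", {}));
  const heartbeat = setInterval(() => res.write(sseFrame("ping", {})), 15_000);
  return () => {
    clearInterval(heartbeat); // stop pinging once the stream closes
    res.write(sseFrame("end", {}));
    res.end();
  };
}
```

The heartbeat keeps proxies and load balancers from closing the connection during long model generations.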

GET /templates

  • Query: ?mode=SDR|Closer
  • Returns predefined templates by objective.

POST /feedback

  • Body: { rating: "up"|"down", reason?: string }
  • Logs feedback only.

GET /diagnostics/llm

  • Streaming diagnostics for preferred model.
  • Returns canStream, recommendation, and details.

POST /transcriptions

  • Body: { audio_url, speaker_labels?, language_code? }
  • Returns { id, status }.

GET /transcriptions/:id

  • Returns transcription status and content.

POST /blob/upload

  • Token proxy for Vercel Blob upload.

POST /transcriptions/upload

  • Proxy for direct AssemblyAI upload (legacy).

Vercel Functions

POST /api/chat/stream

  • Serverless SSE stream.
  • Implements dedicated rate limit and timeout.

POST /api/transcriptions

GET /api/transcriptions/:id

  • Serverless equivalents of the Express transcription endpoints.

POST /api/blob/upload

  • File upload to Vercel Blob (token issuance).

POST /api/transcriptions/upload

  • Deprecated endpoint (returns 410).

POST /api/analyze

  • Structured analysis with T# and W# citations.

Deploy

Option 1: Monorepo with Express backend

  • Run npm run build at the root.
  • Run npm run start in the backend.
  • Publish the Vite frontend output (frontend/dist).

Option 2: Vercel (serverless)

  • Functions in frontend/api replace the Express backend.
  • Frontend uses /api automatically in production.
  • Configure environment variables in Vercel per the table above.

Observability and Logs

  • Express backend uses pino and pino-http with structured logs.
  • Serverless functions use JSON logs and requestId for tracing.
  • Frontend logs UX metrics without PII (logUX).

Security and Privacy

  • No authentication and no data persistence in v1.
  • In-memory rate limit per IP (5 min window).
  • CSP configured in vercel.json to limit origins.
  • UI renders text only, no arbitrary HTML, preventing XSS.
  • Avoid sending PII or secrets in chat.

Limits and Quotas

  • Chat input: 2000 characters.
  • Text attachment: up to 1 MB.
  • Image attachment: up to 4 MB.
  • Transcription upload: preventive limit VITE_MAX_UPLOAD_MB (default 500 MB).
  • Rate limit: 60 req/5min (general) and 30 req/5min (chat).
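An in-memory sliding-window limiter of the kind described (per IP, 5-minute window) can be sketched as follows; this is illustrative, not the repo's implementation:

```typescript
// Illustrative per-IP sliding-window rate limiter, e.g. 60 requests per
// 5-minute window. State lives in process memory, so limits reset on
// restart and are per-instance in serverless deployments.
class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(ip: string, now = Date.now()): boolean {
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(ip) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    if (recent.length >= this.limit) {
      this.hits.set(ip, recent);
      return false; // over the limit: reject (e.g. respond 429)
    }
    recent.push(now);
    this.hits.set(ip, recent);
    return true;
  }
}
```

The per-instance caveat is why serverless chat streaming warrants its own dedicated limit rather than relying on a single shared counter.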

Troubleshooting

  • MISSING_API_KEY: configure OPENAI_API_KEY or ASSEMBLYAI_API_KEY.
  • Streaming not allowed: use OPENAI_MODEL_FALLBACK or adjust the model.
  • TIMEOUT: increase CHAT_STREAM_TIMEOUT_MS or reduce OPENAI_MAX_TOKENS.
  • Transcription errors: ensure the audio URL is public and accessible.

Tests and Quality

Backend:

bash
npm run test -w backend
npm run lint -w backend

Frontend:

bash
npm run test -w frontend
npm run e2e -w frontend
npm run lint -w frontend
npm run lint:styles

Maintenance

  • npm audit and npm audit fix for security.
  • Update dependencies periodically.
  • Review tokens and guidelines in docs/design-system.md.

Internal References

  • Design system: docs/design-system.md
  • Plan and requirements: prd.md, plan.md