Grace & Carry AI Assistant

An intelligent AI-powered e-commerce assistant that enables store administrators to manage products, orders, inventory, and customers through natural conversation.

MedusaJS, Model Context Protocol (MCP), Vercel AI SDK

From Manual Store Management to Conversational Commerce

The Challenge: Building for Scale from Day One

A client approached me to build an e-commerce platform for Grace & Carry, a premium women's fashion boutique featuring curated collections of clothing, footwear, accessories, and lifestyle products. The store is still in its early days and not yet public, but we wanted to get the foundations right from the start.

Even before launch, I could see where things would get painful. Adding products, managing inventory, processing orders, handling customer data - these tasks pile up fast. Store owners end up spending more time clicking through admin dashboards than actually running their business.

So I asked myself: what if you could just tell your store what to do?

The Vision: Talk to Your Store

I wanted to build something different from the typical chatbot that can only answer FAQs. This needed to be an AI that actually understands the store - one that can look up orders, update inventory, create products, and handle real operations through simple conversation. It had to work with large amounts of data without breaking, and remember context across multiple sessions.

See it in Action

Grace & Carry AI Demo


How It Works: Three Connected Systems

The solution has three main parts that work together:

1. MCP Server: Finding the Right Tools

The core is a Model Context Protocol (MCP) server that exposes over 400 operations from the Medusa e-commerce platform. Instead of defining each tool by hand, the server generates them automatically from OpenAPI specs.

Smart Tool Search

When the AI needs to do something, it searches by meaning, not by browsing a list:

User: "Show me all orders from last week that are still pending"

AI calls: search_tools({ query: "list orders filter status" })

Returns: Matching operations with schemas, parameters, and hints

The search combines embedding-based similarity (using Nomic Embed v1.5) with keyword matching and domain synonyms. Ask about "stock levels" and it knows you mean inventory.
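As a rough sketch of that hybrid scoring: the snippet below combines keyword overlap with a domain-synonym map. The embedding half (Nomic Embed v1.5 cosine similarity) is deliberately stubbed out, and the tool names and synonym entries are illustrative, not the server's actual data.

```typescript
// Hybrid tool search sketch: keyword overlap + domain synonyms.
// The real server also blends in embedding similarity; omitted here.
type Tool = { name: string; description: string };

// Illustrative synonym map: "stock" queries should match inventory tools.
const SYNONYMS: Record<string, string[]> = {
  stock: ["inventory"],
  refund: ["payment"],
};

function expandQuery(query: string): string[] {
  const terms = query.toLowerCase().split(/\s+/);
  const expanded = new Set(terms);
  for (const term of terms) {
    for (const syn of SYNONYMS[term] ?? []) expanded.add(syn);
  }
  return [...expanded];
}

// Fraction of (expanded) query terms found in the tool's name/description.
function keywordScore(tool: Tool, terms: string[]): number {
  const haystack = `${tool.name} ${tool.description}`.toLowerCase();
  return terms.filter((t) => haystack.includes(t)).length / terms.length;
}

function searchTools(query: string, tools: Tool[], limit = 3): Tool[] {
  const terms = expandQuery(query);
  return tools
    .map((tool) => ({ tool, score: keywordScore(tool, terms) }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.tool);
}
```

With this, a query like "stock levels" surfaces inventory tools even though no tool mentions the word "stock".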

2. Running the Tools

Once the AI finds what it needs, it runs operations through a single interface:

AI calls: execute_tool({
  tool_name: "admin_list_orders",
  params: {
    status: "pending",
    created_at: { gte: "2024-01-14" }
  }
})

This layer takes care of:

  • Validating parameters against schemas with automatic type conversion
  • Routing authentication between admin and storefront scopes
  • Catching dangerous operations - deletes, refunds, and cancellations need explicit approval
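The validation step can be sketched as follows: a simplified parameter coercion pass against a flat schema. Medusa's real OpenAPI-derived schemas are nested and far richer; the shape here is a hypothetical reduction for illustration.

```typescript
// Sketch of parameter validation with automatic type conversion.
// Schema shape is simplified; real OpenAPI schemas are nested.
type ParamSchema = Record<string, "string" | "number" | "boolean">;

function coerceParams(
  schema: ParamSchema,
  params: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, type] of Object.entries(schema)) {
    const raw = params[key];
    if (raw === undefined) continue; // missing optional params pass through
    switch (type) {
      case "number": {
        const n = Number(raw);
        if (Number.isNaN(n)) throw new Error(`${key}: expected number`);
        out[key] = n;
        break;
      }
      case "boolean":
        out[key] = raw === true || raw === "true";
        break;
      default:
        out[key] = String(raw);
    }
  }
  return out;
}
```

This is what lets the model pass `limit: "10"` as a string and still hit an endpoint that expects a number.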

3. Handling Large Data

E-commerce queries can return huge results - hundreds of products, thousands of line items. Dumping all that into the AI would kill performance.

The Cursor Approach

When a response goes over 3KB, the system stores it and returns a reference:

{
  "_stored": true,
  "cursor_id": "cr_8f4a2b...",
  "item_count": 847,
  "preview": [/* first 3 items */],
  "hint": "Use get_result with filters to explore"
}

The AI can then page through, filter, and search the data as needed:

get_result({
  cursor_id: "cr_8f4a2b...",
  filters: [{ field: "status", operator: "eq", value: "pending" }],
  sort_by: "created_at",
  limit: 10
})

This keeps responses focused while the full dataset stays available.
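A minimal sketch of this cursor mechanism, assuming an in-memory Map in place of the SQLite cursor store and supporting only an `eq` filter operator:

```typescript
// Cursor sketch: oversized results are stored and replaced by a reference
// the model can page through later. Map stands in for SQLite.
type Filter = { field: string; operator: "eq"; value: unknown };

const store = new Map<string, Record<string, unknown>[]>();
const THRESHOLD_BYTES = 3 * 1024; // 3KB cutoff, as in the text
let nextId = 0;

function wrapResult(items: Record<string, unknown>[]) {
  const payload = JSON.stringify(items);
  if (payload.length <= THRESHOLD_BYTES) return items; // small: pass through
  const cursorId = `cr_${(nextId++).toString(16)}`;
  store.set(cursorId, items);
  return {
    _stored: true,
    cursor_id: cursorId,
    item_count: items.length,
    preview: items.slice(0, 3),
  };
}

function getResult(cursorId: string, filters: Filter[] = [], limit = 10) {
  const items = store.get(cursorId) ?? [];
  return items
    .filter((it) => filters.every((f) => it[f.field] === f.value))
    .slice(0, limit);
}
```

The design choice worth noting: filtering happens server-side on the stored data, so the model only ever sees a small, relevant slice of a large result.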


Remembering Conversations

Long chats create another problem. As context grows, older messages get cut off, and the AI forgets important things from earlier.

Auto-Summarization

When the conversation gets close to the token limit (80,000 by default), the system:

  1. Trims tool calls and internal reasoning from old messages
  2. Creates a summary with:
    • Key context (2 sentences max)
    • Decisions that were made
    • Tasks still pending
  3. Adds the summary to the system prompt for the next turn

The summary gets passed along in response headers and restored on later requests. This lets conversations span multiple sessions without losing track.
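The trigger logic above can be sketched roughly as below. The chars-per-token estimate, message shape, and "keep the last six messages" cutoff are all simplifying assumptions, and the summarizer itself is injected as a callback (in reality it's an LLM call):

```typescript
// Sketch of the summarization trigger: estimate tokens, trim tool-call
// noise from older messages, prepend a summary for the next turn.
type Message = { role: "system" | "user" | "assistant" | "tool"; content: string };

const TOKEN_LIMIT = 80_000; // default limit from the text

// Crude estimate: ~4 characters per token.
const estimateTokens = (msgs: Message[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

function compact(msgs: Message[], summarize: (m: Message[]) => string): Message[] {
  if (estimateTokens(msgs) < TOKEN_LIMIT) return msgs; // under limit: no-op
  const recent = msgs.slice(-6); // keep latest exchanges verbatim
  const older = msgs.slice(0, -6).filter((m) => m.role !== "tool"); // drop tool noise
  return [
    { role: "system", content: `Conversation summary: ${summarize(older)}` },
    ...recent,
  ];
}
```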


Built-in Safety

E-commerce actions have real consequences. You really don't want to accidentally cancel a batch of orders.

Approval Required

The system checks each operation before running it:

  • DELETE requests always need approval
  • Order cancellations get flagged
  • Refunds get flagged
  • Updates to critical data need confirmation

When the AI tries something risky, the UI shows an approval dialog. Nothing happens until the user confirms.
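The rules above amount to a small predicate gating execution. A sketch, with illustrative operation names and a keyword list that stands in for the real rule set:

```typescript
// Risk check sketch: certain operations require explicit user approval
// before the execution layer will run them.
type Operation = { method: "GET" | "POST" | "DELETE"; name: string };

function needsApproval(op: Operation): boolean {
  if (op.method === "DELETE") return true; // deletes always gated
  const risky = ["cancel", "refund"]; // flagged operation keywords
  return risky.some((kw) => op.name.toLowerCase().includes(kw));
}

// Execution wrapper: risky ops short-circuit to an approval request.
function execute(op: Operation, approved: boolean, run: () => string): string {
  if (needsApproval(op) && !approved) {
    return `approval_required: ${op.name}`;
  }
  return run();
}
```

Reads pass straight through; anything destructive bounces back to the UI as an approval request until the user confirms.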


Tech Stack

AI Layer

  • Vercel AI SDK
  • OpenAI / Anthropic / Gemini
  • Nomic Embed v1.5

Backend

  • MedusaJS v2
  • Model Context Protocol
  • Express.js

Storage

  • PostgreSQL (Medusa data)
  • SQLite (response cursors)
  • Redis (sessions)

Frontend

  • React (Medusa Admin)
  • TailwindCSS
  • Streaming components

What This Enables

With the AI assistant in place, store management becomes conversational:

  • Quick answers to complex questions: "Which products are low on stock but sold well last month?" Just ask.
  • Batch tasks made simple: Creating product variants with regional pricing used to take forever. Now it's a conversation.
  • Memory that persists: Pick up where you left off without re-explaining everything.