Custom Backend

LocalRuntime

Overview

With LocalRuntime, the chat history state is managed by assistant-ui. This gives you built-in support for thread management, message editing, reloading, and branch switching.

If you need full control over the state of the messages on the frontend, use ExternalStoreRuntime instead.

assistant-ui integrates with any custom REST API. To do so, you define a custom ChatModelAdapter and pass it to the useLocalRuntime hook.

Getting Started

Create a Next.js project

npx create-next-app@latest my-app
cd my-app

Install @assistant-ui/react

npm install @assistant-ui/react

Define a MyRuntimeProvider component

Update the MyModelAdapter below to integrate with your own custom API.

@/app/MyRuntimeProvider.tsx
"use client";
 
import type {  } from "react";
import {
  ,
  ,
  type ,
} from "@assistant-ui/react";
 
const :  = {
  async ({ ,  }) {
    // TODO replace with your own API
    const  = await ("<YOUR_API_ENDPOINT>", {
      : "POST",
      : {
        "Content-Type": "application/json",
      },
      // forward the messages in the chat to the API
      : .({
        ,
      }),
      // if the user hits the "cancel" button or escape keyboard key, cancel the request
      : ,
    });
 
    const  = await .();
    return {
      : [
        {
          : "text",
          : .text,
        },
      ],
    };
  },
};
 
export function ({
  ,
}: <{
  : ;
}>) {
  const  = ();
 
  return (
    < ={}>
      {}
    </>
  );
}

Wrap your app in MyRuntimeProvider

@/app/layout.tsx
import type { ReactNode } from "react";
import { MyRuntimeProvider } from "@/app/MyRuntimeProvider";

export default function RootLayout({
  children,
}: Readonly<{
  children: ReactNode;
}>) {
  return (
    <MyRuntimeProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </MyRuntimeProvider>
  );
}

Streaming

Declare the run function as an AsyncGenerator (async *run). This allows you to yield the results as they are generated.

@/app/MyRuntimeProvider.tsx
const MyModelAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await backendApi({ messages, abortSignal, context });

    let text = "";
    for await (const part of stream) {
      text += part.choices[0]?.delta?.content || "";

      yield {
        content: [{ type: "text", text }],
      };
    }
  },
};
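The `backendApi` call above is a placeholder for your own streaming client. If your backend emits OpenAI-style server-sent events, a minimal (hypothetical) parser for the `data:` lines could accumulate text the same way the generator does — `parseSseLine` and the sample chunks below are illustrative, not part of the assistant-ui API:

```typescript
// Hypothetical helper: extract the text delta from one SSE "data:" line.
// Assumes OpenAI-style chunks: { choices: [{ delta: { content: "..." } }] }
function parseSseLine(line: string): string {
  if (!line.startsWith("data:")) return "";
  const payload = line.slice(5).trim();
  if (payload === "[DONE]") return ""; // end-of-stream sentinel carries no text
  try {
    const chunk = JSON.parse(payload);
    return chunk.choices?.[0]?.delta?.content ?? "";
  } catch {
    return ""; // ignore malformed lines
  }
}

// Example: accumulate deltas like the adapter's run() generator would
const lines = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  "data: [DONE]",
];
let text = "";
for (const line of lines) text += parseSseLine(line);
console.log(text); // "Hello"
```

In a real adapter you would read lines from the response body (e.g. via `response.body` and a `TextDecoder`) instead of a hardcoded array, and `yield` the accumulated text after each delta.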

Resuming a Run

The unstable_resumeRun method is experimental and may change in future releases.

In some advanced scenarios, you might need to resume a run with a custom stream. The ThreadRuntime.unstable_resumeRun method allows you to do this by providing an async generator that yields chat model run results.

import { useThreadRuntime, type ChatModelRunResult } from "@assistant-ui/react";
 
// Get the thread runtime
const thread = useThreadRuntime();
 
// Create a custom stream
async function* createCustomStream(): AsyncGenerator<ChatModelRunResult, void, unknown> {
  let text = "Initial response";
  yield {
    content: [{ type: "text", text }]
  };
  
  // Simulate delay
  await new Promise(resolve => setTimeout(resolve, 500));
  
  text = "Initial response. And here's more content...";
  yield {
    content: [{ type: "text", text }]
  };
}
 
// Resume a run with the custom stream
thread.unstable_resumeRun({
  parentId: "message-id", // ID of the message to respond to
  stream: createCustomStream() // The stream to use for resuming
});

This is particularly useful for:

  • Implementing custom streaming logic
  • Resuming conversations from external sources
  • Creating demo or testing environments with predefined response patterns

For more detailed information, see the ThreadRuntime.unstable_resumeRun API reference.
