
πŸ€– AI Assistant

Sebastian MusiaΕ‚ edited this page May 13, 2024 · 26 revisions

Introducing a NestJS library designed to harness the power of OpenAI's Assistants API, enabling developers to build efficient, scalable AI assistants and chatbots quickly. The library is tailored for seamless integration into the NestJS ecosystem, offering an intuitive API, WebSockets support, and tools that streamline the development of AI-driven interactions. Whether you're building a customer service bot, a virtual assistant, or an interactive chatbot for engaging user experiences, the library lets you leverage cutting-edge AI capabilities with minimal effort.

πŸš€ Features

AI Assistant library features

  • WebSockets: The library provides a WebSocket server for real-time communication between the client and the assistant.
  • REST API: The library provides a REST API for communication with the assistant.
  • Function calling: The library provides a way to create functions, which allows you to extend the assistant's capabilities with custom logic.
  • File support: The library provides a way to add files to the assistant, which allows you to extend the assistant's knowledge base with custom data.
  • TTS (Text-to-Speech): The library provides a way to convert text to speech, which allows you to create voice-based interactions with the assistant.
  • STT (Speech-to-Text): The library provides a way to convert speech to text, which allows you to create voice-based interactions with the assistant.
  • Streaming: The library provides streaming events for AI responses.

Additional features in the repository

  • Embedded chatbot: The library provides a way to embed the chatbot on various websites through JavaScript scripts.
  • Chatbot client application: The repository includes an example client application (SPA) with a chatbot.

πŸ† Getting started

In this section, you will learn how to integrate the AI Assistant library into your NestJS application. The following steps will guide you through the process of setting up the library and creating simple functionalities.

Step 0: Prerequisites

Before you start, you will need an account on the OpenAI platform and an API key.

Open or create your NestJS application where you would like to integrate the AI Assistant. If you don't have a NestJS application yet, you can create one using the following command:

nest new project-name

Step 1: Installation

Install the library using npm:

npm i @boldare/openai-assistant --save

Step 2: Environment variables

Set up your environment variables. The assistant ID is optional and serves as a unique identifier for your assistant; when the variable is left empty, an assistant is created automatically. You can also use the assistant ID to connect to an existing assistant, which you can find on the OpenAI platform after creating one.

Create a .env file in the root directory of your project and populate it with the necessary secrets:

touch .env

Add the following content to the .env file:

# OpenAI API Key
OPENAI_API_KEY=

# Assistant ID - leave it empty if you don't have an assistant yet
ASSISTANT_ID=

Please note that the .env file should not be committed to the repository; add it to your .gitignore file.
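Since both variables are read at startup, it can help to fail fast when the required key is missing. A minimal sketch, where the `requireEnv` helper is hypothetical and not part of the library:

```typescript
// Hypothetical helper (not part of the library): fail fast at startup when a
// required environment variable is missing.
function requireEnv(name: string, optional = false): string | undefined {
  const value = process.env[name];
  if (!value && !optional) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// ASSISTANT_ID may stay empty, in which case the library creates a new
// assistant automatically; OPENAI_API_KEY should always be set.
const assistantId = requireEnv('ASSISTANT_ID', true); // optional
// const apiKey = requireEnv('OPENAI_API_KEY');       // throws if unset
```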

Step 3: Configuration

Configure the settings for your assistant. For more information about assistant parameters, refer to the OpenAI documentation. A sample configuration can be found in chat.config.ts.

// chat.config.ts file
import { AssistantConfigParams } from '@boldare/openai-assistant';
import { AssistantCreateParams } from 'openai/resources/beta';

// Default OpenAI configuration
export const assistantParams: AssistantCreateParams = {
  name: 'Your assistant name',
  instructions: `You are a chatbot assistant. Speak briefly and clearly.`,
  tools: [
    { type: 'code_interpreter' }, 
    { type: 'file_search' }
    // (...) function calling - functions are configured by extended services
  ],
  model: 'gpt-4-turbo',
  temperature: 0.05,
};

// Additional configuration for our assistant
export const assistantConfig: AssistantConfigParams = {
  id: process.env['ASSISTANT_ID'],                  // Assistant ID
  params: assistantParams,                          // AssistantCreateParams
  filesDir: './apps/api/src/app/knowledge',         // Knowledge base - path to the directory with files (the final path is "filesDir" + file name)
  toolResources: {
    fileSearch: {
      // Syntax: [name of collection]: [list of files or paths if you didn't fill the `filesDir` property].
      boldare: ['33-things-to-ask-your-digital-product-development-partner.md'], 
    },
    codeInterpreter: {
      // List of files or paths if you didn't fill the `filesDir` property.
      fileNames: [],
    },
  },
};

Import the AI Assistant module with your configuration into the module file where you intend to use it:

import { Module } from '@nestjs/common';
import { AssistantModule } from '@boldare/openai-assistant';
import { assistantConfig } from './chat.config';

@Module({
  imports: [AssistantModule.forRoot(assistantConfig)],
  providers: [],
})
export class ChatbotModule {}

Websockets

If you want to use WebSockets, create a new service that extends the ChatGateway class. The WebSocket server will be available at the / endpoint, and the REST API at the /api endpoint (depending on your API prefix).

import { ChatGateway, ChatService } from '@boldare/openai-assistant';
import { WebSocketGateway } from '@nestjs/websockets';

@WebSocketGateway({
  // CORS configuration
  cors: {
    origin: '*',
  }
})
export class ChatSockets extends ChatGateway {
  constructor(override readonly chatsService: ChatService) {
    super(chatsService);
  }
}

Add it to the providers array in the module file:

@Module({
  imports: [AssistantModule.forRoot(assistantConfig)],
  providers: [ChatSockets],
})
export class ChatbotModule {}

Currently, the library provides the following WebSocket events:

| Event name | Description |
| ---------- | ----------- |
| callStart | Emitted when the user sends a message. |
| callDone | Emitted when the assistant sends a message. |
| messageCreated | Occurs when a message is created. |
| messageDelta | Occurs when parts of a message are being streamed. |
| messageDone | Occurs when a message is completed. |
| textCreated | Occurs when a text content is created. |
| textDelta | Occurs when parts of a text content are being streamed. |
| textDone | Occurs when a text content is completed. |
| imageFileDone | Occurs when an image file is available. |
| toolCallCreated | Occurs when a tool call is created. |
| toolCallDelta | Occurs when parts of a tool call are being streamed. |
| toolCallDone | Occurs when a tool call is completed. |
| runStepCreated | Occurs when a run step is created. |
| runStepDelta | Occurs when parts of a run step are being streamed. |
| runStepDone | Occurs when a run step is completed. |
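On the client side, the streamed text events can be stitched back into a complete reply. A minimal sketch, assuming each textDelta payload carries a value string; the exact payload shape is defined by the library and OpenAI SDK, so check their types:

```typescript
// Sketch only: the payload shape below is an assumption, not the library's
// published type. Accumulates streamed text deltas into the full reply.
type TextDeltaPayload = { value: string };

class TranscriptBuffer {
  private chunks: string[] = [];

  // Call on each `textDelta` event.
  onTextDelta(delta: TextDeltaPayload): void {
    this.chunks.push(delta.value);
  }

  // Call on `textDone`; returns the complete message and resets the buffer.
  onTextDone(): string {
    const message = this.chunks.join('');
    this.chunks = [];
    return message;
  }
}
```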

Step 4: Function calling

Create a new service that extends the AgentBase class, fill in the definition property, and implement the output method.

  • The output method is the main method that will be called when the function is invoked.
  • The definition property is an object that describes the function and its parameters.

For more information about function calling, you can refer to the OpenAI documentation. Below is an example of a service that extends the AgentBase class:

import { Injectable } from '@nestjs/common';
import { AgentBase, AgentData, AgentService } from '@boldare/openai-assistant';
// FunctionTool comes from the openai SDK; the import path may vary slightly
// between SDK versions.
import { FunctionTool } from 'openai/resources/beta/assistants';

@Injectable()
export class GetCurrentWeatherAgent extends AgentBase {
  override definition: FunctionTool = {
    type: 'function',
    function: {
      name: this.constructor.name,
      description: 'Get the current weather in location',
      parameters: {
        type: 'object',
        properties: {
          city: {
            type: 'string',
            description:
              'Name of the city e.g. Warsaw, San Francisco, Paris, etc.',
          },
        },
        required: ['city'],
      },
    },
  };

  constructor(override readonly agentService: AgentService) {
    super(agentService);
  }

  override async output(data: AgentData): Promise<string> {
    // TODO: Your logic here
    return 'Your string value';
  }
}

More examples can be found in the agents directory.
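When the model invokes a function like the one above, it supplies the arguments described in parameters as a JSON string. A hypothetical sketch of parsing and validating such arguments before acting on them (the WeatherArgs type and parseWeatherArgs helper are illustrations, not part of the library):

```typescript
// Hypothetical illustration: parse and validate the JSON arguments the model
// produces for the `city` parameter defined in the agent above.
interface WeatherArgs {
  city: string;
}

function parseWeatherArgs(rawArgs: string): WeatherArgs {
  const parsed = JSON.parse(rawArgs) as Partial<WeatherArgs>;
  if (typeof parsed.city !== 'string' || parsed.city.length === 0) {
    throw new Error('Missing required parameter: city');
  }
  return { city: parsed.city };
}
```

Validating before use matters because the model can occasionally emit malformed or incomplete arguments, and a clear error lets the assistant recover gracefully.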

Import the service into the module file where you intend to use it:

import { Module } from '@nestjs/common';
import { AgentModule } from '@boldare/openai-assistant';
import { GetCurrentWeatherAgent } from './get-current-weather.agent';

@Module({
  imports: [AgentModule],
  providers: [GetCurrentWeatherAgent],
})
export class AgentsModule {}

Remember to register the AgentsModule before the AssistantModule in your main module file (e.g. chat.module.ts):

@Module({
  imports: [AgentsModule, AssistantModule.forRoot(assistantConfig)],
  providers: [ChatSockets],
})
export class ChatModule {}