diff --git a/.gitignore b/.gitignore index ba1a6777..42e82e55 100644 --- a/.gitignore +++ b/.gitignore @@ -137,3 +137,4 @@ dist .idea /experiments .npmrc +.DS_Store diff --git a/docs/README.md b/docs/README.md index d89ba3de..c5788e44 100644 --- a/docs/README.md +++ b/docs/README.md @@ -101,25 +101,24 @@ console.log(`Agent 🤖 : `, response.result.text); ➡️ To run an arbitrary example, use the following command `yarn start examples/agents/bee.ts` (just pass the appropriate path to the desired example). -### 📦 Modules +### 📦 Bee Framework Modules The source directory (`src`) provides numerous modules that one can use. -| Name | Description | -| ------------------------------------------------ | ------------------------------------------------------------------------------------------- | -| [**agents**](/docs/agents.md) | Base classes defining the common interface for agent. | -| [**llms**](/docs/llms.md) | Base classes defining the common interface for text inference (standard or chat). | -| [**template**](/docs/templates.md) | Prompt Templating system based on `Mustache` with various improvements. | -| [**memory**](/docs/memory.md) | Various types of memories to use with agent. | -| [**tools**](/docs/tools.md) | Tools that an agent can use. | -| [**cache**](/docs/cache.md) | Preset of different caching approaches that can be used together with tools. | -| [**errors**](/docs/errors.md) | Error classes and helpers to catch errors fast. | -| [**adapters**](/docs/llms.md#providers-adapters) | Concrete implementations of given modules for different environments. | -| [**logger**](/docs/logger.md) | Core component for logging all actions within the framework. | -| [**serializer**](/docs/serialization.md) | Core component for the ability to serialize/deserialize modules into the serialized format. 
|
-| [**version**](/docs/version.md)                  | Constants representing the framework (e.g., latest version)                                 |
-| [**emitter**](/docs/emitter.md)                  | Bringing visibility to the system by emitting events.                                       |
-| **internals**                                    | Modules used by other modules within the framework.                                         |
+- [**Agents**](/docs/agents.md): Base classes defining the common interface for agents.
+  - [**LLM**](/docs/llms.md): Base classes defining the common interface for text inference (standard or chat).
+  - [**Templates**](/docs/templates.md): Prompt templating system based on Mustache with various improvements.
+  - [**Adapters**](/docs/llms.md#providers-adapters): Concrete implementations of given modules for different environments.
+  - [**Memory**](/docs/memory.md): Various types of memories to use with agents.
+  - [**Cache**](/docs/cache.md): Preset of different caching approaches that can be used together with tools.
+  - [**Tools**](/docs/tools.md): Tools that an agent can use.
+  - Dev tools:
+    - [**Emitter**](/docs/emitter.md): Bringing visibility to the system by emitting events.
+    - [**Logger**](/docs/logger.md): Core component for logging all actions within the framework.
+    - [**Serializer**](/docs/serialization.md): Core component for serializing/deserializing modules.
+    - [**Errors**](/docs/errors.md): Error classes and helpers to catch errors fast.
+- [**Version**](/docs/version.md): Constants representing the framework (e.g., its latest version).
+- **internals**: Modules used by other modules within the framework.
 
 To see more in-depth explanation see [overview](/docs/overview.md).
diff --git a/docs/agents.md b/docs/agents.md
index 4c299437..3cdebe6c 100644
--- a/docs/agents.md
+++ b/docs/agents.md
@@ -1,128 +1,232 @@
-# Agents
-
-AI agents built on large language models control the path to solving a complex problem. They can typically act on feedback to refine their plan of action, a capability that can improve performance and help them accomplish more sophisticated tasks. 
- -We recommend reading the [following article](https://research.ibm.com/blog/what-are-ai-agents-llm) to learn more. - -## Implementation in Bee Agent Framework - -An agent can be thought of as a program powered by LLM. The LLM generates structured output that is then processed by your program. - -Your program then decides what to do next based on the retrieved content. It may leverage a tool, reflect, or produce a final answer. -Before the agent determines the final answer, it performs a series of `steps`. A step might be calling an LLM, parsing the LLM output, or calling a tool. - -Steps are grouped in a `iteration`, and every update (either complete or partial) is emitted to the user. - -### Bee Agent - -Our Bee Agent is based on the `ReAct` ([Reason and Act](https://arxiv.org/abs/2210.03629)) approach. - -Hence, the agent in each iteration produces one of the following outputs. - -For the sake of simplicity, imagine that the input prompt is "What is the current weather in Las Vegas?" - -First iteration: - -``` -thought: I need to retrieve the current weather in Las Vegas. I can use the OpenMeteo function to get the current weather forecast for a location. -tool_name: OpenMeteo -tool_input: {"location": {"name": "Las Vegas"}, "start_date": "2024-10-17", "end_date": "2024-10-17", "temperature_unit": "celsius"} +# Agent + +The `BaseAgent` class is the foundation of the Bee Framework, providing the core interface and functionality that all agent implementations must follow. It orchestrates the interaction between LLMs, tools, memory, and development utilities to create intelligent, automated workflows. + +## Overview + +`BaseAgent` acts as an abstract base class that defines the standard interface and basic functionality for all agents in the framework. It manages the lifecycle of agent operations, coordinates between different components, and provides a consistent interface for agent implementations. 
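The orchestration described above can be reduced to a simple loop: call the model, act on its structured output, and repeat until a final answer is produced. The following self-contained sketch illustrates that loop; the type shapes, the `think` stand-in for the LLM call, and the tool names are invented for the example and are not the framework's actual API.

```typescript
// Illustrative only: a minimal ReAct-style agent loop.
type StepOutput =
  | { kind: "tool_call"; toolName: string; input: string }
  | { kind: "final_answer"; text: string };

type Tool = { name: string; execute: (input: string) => string };

function runAgent(
  think: (history: string[]) => StepOutput, // stand-in for the LLM call
  tools: Tool[],
  maxIterations = 10,
): string {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = think(history);
    if (step.kind === "final_answer") {
      return step.text;
    }
    const tool = tools.find((t) => t.name === step.toolName);
    if (!tool) {
      history.push(`error: unknown tool ${step.toolName}`);
      continue;
    }
    // Each tool result becomes an observation the next "thought" can use.
    history.push(`observation: ${tool.execute(step.input)}`);
  }
  throw new Error("maxIterations exceeded");
}

// Scripted "LLM": first requests the weather tool, then answers.
const answer = runAgent(
  (history) =>
    history.length === 0
      ? { kind: "tool_call", toolName: "weather", input: "Las Vegas" }
      : { kind: "final_answer", text: `It is ${history[0].replace("observation: ", "")}` },
  [{ name: "weather", execute: () => "20.5°C" }],
);
console.log(answer); // "It is 20.5°C"
```

The `maxIterations` guard mirrors the execution limits described later in this document.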
+ +## Architecture + +```mermaid +classDiagram + class BaseAgent { + +LLM llm + +Memory memory + +Tool[] tools + +DevTools devTools + +run(prompt: string, options?: ExecutionOptions) + } + + BaseAgent *-- LLM + BaseAgent *-- Memory + BaseAgent *-- Tool + BaseAgent *-- DevTools + + class LLM { + +inference() + +templates: TemplatePrompt + } + + class Memory { + +store() + +retrieve() + +cache: Cache + } + + class Tool { + +execute() + +validate() + } + + class DevTools { + +emitter: Emitter + +logger: Logger + +adapter: Adapter + +serializer: Serializer + +errorHandler: ErrorHandler + } ``` -> [!NOTE] +> [!TIP] > -> Agent emitted 3 complete updates in the following order (`thought`, `tool_name`, `tool_input`) and tons of partial updates in the same order. -> Partial update means that new tokens are being added to the iteration. Updates are always in strict order: You first get many partial updates for thought, followed by a final update for thought (that means no final updates are coming for a given key). +> Location within the framework `bee-agent-framework/agents`. -Second iteration: +## Core Properties -``` -thought: I have the current weather in Las Vegas in Celsius. -final_answer: The current weather in Las Vegas is 20.5°C with an apparent temperature of 18.3°C. -``` +| Property | Type | Description | +| ---------- | ---------- | -------------------------------------------- | +| `llm` | `LLM` | Manages interactions with the language model | +| `memory` | `Memory` | Handles state management and persistence | +| `tools` | `Tool[]` | Array of available tools for the agent | +| `devTools` | `DevTools` | Development and debugging utilities | -For more complex tasks, the agent may do way more iterations. +## Main Methods -In the following example, we will transform the knowledge gained into code. +### Public Methods -```ts -const response = await agent - .run({ prompt: "What is the current weather in Las Vegas?" 
}) - .observe((emitter) => { - emitter.on("update", async ({ data, update, meta }) => { - // to log only valid runs (no errors), check if meta.success === true - console.log(`Agent Update (${update.key}) 🤖 : ${update.value}`); - console.log("-> Iteration state", data); - }); +#### `run(prompt: string, options?: ExecutionOptions): Promise` - emitter.on("partialUpdate", async ({ data, update, meta }) => { - // to log only valid runs (no errors), check if meta.success === true - console.log(`Agent Partial Update (${update.key}) 🤖 : ${update.value}`); - console.log("-> Iteration state", data); - }); +Executes the agent with the given prompt and options. - // you can observe other events such as "success" / "retry" / "error" / "toolStart" / "toolEnd", ... +```typescript +interface ExecutionOptions { + signal?: AbortSignal; + execution?: { + maxRetriesPerStep?: number; + totalMaxRetries?: number; + maxIterations?: number; + }; +} - // To see all events, uncomment the following code block - // emitter.match("*.*", async (data: unknown, event) => { - // const serializedData = JSON.stringify(data).substring(0, 128); // show only part of the event data - // console.trace(`Received event "${event.path}"`, serializedData); - // }); - }); +const response = await agent.run("What's the weather in Las Vegas?", { + signal: AbortSignal.timeout(60000), + execution: { + maxIterations: 20, + maxRetriesPerStep: 3, + totalMaxRetries: 10, + }, +}); ``` -### Behaviour - -You can alter the agent's behavior in the following ways. +#### `observe(callback: (emitter: Emitter) => void): void` -#### Setting execution policy +Subscribes to agent events for monitoring and debugging. -```ts -await agent.run( - { prompt: "What is the current weather in Las Vegas?" }, - - { - signal: AbortSignal.timeout(60 * 1000), // 1 minute timeout - execution: { - // How many times an agent may repeat the given step before it halts (tool call, llm call, ...) 
-      maxRetriesPerStep: 3,
+```typescript
+agent.observe((emitter) => {
+  // Listen for complete updates
+  emitter.on("update", ({ data, update, meta }) => {
+    console.log(`Complete Update: ${update.key} = ${update.value}`);
+  });

-      // How many retries can occur before the agent halts
-      totalMaxRetries: 10,
+  // Listen for partial updates (streaming)
+  emitter.on("partialUpdate", ({ data, update, meta }) => {
+    console.log(`Partial Update: ${update.key} = ${update.value}`);
+  });

-      // Maximum number of iterations in which the agent must figure out the final answer
-      maxIterations: 20,
-    },
-  },
-);
+  // Listen for tool execution
+  emitter.on("toolStart", ({ tool, input }) => {
+    console.log(`Tool Started: ${tool.name}`);
+  });
+});
 ```

-> [!NOTE]
->
-> The default is zero retries and no timeout.
-
-##### Overriding prompt templates
+## Events

-The agent uses the following prompt templates.
+The agent emits various events through its `DevTools` emitter:

-1. **System Prompt**
+| Event           | Description                | Payload                   |
+| --------------- | -------------------------- | ------------------------- |
+| `update`        | Complete update for a step | `{ data, update, meta }`  |
+| `partialUpdate` | Streaming update           | `{ data, update, meta }`  |
+| `toolStart`     | Tool execution started     | `{ tool, input }`         |
+| `toolEnd`       | Tool execution completed   | `{ tool, result }`        |
+| `error`         | Error occurred             | `{ error, context }`      |
+| `retry`         | Retry attempt              | `{ attempt, maxRetries }` |
+| `success`       | Successful completion      | `{ result }`              |

-2. **User Prompt** (to reformat the user's prompt)
+## Implementation Example

-3. **User Empty Prompt**
+Here's an example of a simple agent built with the `BeeAgent` class:

-4. 
**Tool Error** +```typescript +import { BeeAgent } from "bee-agent-framework/agents/bee/agent"; +import { TokenMemory } from "bee-agent-framework/memory/tokenMemory"; +import { DuckDuckGoSearchTool } from "bee-agent-framework/tools/search/duckDuckGoSearch"; +import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; +import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo"; -5. **Tool Input Error** (validation error) +const llm = new OllamaChatLLM(); +const agent = new BeeAgent({ + llm, + memory: new TokenMemory({ llm }), + tools: [new DuckDuckGoSearchTool(), new OpenMeteoTool()], +}); -6. **Tool No Result Error** - -7. **Tool Not Found Error** - -Please refer to the [following example](/examples/agents/bee_advanced.ts) to see how to modify them. - -## Creating your own agent +const response = await agent + .run({ prompt: "What's the current weather in Las Vegas?" }) + .observe((emitter) => { + emitter.on("update", async ({ data, update, meta }) => { + console.log(`Agent (${update.key}) 🤖 : `, update.value); + }); + }); -To create your own agent, you must implement the agent's base class (`BaseAgent`). +console.log(`Agent 🤖 : `, response.result.text); +``` -The example can be found [here](/examples/agents/custom_agent.ts). +## Best Practices + +1. **Error Handling** + + ```typescript + protected async executeIteration(iteration: number): Promise { + try { + // ... iteration logic ... + } catch (error) { + this.devTools.emitter.emit('error', { error, context: { iteration } }); + throw error; + } + } + ``` + +2. **Memory Management** + + ```typescript + protected async cleanup(): Promise { + await this.memory.store('lastCleanup', Date.now()); + // Clear temporary data + } + ``` + +3. **Event Emission** + + ```typescript + protected emitProgress(progress: number): void { + this.devTools.emitter.emit('progress', { value: progress }); + } + ``` + +4. 
**Tool Management**
+   ```typescript
+   protected async validateTools(): Promise<void> {
+     for (const tool of this.tools) {
+       if (!(await tool.validate())) {
+         throw new Error(`Tool validation failed: ${tool.name}`);
+       }
+     }
+   }
+   ```
+
+## Design Guidelines
+
+1. **Error Handling**
+
+   - Use `AgentError` for agent-specific errors
+   - Implement proper cleanup in `finally` blocks
+   - Handle tool execution errors gracefully
+
+2. **State Management**
+
+   - Use the `isRunning` flag to prevent concurrent executions
+   - Implement proper state cleanup in the `destroy` method
+   - Use snapshots for state persistence
+
+3. **Event Emission**
+
+   - Configure appropriate event namespaces
+   - Emit events for significant state changes
+   - Include relevant metadata with events
+
+4. **Type Safety**
+
+   - Leverage generic types for input/output typing
+   - Define clear interfaces for options and metadata
+   - Use type guards for runtime safety
+
+## See Also
+
+- [LLM Documentation](./llms.md)
+- [Memory System](./memory.md)
+- [Tools Guide](./tools.md)
+- [DevTools Reference](./dev_tools.md)
+- [Event System](./emitter.md)
diff --git a/docs/cache.md b/docs/cache.md
index 3b91aec8..a536e27a 100644
--- a/docs/cache.md
+++ b/docs/cache.md
@@ -1,81 +1,159 @@
 # Cache
 
-> [!TIP]
->
-> Location within the framework `bee-agent-framework/cache`.
+The `BaseCache` class is the foundation of the Bee Framework's caching system, providing the core interface and functionality for storing and retrieving computation results and data. It enables performance optimization through temporary storage of expensive operations' results and state management across framework components.
+
+## Overview
+
+`BaseCache` serves as the abstract base class that defines the standard interface for all cache implementations in the framework. It provides consistent methods for data storage, retrieval, and cache management while supporting different caching strategies and persistence mechanisms. 
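To make that interface concrete, here is a toy, self-contained cache with the same surface described above. It is synchronous for brevity (the framework's real interface is promise-based) and is illustrative only, not the framework's own code.

```typescript
// Illustrative sketch: method names mirror the documented interface.
class SimpleCache<T> {
  public enabled = true;
  private storage = new Map<string, T>();

  size(): number {
    return this.storage.size;
  }
  set(key: string, value: T): void {
    // Writes are skipped while the cache is disabled.
    if (this.enabled) this.storage.set(key, value);
  }
  get(key: string): T | undefined {
    return this.enabled ? this.storage.get(key) : undefined;
  }
  has(key: string): boolean {
    return this.enabled && this.storage.has(key);
  }
  delete(key: string): boolean {
    return this.storage.delete(key);
  }
  clear(): void {
    this.storage.clear();
  }
}

const cache = new SimpleCache<number>();
cache.set("a", 1);
console.log(cache.get("a")); // 1
cache.enabled = false;
console.log(cache.get("a")); // undefined (reads are skipped while disabled)
cache.enabled = true;
cache.clear();
console.log(cache.size()); // 0
```

The `enabled` flag shows why concrete implementations can be toggled off globally without callers changing their code.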
+ +## Architecture + +```mermaid +classDiagram + class BaseCache { + +boolean enabled + +size() + +set(key: string, value: T) + +get(key: string) + +has(key: string) + +delete(key: string) + +clear() + } + + class UnconstrainedCache { + -Map storage + } + + class SlidingCache { + -number maxSize + -number ttl + -SlidingTaskMap provider + } + + class FileCache { + -string fullPath + -BaseCache provider + +get source() + +reload() + } + + class CacheDecorator { + +CacheKeyFn cacheKey + +number ttl + +boolean enabled + +get(key: string) + +clear() + } + + BaseCache <|-- UnconstrainedCache + BaseCache <|-- SlidingCache + BaseCache <|-- FileCache + BaseCache <-- CacheDecorator + +``` + +## Core Properties -Caching is a process used to temporarily store copies of data or computations in a cache (a storage location) to facilitate faster access upon future requests. The primary purpose of caching is to improve the efficiency and performance of systems by reducing the need to repeatedly fetch or compute the same data from a slower or more resource-intensive source. +| Property | Type | Description | +| --------- | ------------- | ---------------------------------- | +| `enabled` | `boolean` | Whether caching is active | +| `storage` | `Map/TaskMap` | Internal storage mechanism | +| `ttl` | `number` | Time-to-live for cache entries | +| `maxSize` | `number` | Maximum cache size (if applicable) | -## Usage +## Cache Implementations -### Capabilities showcase +### UnconstrainedCache - +Provides unlimited storage capacity with no automatic eviction. 
-```ts
+```typescript
 import { UnconstrainedCache } from "bee-agent-framework/cache/unconstrainedCache";

-const cache = new UnconstrainedCache();
+const cache = new UnconstrainedCache();

-// Save
-await cache.set("a", 1);
-await cache.set("b", 2);
+await cache.set("key1", 100);
+const value = await cache.get("key1"); // 100
+console.log(await cache.size()); // 1
+```

-// Read
-const result = await cache.get("a");
-console.log(result); // 1
+_Source: [examples/cache/unconstrainedCache.ts](/examples/cache/unconstrainedCache.ts)_

-// Meta
-console.log(cache.enabled); // true
-console.log(await cache.has("a")); // true
-console.log(await cache.has("b")); // true
-console.log(await cache.has("c")); // false
-console.log(await cache.size()); // 2
+### SlidingCache

-// Delete
-await cache.delete("a");
-console.log(await cache.has("a")); // false
+Maintains a fixed-size cache with optional TTL support; once full, the oldest entries are evicted.

-// Clear
-await cache.clear();
-console.log(await cache.size()); // 0
+```typescript
+import { SlidingCache } from "bee-agent-framework/cache/slidingCache";
+
+const cache = new SlidingCache({
+  size: 1000, // Maximum entries
+  ttl: 60 * 1000, // 1 minute TTL
+});
+
+await cache.set("user:123", { name: "John" });
+// Once the cache is full, the oldest entry is removed; entries also expire after the TTL
-```

-_Source: [examples/cache/unconstrainedCache.ts](/examples/cache/unconstrainedCache.ts)_
+```
+
+_Source: [examples/cache/slidingCache.ts](/examples/cache/slidingCache.ts)_

-### Caching function output + intermediate steps
+### FileCache

-
+Persists cache data to the filesystem. 
-```ts -import { UnconstrainedCache } from "bee-agent-framework/cache/unconstrainedCache"; + -const cache = new UnconstrainedCache(); +```typescript +import { FileCache } from "bee-agent-framework/cache/fileCache"; -async function fibonacci(n: number): Promise { - const cacheKey = n.toString(); - const cached = await cache.get(cacheKey); - if (cached !== undefined) { - return cached; - } +const cache = new FileCache({ + fullPath: "/path/to/cache.json", +}); + +await cache.set("user:123", userData); +// Data is automatically persisted to disk +``` + +_Source: [examples/cache/fileCache.ts](/examples/cache/fileCache.ts)_ + +> [!NOTE] +> +> Provided location (`fullPath`) doesn't have to exist. It gets automatically created when needed. + +> [!NOTE] +> +> Every modification to the cache (adding, deleting, clearing) immediately updates the target file. + +### Cache Decorator + +Method-level caching using TypeScript decorators. + + + +```typescript +import { Cache } from "bee-agent-framework/cache/decoratorCache"; - const result = n < 1 ? 0 : n <= 2 ? 
1 : (await fibonacci(n - 1)) + (await fibonacci(n - 2)); - await cache.set(cacheKey, result); - return result; +class Generator { + @Cache() + get(seed: number) { + return (Math.random() * 1000) / Math.max(seed, 1); + } } -console.info(await fibonacci(10)); // 55 -console.info(await fibonacci(9)); // 34 (retrieved from cache) -console.info(`Cache size ${await cache.size()}`); // 10 +const generator = new Generator(); +const a = generator.get(5); +const b = generator.get(5); +console.info(a === b); // true +console.info(a === generator.get(6)); // false ``` -_Source: [examples/cache/unconstrainedCacheFunction.ts](/examples/cache/unconstrainedCacheFunction.ts)_ +_Source: [examples/cache/decoratorCache.ts](/examples/cache/decoratorCache.ts)_ -### Usage with tools +## Integration Examples - +### With Tools -```ts +```typescript import { SlidingCache } from "bee-agent-framework/cache/slidingCache"; import { WikipediaTool } from "bee-agent-framework/tools/search/wikipedia"; @@ -86,23 +164,22 @@ const ddg = new WikipediaTool({ }), }); +// Results are cached automatically const response = await ddg.run({ query: "United States", }); -// upcoming requests with the EXACTLY same input will be retrieved from the cache +const response2 = await ddg.run({ + query: "United States", +}); // From cache ``` _Source: [examples/cache/toolCache.ts](/examples/cache/toolCache.ts)_ -> [!IMPORTANT] -> -> Cache key is created by serializing function parameters (the order of keys in the object does not matter). - -### Usage with LLMs +### With LLMs -```ts +```typescript import { SlidingCache } from "bee-agent-framework/cache/slidingCache"; import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; @@ -132,207 +209,48 @@ _Source: [examples/cache/llmCache.ts](/examples/cache/llmCache.ts)_ > > Caching for non-chat LLMs works exactly the same way. 
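Repeated calls hit the cache because the cache key is derived from the serialized call parameters, so the order of keys in the input object does not matter. The following self-contained sketch shows one way such an order-independent key can be built; it is illustrative only, and the framework ships its own key functions.

```typescript
// Sketch: recursively sort object keys before serializing, so that
// { a, b } and { b, a } produce the same cache key.
function stableKey(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableKey).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${stableKey(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

console.log(stableKey({ query: "United States", limit: 3 }));
console.log(
  stableKey({ limit: 3, query: "United States" }) ===
    stableKey({ query: "United States", limit: 3 }),
); // true
```

Note that a plain `JSON.stringify` would not have this property, since it preserves insertion order of object keys.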
-## Cache types - -The framework provides multiple out-of-the-box cache implementations. - -### UnconstrainedCache - -Unlimited in size. - -```ts -import { UnconstrainedCache } from "bee-agent-framework/cache/unconstrainedCache"; -const cache = new UnconstrainedCache(); - -await cache.set("a", 1); -console.log(await cache.has("a")); // true -console.log(await cache.size()); // 1 -``` - -### SlidingCache - -Keeps last `k` entries in the memory. The oldest ones are deleted. - - - -```ts -import { SlidingCache } from "bee-agent-framework/cache/slidingCache"; - -const cache = new SlidingCache({ - size: 3, // (required) number of items that can be live in the cache at a single moment - ttl: 1000, // (optional, default is Infinity) Time in milliseconds after the element is removed from a cache -}); - -await cache.set("a", 1); -await cache.set("b", 2); -await cache.set("c", 3); - -await cache.set("d", 4); // overflow - cache internally removes the oldest entry (key "a") -console.log(await cache.has("a")); // false -console.log(await cache.size()); // 3 -``` - -_Source: [examples/cache/slidingCache.ts](/examples/cache/slidingCache.ts)_ - -### FileCache - -One may want to persist data to a file so that the data can be later loaded. In that case the `FileCache` is ideal candidate. -You have to provide a location where the cache is persisted. - - - -```ts -import { FileCache } from "bee-agent-framework/cache/fileCache"; -import * as os from "node:os"; +## Cache Key Generation -const cache = new FileCache({ - fullPath: `${os.tmpdir()}/bee_file_cache_${Date.now()}.json`, -}); -console.log(`Saving cache to "${cache.source}"`); -await cache.set("abc", { firstName: "John", lastName: "Doe" }); -``` +### Built-in Key Generators -_Source: [examples/cache/fileCache.ts](/examples/cache/fileCache.ts)_ +```typescript +// Object-based hashing +const objectKey = ObjectHashKeyFn(input); -> [!NOTE] -> -> Provided location (`fullPath`) doesn't have to exist. 
It gets automatically created when needed. +// Singleton key (same key always) +const singleKey = SingletonCacheKeyFn(input); -> [!NOTE] -> -> Every modification to the cache (adding, deleting, clearing) immediately updates the target file. +// WeakRef-based key generation +const weakKey = WeakRefKeyFn(input); -#### Using a custom provider - - - -```ts -import { FileCache } from "bee-agent-framework/cache/fileCache"; -import { UnconstrainedCache } from "bee-agent-framework/cache/unconstrainedCache"; -import os from "node:os"; - -const memoryCache = new UnconstrainedCache(); -await memoryCache.set("a", 1); - -const fileCache = await FileCache.fromProvider(memoryCache, { - fullPath: `${os.tmpdir()}/bee_file_cache.json`, -}); -console.log(`Saving cache to "${fileCache.source}"`); -console.log(await fileCache.get("a")); // 1 +// JSON stringification +const jsonKey = JSONCacheKeyFn(input); ``` -_Source: [examples/cache/fileCacheCustomProvider.ts](/examples/cache/fileCacheCustomProvider.ts)_ - -### NullCache - -The special type of cache is `NullCache` which implements the `BaseCache` interface but does nothing. - -The reason for implementing is to enable [Null object pattern](https://en.wikipedia.org/wiki/Null_object_pattern). - -### @Cache (decorator cache) - - - -```ts -import { Cache } from "bee-agent-framework/cache/decoratorCache"; - -class Generator { - @Cache() - get(seed: number) { - return (Math.random() * 1000) / Math.max(seed, 1); - } -} - -const generator = new Generator(); -const a = generator.get(5); -const b = generator.get(5); -console.info(a === b); // true -console.info(a === generator.get(6)); // false +### Custom Key Generation + +```typescript +const customKeyFn: CacheKeyFn = (...args: any[]) => { + return args.map(arg => + typeof arg === 'object' + ? 
JSON.stringify(arg) + : String(arg) + ).join(':'); +}; + +@Cache({ + cacheKey: customKeyFn +}) +method() { } ``` -_Source: [examples/cache/decoratorCache.ts](/examples/cache/decoratorCache.ts)_ - -**Complex example** - - - -```ts -import { Cache, SingletonCacheKeyFn } from "bee-agent-framework/cache/decoratorCache"; - -class MyService { - @Cache({ - cacheKey: SingletonCacheKeyFn, - ttl: 3600, - enumerable: true, - enabled: true, - }) - get id() { - return Math.floor(Math.random() * 1000); - } - - reset() { - Cache.getInstance(this, "id").clear(); - } -} - -const service = new MyService(); -const a = service.id; -console.info(a === service.id); // true -service.reset(); -console.info(a === service.id); // false -``` - -_Source: [examples/cache/decoratorCacheComplex.ts](/examples/cache/decoratorCacheComplex.ts)_ - -> [!NOTE] -> -> Default `cacheKey` function is `ObjectHashKeyFn` - -> [!CAUTION] -> -> Calling an annotated method with the `@Cache` decorator with different parameters (despite the fact you are not using them) yields in cache bypass (different arguments = different cache key) generated. -> Be aware of that. If you want your method always to return the same response, use `SingletonCacheKeyFn`. - -### CacheFn - -Because previously mentioned `CacheDecorator` can be applied only to class methods/getter the framework -provides a way how to do caching on a function level. 
- - - -```ts -import { CacheFn } from "bee-agent-framework/cache/decoratorCache"; -import { setTimeout } from "node:timers/promises"; - -const getSecret = CacheFn.create( - async () => { - // instead of mocking response you would do a real fetch request - const response = await Promise.resolve({ secret: Math.random(), expiresIn: 100 }); - getSecret.updateTTL(response.expiresIn); - return response.secret; - }, - {}, // options object -); - -const token = await getSecret(); -console.info(token === (await getSecret())); // true -await setTimeout(150); -console.info(token === (await getSecret())); // false -``` - -_Source: [examples/cache/cacheFn.ts](/examples/cache/cacheFn.ts)_ - -> [!NOTE] -> -> Internally, the function is wrapped as a class; therefore, the same rules apply here as if it were a method annotated with the `@Cache` decorator. - -## Creating a custom cache provider +## Custom cache provider implementation To create your cache implementation, you must implement the `BaseCache` class. -```ts +```typescript import { BaseCache } from "bee-agent-framework/cache/base"; import { NotImplementedError } from "bee-agent-framework/errors"; @@ -373,4 +291,60 @@ export class CustomCache extends BaseCache { _Source: [examples/cache/custom.ts](/examples/cache/custom.ts)_ -The simplest implementation is `UnconstrainedCache`, which can be found [here](/src/cache/unconstrainedCache.ts). +## Best Practices + +1. **Cache Strategy Selection** + + ```typescript + // For memory-sensitive applications + const cache = new SlidingCache({ + size: 1000, + ttl: 3600 * 1000, + }); + + // For persistent storage needs + const cache = new FileCache({ + fullPath: "/path/to/cache.json", + }); + ``` + +2. **TTL Management** + + ```typescript + // Set appropriate TTL for data freshness + @Cache({ + ttl: 5 * 60 * 1000, // 5 minutes + enabled: true + }) + getData() { } + ``` + +3. 
**Cache Invalidation** + + ```typescript + // Clear specific entries + await cache.delete("key"); + + // Clear entire cache + await cache.clear(); + + // Selective clearing with decorators + Cache.getInstance(this, "methodName").clear(); + ``` + +4. **Resource Management** + + ```typescript + // Monitor cache size + const size = await cache.size(); + + // Enable/disable as needed + cache.enabled = false; + ``` + +## See Also + +- [Agent System](./agent.md) +- [Tools System](./tools.md) +- [LLM Integration](./llms.md) +- [Serialization](./serialization.md) diff --git a/docs/dev_tools.md b/docs/dev_tools.md new file mode 100644 index 00000000..93ce20a7 --- /dev/null +++ b/docs/dev_tools.md @@ -0,0 +1,158 @@ +# Developer Tools + +The Developer Tools ecosystem in the Bee Framework provides essential functionality for debugging, monitoring, error handling, and maintaining applications. These tools form the foundation for reliable agent operations and development workflows. + +## Overview + +The Developer Tools system consists of several key components: + +- Error Handling System +- Logging System +- Instrumentation and Telemetry +- Serialization System + +Each component is designed to work independently while integrating seamlessly with the broader framework. 
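The retryable vs. non-retryable split handled by the error system can be illustrated with a toy error hierarchy. The class names below are invented for the example and are not the framework's real error classes.

```typescript
// Illustrative sketch: errors carry a retryable flag plus chained context.
class FrameworkishError extends Error {
  constructor(
    message: string,
    public readonly retryable: boolean,
    public readonly causedBy?: Error, // preserved context for error chaining
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class NetworkError extends FrameworkishError {
  constructor(message: string, causedBy?: Error) {
    super(message, true, causedBy); // transient failure: worth retrying
  }
}

class ValidationError extends FrameworkishError {
  constructor(message: string) {
    super(message, false); // bad input: retrying will not help
  }
}

function isRetryable(error: Error): boolean {
  return error instanceof FrameworkishError && error.retryable;
}

console.log(isRetryable(new NetworkError("timeout"))); // true
console.log(isRetryable(new ValidationError("missing field"))); // false
```

A retry loop can then consult `isRetryable` before re-attempting a step, which is the pattern the best practices below refer to.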
+ +## Architecture + +```mermaid +classDiagram + + class ErrorHandler { + +FrameworkError baseError + +handle(error: Error) + +format(error: Error) + +isRetryable(error: Error) + } + + class Logger { + +LogLevel level + +string name + +trace(msg: string) + +debug(msg: string) + +info(msg: string) + +warn(msg: string) + +error(msg: string) + } + + class Instrumentation { + +boolean enabled + +Tracer tracer + +createSpan() + +recordMetric() + } + + class Serializer { + +serialize(data: any) + +deserialize(data: string) + +register(class: Class) + } + + ErrorHandler + Logger + Instrumentation + Serializer +``` + +## Core Components + +### Error Handling + +The error handling system provides a structured approach to managing and propagating errors throughout the application. + +Key features: + +- Hierarchical error classification +- Error chaining and context preservation +- Standardized error formatting +- Retry management + +[Learn more about Error Handling](./errors.md) + +### Logging + +A flexible logging system built on top of Pino with enhanced capabilities for agent development. + +Key features: + +- Multiple log levels +- Structured logging +- Child loggers +- Pretty printing support + +[Learn more about Logging](./logger.md) + +### Instrumentation + +OpenTelemetry integration for comprehensive application monitoring and tracing. + +Key features: + +- Distributed tracing +- Performance metrics +- Custom span creation +- Flexible configuration + +[Learn more about Instrumentation](./instrumentation.md) + +### Serialization + +Robust serialization system for handling complex object graphs and class instances. + +Key features: + +- Class-aware serialization +- Circular reference handling +- Custom serializer registration +- Snapshot system + +[Learn more about Serialization](./serialization.md) + +## Best Practices + +1. 
**Error Handling** + + - Use appropriate error classes for different types of failures + - Include relevant context in error objects + - Handle retryable vs non-retryable errors appropriately + +2. **Instrumentation** + + - Enable telemetry in production environments + - Create custom spans for important operations + - Use meaningful span names and attributes + +3. **Logging** + + - Use appropriate log levels + - Include structured data in log messages + - Create child loggers for different components + +4. **Serialization** + - Register custom classes before serialization + - Implement proper snapshot methods + - Handle circular references carefully + +## Environment Configuration + +The development tools can be configured through environment variables: + +```bash +# Logging +export BEE_FRAMEWORK_LOG_LEVEL=debug +export BEE_FRAMEWORK_LOG_PRETTY=true + +# Instrumentation +export BEE_FRAMEWORK_INSTRUMENTATION_ENABLED=true +export INSTRUMENTATION_IGNORED_KEYS="apiToken,accessToken" + +# Error Handling +export BEE_FRAMEWORK_ERROR_VERBOSE=true +``` + +## See Also + +- [Agent System](./agent.md) +- [Memory System](./memory.md) +- [Tool System](./tools.md) +- [LLM System](./llms.md) diff --git a/docs/emitter.md b/docs/emitter.md index 7f9c439d..3abac6c7 100644 --- a/docs/emitter.md +++ b/docs/emitter.md @@ -1,122 +1,180 @@ -# Emitter (Observability) +# Emitter + +The `Emitter` class is the foundation of the Bee Framework's event system, providing robust observability and event handling capabilities across all framework components. It enables granular monitoring of internal operations, event propagation, and system-wide event handling. + +## Overview + +`Emitter` serves as the event management system that allows framework components to emit events, propagate them through the system, and enable observers to monitor and react to these events. It provides type-safe event handling with support for complex event hierarchies and filtering. 
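The emit/on/match behavior can be illustrated with a tiny, self-contained emitter. This is a sketch only; the real `Emitter` additionally carries event metadata, tracing, and typed callbacks.

```typescript
// Illustrative sketch of namespaced events with wildcard matching.
type ListenerFn = (data: unknown, eventPath: string) => void;

class TinyEmitter {
  private listeners: { pattern: RegExp; fn: ListenerFn }[] = [];

  constructor(private namespace: string[]) {}

  on(name: string, fn: ListenerFn): void {
    this.match(`${this.namespace.join(".")}.${name}`, fn);
  }

  // `*` matches a single path segment, e.g. "app.*" matches "app.start".
  match(pattern: string, fn: ListenerFn): void {
    const source = pattern
      .split(".")
      .map((p) => (p === "*" ? "[^.]+" : p))
      .join("\\.");
    this.listeners.push({ pattern: new RegExp(`^${source}$`), fn });
  }

  emit(name: string, data: unknown): void {
    const path = `${this.namespace.join(".")}.${name}`;
    for (const { pattern, fn } of this.listeners) {
      if (pattern.test(path)) fn(data, path);
    }
  }
}

const emitter = new TinyEmitter(["app"]);
const seen: string[] = [];
emitter.on("start", (_, path) => seen.push(path));
emitter.match("app.*", (_, path) => seen.push(`wildcard:${path}`));
emitter.emit("start", { id: 123 });
console.log(seen); // ["app.start", "wildcard:app.start"]
```

The wildcard form is what makes patterns like `"*.*"` useful for observing everything an agent does.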
+ +## Architecture + +```mermaid +classDiagram + class Emitter { + +string[] namespace + +object creator + +object context + +EventTrace trace + +emit(name: string, value: any) + +on(event: string, callback: Function) + +match(matcher: Matcher, callback: Function) + +child(input: EmitterInput) + +pipe(target: Emitter) + #createEvent(name: string) + } + + class EventMeta { + +string id + +string groupId + +string name + +string path + +Date createdAt + +Emitter source + +object creator + +object context + +EventTrace trace + } + + class Listener { + -Function match + -Matcher raw + -Callback callback + -EmitterOptions options + } + + class EventTrace { + +string id + +string runId + +string parentRunId + } + + Emitter *-- Listener + Emitter --> EventMeta + EventMeta --> EventTrace -> Location within the framework `bee-agent-framework/emitter`. +``` -An emitter is a core functionality of the framework that allows you to see what is happening under the hood. +## Core Properties -## Standalone usage +| Property | Type | Description | +| ----------- | ------------ | ------------------------------- | +| `namespace` | `string[]` | Event namespace hierarchy | +| `creator` | `object` | Object that created the emitter | +| `context` | `object` | Contextual data for events | +| `trace` | `EventTrace` | Tracing information | -The following examples demonstrate how [`Emitter`](/src/emitter/emitter.ts) concept works. +## Main Methods -### Basic Usage +### Public Methods - +#### `emit(name: string, value: any): Promise` -```ts -import { Emitter, EventMeta } from "bee-agent-framework/emitter/emitter"; +Emits an event to all registered listeners. 
-// Get the root emitter or create your own -const root = Emitter.root; +```typescript +import { Emitter } from "bee-agent-framework/emitter/emitter"; -root.match("*.*", async (data: unknown, event: EventMeta) => { - console.log(`Received event '${event.path}' with data ${JSON.stringify(data)}`); -}); +const emitter = new Emitter({ namespace: ["app"] }); -await root.emit("start", { id: 123 }); -await root.emit("end", { id: 123 }); +await emitter.emit("start", { id: 123 }); +await emitter.emit("end", { id: 123 }); ``` _Source: [examples/emitter/base.ts](/examples/emitter/base.ts)_ -> [!NOTE] -> -> You can create your own emitter by initiating the `Emitter` class, but typically it's better to use or fork the root one (as can be seen in the following examples). +#### `on(event: string, callback: Function): CleanupFn` -### Advanced +Registers a listener for a specific event. - +```typescript +import { Emitter } from "bee-agent-framework/emitter/emitter"; -```ts -import { Emitter, EventMeta, Callback } from "bee-agent-framework/emitter/emitter"; +const emitter = new Emitter({ namespace: ["app"] }); -// Define events in advanced -interface Events { - start: Callback<{ id: number }>; - update: Callback<{ id: number; data: string }>; -} - -// Create emitter with a type support -const emitter = Emitter.root.child({ - namespace: ["bee", "demo"], - creator: {}, // typically a class - context: {}, // custom data (propagates to the event's context property) - groupId: undefined, // optional id for grouping common events (propagates to the event's groupId property) - trace: undefined, // data related to identity what emitted what and which context (internally used by framework's components) -}); - -// Listen for "start" event -emitter.on("start", async (data, event: EventMeta) => { - console.log(`Received ${event.name} event with id "${data.id}"`); +emitter.on("update", (data, event) => { + console.log(`Event ${event.name} received with data:`, data); }); -// Listen for "update" 
event -emitter.on("update", async (data, event: EventMeta) => { - console.log(`Received ${event.name}' with id "${data.id}" and data ${data.data}`); -}); - -await emitter.emit("start", { id: 123 }); await emitter.emit("update", { id: 123, data: "Hello Bee!" }); ``` _Source: [examples/emitter/advanced.ts](/examples/emitter/advanced.ts)_ -> [!NOTE] -> -> Because we created the `Emitter` instance directly emitted events will not be propagated to the `root` which may or may not be desired. -> The `piping` concept is explained later on. +#### `match(matcher: Matcher, callback: Function): CleanupFn` -### Event Matching +Registers a listener with advanced matching capabilities. -```ts -import { Callback, Emitter } from "bee-agent-framework/emitter/emitter"; -import { BaseLLM } from "bee-agent-framework/llms/base"; +```typescript +import { Emitter } from "bee-agent-framework/emitter/emitter"; -interface Events { - update: Callback<{ data: string }>; -} +const emitter = new Emitter({ namespace: ["app"] }); -const emitter = new Emitter({ - namespace: ["app"], +// Match all events in namespace +emitter.match("*.*", (data, event) => { + console.log(`${event.path}: ${JSON.stringify(data)}`); }); -// Match events by a concrete name (strictly typed) -emitter.on("update", async (data, event) => {}); - -// Match all events emitted directly on the instance (not nested) -emitter.match("*", async (data, event) => {}); - -// Match all events (included nested) -emitter.match("*.*", async (data, event) => {}); +// Match with regular expression +emitter.match(/error/, (data, event) => { + console.log(`Error event: ${event.name}`); +}); -// Match events by providing a filter function +// Match with custom function emitter.match( - (event) => event.creator instanceof BaseLLM, - async (data, event) => {}, + (event) => event.context.priority === "high", + (data, event) => { + console.log(`High priority event: ${event.name}`); + }, ); - -// Match events by regex -emitter.match(/watsonx/, async 
(data, event) => {}); ``` _Source: [examples/emitter/matchers.ts](/examples/emitter/matchers.ts)_ -### Event Piping +#### `child(input: EmitterInput): Emitter` + +Creates a new emitter that inherits from the parent. + +```typescript +import { Emitter } from "bee-agent-framework/emitter/emitter"; + +const parentEmitter = new Emitter({ namespace: ["app"] }); +const childEmitter = parentEmitter.child({ + namespace: ["bee", "demo"], + creator: {}, // typically a class + context: {}, // custom data (propagates to the event's context property) + groupId: undefined, // optional id for grouping common events (propagates to the event's groupId property) + trace: undefined, // data related to identity what emitted what and which context (internally used by framework's components) +}); +``` + +_Source: [examples/emitter/advanced.ts](/examples/emitter/advanced.ts)_ + +### Event Handling + +#### Event Types + +```typescript +import { Callback, Emitter } from "bee-agent-framework/emitter/emitter"; + +interface Events { + start: Callback<{ id: number }>; + progress: Callback<{ id: number; percent: number }>; + complete: Callback<{ id: number; result: any }>; +} + +const emitter = new Emitter({ + namespace: ["process"], + context: { service: "background" }, +}); +``` + +#### Event Piping/Propagation -```ts +```typescript import { Emitter, EventMeta } from "bee-agent-framework/emitter/emitter"; const first = new Emitter({ @@ -153,15 +211,13 @@ await second.emit("d", {}); _Source: [examples/emitter/piping.ts](/examples/emitter/piping.ts)_ -## Framework Usage - -Typically, you consume out-of-the-box modules that use the `Emitter` concept on your behalf. 
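The piping behavior can be reduced to a small, dependency-free sketch. `PipeableEmitter` is a hypothetical stand-in for the framework's `Emitter`; the cleanup-function pattern mirrors what `pipe` returns:

```typescript
type Listener = (name: string, data: unknown) => void;

// Minimal sketch of piping: every event emitted on the source is re-delivered
// to the piped target's listeners as well.
class PipeableEmitter {
  private listeners: Listener[] = [];
  private targets: PipeableEmitter[] = [];

  on(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Returns a function that undoes the pipe, like the framework's CleanupFn.
  pipe(target: PipeableEmitter): () => void {
    this.targets.push(target);
    return () => {
      this.targets = this.targets.filter((t) => t !== target);
    };
  }

  emit(name: string, data: unknown): void {
    this.listeners.forEach((fn) => fn(name, data));
    this.targets.forEach((t) => t.emit(name, data)); // propagate downstream
  }
}

const first = new PipeableEmitter();
const second = new PipeableEmitter();

const log: string[] = [];
second.on((name) => log.push(`second:${name}`));

const unpipe = first.pipe(second);
first.emit("a", {}); // reaches second's listeners via the pipe
unpipe();
first.emit("b", {}); // no longer propagated
```

This is also why events emitted on a standalone `Emitter` instance do not reach the root emitter unless you pipe them there explicitly.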
+## Integration Examples -## Agent usage +### With Agents -```ts +```typescript import { BeeAgent } from "bee-agent-framework/agents/bee/agent"; import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; @@ -200,3 +256,79 @@ _Source: [examples/emitter/agentMatchers.ts](/examples/emitter/agentMatchers.ts) > [!TIP] > > To verify if a given class instance has one, check for the presence of the `emitter` property. + +### With Tools + +> [!IMPORTANT] +> +> The `observe` method is also supported on [`Tools`](./tools.md) and [`LLMs`](./llms.md). + +```typescript +const tool = new SearchTool(); + +tool.emitter.match("*.*", (data, event) => { + console.log(`Tool event: ${event.path}`); +}); + +await tool.run({ query: "test" }).observe((emitter) => { + emitter.on("start", (data) => { + console.log("Search started:", data); + }); + + emitter.on("result", (data) => { + console.log("Search result:", data); + }); +}); +``` + +## Best Practices + +1. **Event Naming** + + ```typescript + // Good - clear, descriptive names + await emitter.emit("processingStarted", { jobId: "123" }); + + // Avoid - unclear names + await emitter.emit("proc", { id: "123" }); + ``` + +2. **Context Usage** + + ```typescript + const emitter = new Emitter({ + context: { + component: "auth", + version: "1.0", + }, + }); + ``` + +3. **Event Cleanup** + + ```typescript + const cleanup = emitter.on("event", callback); + try { + // Use emitter + } finally { + cleanup(); + } + ``` + +4. 
**Type Safety** + + ```typescript + interface MyEvents { + start: Callback<{ id: string }>; + end: Callback<{ id: string; result: any }>; + } + + const emitter = new Emitter(); + ``` + +## See Also + +- [Agent System](./agent.md) +- [Tools System](./tools.md) +- [Error Handling](./errors.md) +- [Logging System](./logger.md) diff --git a/docs/errors.md b/docs/errors.md index 9bf99324..898afbfe 100644 --- a/docs/errors.md +++ b/docs/errors.md @@ -1,12 +1,66 @@ # Error Handling +The `FrameworkError` class is the foundation of the Bee Framework's error handling system, providing a robust and consistent approach to error management across all framework components. Built on Node.js's native `AggregateError`, it enables sophisticated error chaining, contextual information preservation, and standardized error handling patterns. + +## Overview + +`FrameworkError` serves as the base class for all framework-specific errors, offering enhanced capabilities for error aggregation, context preservation, and error chain management. It provides a consistent interface for error handling while supporting both synchronous and asynchronous operations. + +## Architecture + +```mermaid +classDiagram + class FrameworkError { + +boolean isFatal + +boolean isRetryable + +Record context + +traverseErrors() + +getCause() + +explain() + +dump() + +hasFatalError() + +static ensure(error: Error) + } + + class ToolError { + +Record context + } + + class AgentError { + +Record context + } + + class SerializerError { + +Record context + } + + class LoggerError { + +Record context + } + + class ValueError { + +Record context + } + + FrameworkError <|-- ToolError + FrameworkError <|-- AgentError + FrameworkError <|-- SerializerError + FrameworkError <|-- LoggerError + FrameworkError <|-- ValueError + + class AggregateError { + +Error[] errors + +string message + } + + FrameworkError --|> AggregateError +``` + > [!TIP] > > Location within the framework `bee-agent-framework/error`. 
-Error handling is a critical part of any JavaScript application, especially when dealing with asynchronous operations, various error types, and error propagation across multiple layers. In the Bee Agent Framework, we provide a robust and consistent error-handling structure that ensures reliability and ease of debugging. - -## The `FrameworkError` class +# Error Handling All errors thrown within the Bee Agent Framework extend from the base [FrameworkError](/src/errors.ts) class, which itself extends Node.js's native [AggregateError](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/AggregateError). @@ -20,9 +74,26 @@ Benefits of using `FrameworkError`: This structure ensures that users can trace the complete error history while clearly identifying any errors originating from the Bee Agent Framework. +## Core Properties + +| Property | Type | Description | +| ------------- | ------------------------- | ------------------------------------- | +| `isFatal` | `boolean` | Indicates if error is unrecoverable | +| `isRetryable` | `boolean` | Indicates if operation can be retried | +| `context` | `Record` | Additional error context | +| `errors` | `Error[]` | Aggregated child errors | + +## Main Methods + +### Public Methods + +#### `traverseErrors(): Generator` + +Traverses the error chain, yielding all nested errors. + -```ts +```typescript import { FrameworkError } from "bee-agent-framework/errors"; const error = new FrameworkError( @@ -42,9 +113,6 @@ const error = new FrameworkError( console.log("Message", error.message); // Main error message console.log("Meta", { fatal: error.isFatal, retryable: error.isRetryable }); // Is the error fatal/retryable? 
console.log("Context", error.context); // Context in which the error occurred -console.log(error.explain()); // Human-readable format without stack traces (ideal for LLMs) -console.log(error.dump()); // Full error dump, including sub-errors -console.log(error.getCause()); // Retrieve the initial cause of the error ``` _Source: [examples/errors/base.ts](/examples/errors/base.ts)_ @@ -53,11 +121,54 @@ _Source: [examples/errors/base.ts](/examples/errors/base.ts)_ > > Every error thrown from the framework is an instance of the `FrameworkError` class, ensuring consistency across the codebase. +#### `explain(): string` + +Generates a human-readable explanation of the error chain. + +```typescript +import { FrameworkError } from "bee-agent-framework/errors"; + +const error = new FrameworkError("API call failed", [new Error("Network timeout")], { + context: { endpoint: "/users" }, + isFatal: false, +}); + +console.log(error.explain()); +// Output: +// API call failed +// Network timeout +``` + > [!TIP] > > The `explain()` method is particularly useful for returning a simplified, human-readable error message to an LLM, as used by the Bee Agent. -## Specialized Error Classes +#### `dump(): string` + +Provides a detailed inspection of the error object. + +```typescript +const error = new FrameworkError("Data validation failed"); +console.log(error.dump()); +// Detailed error structure including stack traces +``` + +### Static Methods + +#### `ensure(error: Error): FrameworkError` + +Ensures an error is wrapped as a FrameworkError. + +```typescript +try { + throw new Error("Regular error"); +} catch (e) { + const frameworkError = FrameworkError.ensure(e); + console.log(frameworkError instanceof FrameworkError); // true +} +``` + +## Specialized Error Types The Bee Agent Framework extends FrameworkError to create specialized error classes for different components. 
This ensures that each part of the framework has clear and well-defined error types, improving debugging and error handling. @@ -65,36 +176,18 @@ The Bee Agent Framework extends FrameworkError to create specialized error class > > Casting an unknown error to a `FrameworkError` can be done by calling the `FrameworkError.ensure` static method ([example](/examples/errors/cast.ts)). -### Tools - -When a tool encounters an error, it throws a `ToolError`, which extends `FrameworkError`. If input validation fails, a `ToolInputValidationError` (which extends `ToolError`) is thrown. +### ToolError - +For errors occurring during tool execution. -```ts -import { DynamicTool, ToolError } from "bee-agent-framework/tools/base"; +```typescript +import { ToolError } from "bee-agent-framework/tools/base"; import { FrameworkError } from "bee-agent-framework/errors"; -import { z } from "zod"; - -const tool = new DynamicTool({ - name: "dummy", - description: "dummy", - inputSchema: z.object({}), - handler: async () => { - throw new Error("Division has failed."); - }, -}); - -try { - await tool.run({}); -} catch (e) { - const err = e as FrameworkError; - console.log(e instanceof ToolError); // true - console.log("===DUMP==="); - console.log(err.dump()); - console.log("===EXPLAIN==="); - console.log(err.explain()); +class ToolError extends FrameworkError { + constructor(message: string, errors?: Error[], context?: Record) { + super(message, errors, { context }); + } } ``` @@ -104,18 +197,105 @@ _Source: [examples/errors/tool.ts](/examples/errors/tool.ts)_ > > If you throw a `ToolError` intentionally in a custom tool, the framework will not apply any additional "wrapper" errors, preserving the original error context. -### Agents +### AgentError -Throw `AgentError` class which extends `FrameworkError` class. +For errors occurring during agent execution. 
-### Prompt Templates +```typescript +class AgentError extends FrameworkError { + constructor(message: string, errors?: Error[], context?: Record) { + super(message, errors, { context }); + } +} +``` -Throw `PromptTemplateError` class which extends `FrameworkError` class. +### Other Specialized Types -### Loggers +- `SerializerError`: For serialization/deserialization errors +- `LoggerError`: For logging system errors +- `ValueError`: For value validation errors +- `NotImplementedError`: For unimplemented features +- `AbortError`: For cancelled operations -Throw `LoggerError` class which extends `FrameworkError` class. +## Error Handling Examples -### Serializers +### Basic Error Creation + +```typescript +const error = new FrameworkError("Operation failed", [new Error("Root cause")], { + context: { operation: "data-fetch" }, + isFatal: true, + isRetryable: false, +}); +``` + +### Error Chaining + +```typescript +const rootError = new Error("Database connection failed"); +const middlewareError = new FrameworkError("Query execution failed", [rootError]); +const applicationError = new FrameworkError("Data retrieval failed", [middlewareError]); + +console.log(applicationError.getCause().message); // "Database connection failed" +``` + +### Error in Tools + +```typescript +try { + await tool.execute(); +} catch (error) { + if (error instanceof ToolError) { + console.log(error.explain()); + if (error.isRetryable) { + // Attempt retry + } + } +} +``` -Throw `SerializerError` class which extends `FrameworkError` class. +## Best Practices + +1. **Error Creation** + + ```typescript + throw new FrameworkError("Clear, descriptive message", [originalError], { + context: { relevant: "data" }, + isFatal: whenUnrecoverable, + isRetryable: whenRetryPossible, + }); + ``` + +2. **Context Preservation** + + ```typescript + catch (error) { + throw new FrameworkError("Higher-level context", [error], { + context: { ...error.context, newInfo: "value" } + }); + } + ``` + +3. 
**Error Recovery** + + ```typescript + if (!error.hasFatalError() && error.isRetryable) { + // Implement retry logic + } + ``` + +4. **Error Reporting** + ```typescript + logger.error({ + message: error.explain(), + context: error.context, + stack: error.stack, + }); + ``` + +## See Also + +- [Agent System](./agent.md) +- [Tools System](./tools.md) +- [Logging System](./logger.md) +- [Serialization System](./serialization.md) diff --git a/docs/instrumentation.md b/docs/instrumentation.md index f415bc31..3391e7b3 100644 --- a/docs/instrumentation.md +++ b/docs/instrumentation.md @@ -1,39 +1,159 @@ -# OpenTelemetry Instrumentation in Bee-Agent-Framework +# Instrumentation -This document provides an overview of the OpenTelemetry instrumentation setup in the Bee-Agent-Framework. -The implementation is designed to [create telemetry spans](https://opentelemetry.io/docs/languages/js/instrumentation/#create-spans) for observability when instrumentation is enabled. +The OpenTelemetry instrumentation system in the Bee Framework provides comprehensive observability capabilities through distributed tracing, metrics collection, and performance monitoring. It enables developers to gain deep insights into agent operations, LLM interactions, and tool executions. ## Overview +The instrumentation system uses OpenTelemetry to provide detailed telemetry data across all framework components. It automatically creates spans, records metrics, and tracks performance when enabled, offering valuable insights into system behavior and performance. + OpenTelemetry instrumentation allows you to collect telemetry data, such as traces and metrics, to monitor the performance of your services. This setup involves creating middleware to handle instrumentation automatically when the `INSTRUMENTATION_ENABLED` flag is active. 
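The flag-gated middleware idea can be sketched in a few lines. `createTelemetrySketch` and `Middleware` are hypothetical names used only for illustration — the framework's actual helper is `createTelemetryMiddleware`:

```typescript
// A middleware wraps an operation; when the instrumentation flag is off it
// degrades to a pass-through, so disabled telemetry costs (almost) nothing.
type Middleware = <T>(name: string, op: () => T) => T;

function createTelemetrySketch(env: Record<string, string | undefined>): Middleware {
  const enabled = env.BEE_FRAMEWORK_INSTRUMENTATION_ENABLED === "true";
  if (!enabled) {
    return (_name, op) => op(); // no-op when instrumentation is disabled
  }
  return (name, op) => {
    const start = Date.now();
    try {
      return op();
    } finally {
      // A real implementation would end an OpenTelemetry span here.
      console.log(`[span] ${name} took ${Date.now() - start}ms`);
    }
  };
}

const instrument = createTelemetrySketch({
  BEE_FRAMEWORK_INSTRUMENTATION_ENABLED: "true",
});
const result = instrument("add", () => 1 + 2);

const disabled = createTelemetrySketch({});
const passthrough = disabled("add", () => 1 + 2);
```

Either way the wrapped operation's result is returned unchanged — instrumentation observes the call, it never alters it.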
-## Setting up OpenTelemetry +## Architecture + +```mermaid +classDiagram + class TelemetrySystem { + +Tracer tracer + +SpanProcessor processor + +MetricExporter exporter + +boolean enabled + +createSpan() + +recordMetric() + } + + class Span { + +string spanId + +string traceId + +string parentId + +Attributes attributes + +TimeInput startTime + +TimeInput endTime + +SpanStatus status + } + + class TelemetryMiddleware { + +Map spansMap + +Map parentIdsMap + +createSpans() + +processEvents() + +handleErrors() + } + + class SpanBuilder { + +string name + +string target + +Attributes attributes + +TimeInput startTime + +buildSpan() + } + + TelemetrySystem *-- Span + TelemetrySystem *-- TelemetryMiddleware + TelemetryMiddleware --> SpanBuilder -Follow the official OpenTelemetry [Node.js Getting Started Guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/) to initialize and configure OpenTelemetry in your application. +``` -## Instrumentation Configuration +## Core Components + +### Tracer Configuration + +```typescript +const tracer = opentelemetry.trace.getTracer("bee-agent-framework", Version); +``` + +### Span Creation + +```typescript +interface SpanAttributes { + ctx?: Attributes; + data?: Attributes; + target: string; +} + +interface FrameworkSpan { + attributes: SpanAttributes; + context: { + span_id: string; + }; + name: string; + parent_id?: string; + start_time: TimeInput; + end_time: TimeInput; + status: SpanStatus; +} +``` + +## Integration Examples + +### With Agents + +```typescript +const agent = new BeeAgent({ + llm, + memory, + tools, +}).middleware(createTelemetryMiddleware()); + +await agent.run({ + prompt: "Hello", + devTools: { + enableTelemetry: true, + }, +}); +``` + +### With LLMs + +```typescript +const llm = new ChatLLM().middleware(createTelemetryMiddleware()); + +await llm.generate([{ role: "user", text: "Hello" }], { + telemetry: true, +}); +``` -### Environment Variable +### With Tools -Use the environment variable 
`BEE_FRAMEWORK_INSTRUMENTATION_ENABLED` to enable or disable instrumentation. +```typescript +const tool = new SearchTool().middleware(createTelemetryMiddleware()); + +await tool.run( + { + query: "test", + }, + { + telemetry: true, + }, +); +``` + +## Configuration + +Follow the official OpenTelemetry [Node.js Getting Started Guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/) to initialize and configure OpenTelemetry in your application. + +### Environment Variables ```bash # Enable instrumentation export BEE_FRAMEWORK_INSTRUMENTATION_ENABLED=true -# Ignore sensitive keys from collected events data -export INSTRUMENTATION_IGNORED_KEYS="apiToken,accessToken" + +# Configure ignored keys for sensitive data +export BEE_FRAMEWORK_INSTRUMENTATION_IGNORED_KEYS=apiKey,secret,token + +# Set logging level for instrumentation +export BEE_FRAMEWORK_LOG_LEVEL=debug ``` -If `BEE_FRAMEWORK_INSTRUMENTATION_ENABLED` is false or unset, the framework will run without instrumentation. +## Span Creation Patterns -## Creating Custom Spans +### Basic Span You can manually create spans during the `run` process to track specific parts of the execution. This is useful for adding custom telemetry to enhance observability. 
Example of creating a span: -```ts +```typescript import { trace } from "@opentelemetry/api"; const tracer = trace.getTracer("bee-agent-framework"); @@ -51,12 +171,27 @@ function exampleFunction() { } ``` +## Event Tracking + +```typescript +emitter.match("*.*", (data, meta) => { + const span = createSpan({ + id: meta.id, + name: meta.name, + target: meta.path, + data: getSerializedObjectSafe(data), + ctx: getSerializedObjectSafe(meta.context), + startedAt: meta.createdAt, + }); +}); +``` + ## Verifying Instrumentation Once you have enabled the instrumentation, you can view telemetry data using any [compatible OpenTelemetry backend](https://opentelemetry.io/docs/languages/js/exporters/), such as [Jaeger](https://www.jaegertracing.io/), [Zipkin](https://zipkin.io/), [Prometheus](https://prometheus.io/docs/prometheus/latest/feature_flags/#otlp-receiver), etc... Ensure your OpenTelemetry setup is properly configured to export trace data to your chosen backend. -## Run examples +### Run examples > the right version of node.js must be correctly set @@ -64,35 +199,92 @@ Ensure your OpenTelemetry setup is properly configured to export trace data to y nvm use ``` -### Agent instrumentation - -Running the Instrumented Application (`examples/agents/bee_instrumentation.js`) file. - -```bash -## the telemetry example is run on built js files -yarn start:telemetry ./examples/agents/bee_instrumentation.ts +## Best Practices + +1. **Span Management** + + ```typescript + try { + const span = tracer.startSpan("operation"); + // Operation logic + } catch (error) { + span.recordException(error); + span.setStatus({ code: SpanStatusCode.ERROR }); + throw error; + } finally { + span.end(); + } + ``` + +2. **Attribute Handling** + + ```typescript + span.setAttributes({ + "operation.name": name, + "operation.params": JSON.stringify(params), + "operation.result": JSON.stringify(result), + }); + ``` + +3. 
**Error Tracking** + + ```typescript + function handleError(error: Error, span: Span) { + span.recordException(error); + span.setStatus({ + code: SpanStatusCode.ERROR, + message: error.message, + }); + } + ``` + +4. **Performance Monitoring** + ```typescript + const startTime = performance.now(); + // Operation + span.setAttributes({ + "duration.ms": performance.now() - startTime, + }); + ``` + +## Visualization and Monitoring + +### Compatible Backends + +- Jaeger +- Zipkin +- Prometheus +- OpenTelemetry Collector + +### Example Configuration + +```typescript +import { JaegerExporter } from "@opentelemetry/exporter-jaeger"; + +const exporter = new JaegerExporter({ + endpoint: "http://localhost:14268/api/traces", +}); + +const spanProcessor = new BatchSpanProcessor(exporter); +provider.addSpanProcessor(spanProcessor); ``` -### LLM instrumentation - -Running (`./examples/llms/instrumentation.js`) file. - -```bash -## the telemetry example is run on built js files - -yarn start:telemetry ./examples/llms/instrumentation.ts -``` +## Security Considerations -### Tool instrumentation +1. **Sensitive Data** -Running (`./examples/tools/instrumentation.js`) file. + - Use `INSTRUMENTATION_IGNORED_KEYS` for sensitive fields + - Sanitize data before recording in spans + - Avoid logging credentials or secrets -```bash -## the telemetry example is run on built js files -yarn start:telemetry ./examples/tools/instrumentation.ts -``` +2. **Resource Usage** + - Monitor telemetry overhead + - Use sampling when necessary + - Configure appropriate batch sizes -## Conclusion +## See Also -This setup provides basic OpenTelemetry instrumentation with the flexibility to enable or disable it as needed. -By creating custom spans and using `createTelemetryMiddleware`, you can capture detailed telemetry for better observability and performance insights. 
+- [Agent System](./agent.md) +- [LLM System](./llms.md) +- [Tools System](./tools.md) +- [Logging System](./logger.md) diff --git a/docs/llms.md b/docs/llms.md index f483a528..ad80133b 100644 --- a/docs/llms.md +++ b/docs/llms.md @@ -1,4 +1,59 @@ -# LLMs (inference) +# LLM + +The `BaseLLM` class is the foundation of the Bee Framework's language model integration system, providing the core interface and functionality for interacting with various LLM providers. It serves as the abstract base class that all LLM implementations must extend. + +## Overview + +`BaseLLM` defines the standard interface and basic functionality for LLM interactions in the framework. It handles text generation, chat completions, token management, and provides a consistent interface for different LLM implementations like chat-based, completion-based, and specialized LLM types. + +## Architecture + +```mermaid +classDiagram + class BaseLLM { + +string modelId + +ExecutionOptions executionOptions + +LLMCache cache + +generate(input: TInput, options?: TGenerateOptions) + +stream(input: TInput, options?: StreamGenerateOptions) + +tokenize(input: TInput) + +meta() + #_generate(input, options, run)* + #_stream(input, options, run)* + #_mergeChunks(chunks: TOutput[]) + } + + class LLM { + +LLMInput input + } + + class ChatLLM { + +BaseMessage[] input + } + + class BaseLLMOutput { + +merge(other: BaseLLMOutput) + +getTextContent() + +toString() + +mergeImmutable(other) + } + + class ChatLLMOutput { + +BaseMessage[] messages + } + + BaseLLM <|-- LLM + BaseLLM <|-- ChatLLM + BaseLLMOutput <|-- ChatLLMOutput + + class BaseMessage { + +string role + +string text + +BaseMessageMeta meta + } + + ChatLLM o-- BaseMessage +``` > [!TIP] > @@ -6,238 +61,199 @@ > > Location for base abstraction within the framework `bee-agent-framework/llms`. -A Large Language Model (LLM) is an AI designed to understand and generate human-like text. 
-Trained on extensive text data, LLMs learn language patterns, grammar, context, and basic reasoning to perform tasks like text completion, translation, summarization, and answering questions. - -To unify differences between various APIs, the framework defines a common interface—a set of actions that can be performed with it. - -## Providers (adapters) +## Core Properties -| Name | LLM | Chat LLM | Structured output (constrained decoding) | -| ------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | ---------------------------------------- | -| `WatsonX` | ✅ | ⚠️ (model specific template must be provided) | ❌ | -| `Ollama` | ✅ | ✅ | ⚠️ (JSON only) | -| `OpenAI` | ❌ | ✅ | ⚠️ (JSON schema only) | -| `LangChain` | ⚠️ (depends on a provider) | ⚠️ (depends on a provider) | ❌ | -| `Groq` | ❌ | ✅ | ⚠️ (JSON object only) | -| `AWS Bedrock` | ❌ | ✅ | ⚠️ (JSON only) - model specific | -| `VertexAI` | ✅ | ✅ | ⚠️ (JSON only) | -| `BAM (Internal)` | ✅ | ⚠️ (model specific template must be provided) | ✅ | -| ➕ [Request](https://github.com/i-am-bee/bee-agent-framework/discussions) | | | | +| Property | Type | Description | +| ------------------ | ------------------ | --------------------------------------------- | +| `modelId` | `string` | Identifier for the LLM model | +| `executionOptions` | `ExecutionOptions` | Configuration for execution behavior | +| `cache` | `LLMCache` | Cache system for LLM responses | +| `emitter` | `Emitter` | Event emitter for monitoring LLM interactions | -All providers' examples can be found in [examples/llms/providers](/examples/llms/providers). +## Main Methods -Are you interested in creating your own adapter? Jump to the [adding a new provider](#adding-a-new-provider-adapter) section. 
+### Public Methods -## Usage +#### `generate(input: TInput, options?: TGenerateOptions): Promise` -### Plain text generation +Generates a response from the LLM based on the provided input. - - -```ts -import "dotenv/config.js"; -import { createConsoleReader } from "examples/helpers/io.js"; -import { WatsonXLLM } from "bee-agent-framework/adapters/watsonx/llm"; - -const llm = new WatsonXLLM({ - modelId: "google/flan-ul2", - projectId: process.env.WATSONX_PROJECT_ID, - apiKey: process.env.WATSONX_API_KEY, - region: process.env.WATSONX_REGION, // (optional) default is us-south - parameters: { - decoding_method: "greedy", - max_new_tokens: 50, - }, +```typescript +// Text LLM +const llm = new TextLLM({ modelId: "model-name" }); +const response = await llm.generate("What is the capital of France?", { + stream: false, + signal: AbortSignal.timeout(30000), }); -const reader = createConsoleReader(); -const prompt = await reader.prompt(); -const response = await llm.generate(prompt); -reader.write(`LLM 🤖 (text) : `, response.getTextContent()); -reader.close(); +// Chat LLM +const chatLlm = new ChatLLM({ modelId: "chat-model" }); +const messages = [ + BaseMessage.of({ + role: "user", + text: "Who won the 2024 Super Bowl?", + }), +]; +const chatResponse = await chatLlm.generate(messages); ``` -_Source: [examples/llms/text.ts](/examples/llms/text.ts)_ - -> [!NOTE] -> -> The `generate` method returns a class that extends the base [`BaseLLMOutput`](/src/llms/base.ts) class. -> This class allows you to retrieve the response as text using the `getTextContent` method and other useful metadata. - -> [!TIP] -> -> You can enable streaming communication (internally) by passing `{ stream: true }` as a second parameter to the `generate` method. 
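Streamed responses arrive as output objects that can be folded into a single result, mirroring `BaseLLMOutput.merge`. The sketch below is framework-free: `TextChunk`, `fakeStream`, and `collect` are illustrative names, not framework API:

```typescript
// Minimal output object with a mutating merge, as on BaseLLMOutput.
class TextChunk {
  constructor(private text: string) {}

  getTextContent(): string {
    return this.text;
  }

  merge(other: TextChunk): void {
    this.text += other.getTextContent();
  }
}

// Stand-in for an LLM stream that yields partial outputs.
async function* fakeStream(): AsyncGenerator<TextChunk> {
  for (const part of ["Hello", ", ", "world!"]) {
    yield new TextChunk(part);
  }
}

// Fold all chunks into one output, the way a consumer of `stream` would.
async function collect(): Promise<string> {
  let full: TextChunk | undefined;
  for await (const chunk of fakeStream()) {
    if (!full) {
      full = chunk;
    } else {
      full.merge(chunk);
    }
  }
  return full?.getTextContent() ?? "";
}

collect().then((text) => console.log(text)); // "Hello, world!"
```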
- 

-### Chat text generation

- 

+#### `stream(input: TInput, options?: StreamGenerateOptions): AsyncGenerator<TOutput>`

-```ts
-import "dotenv/config.js";
-import { createConsoleReader } from "examples/helpers/io.js";
-import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message";
-import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat";
+Streams the LLM's response as it's being generated.

-const llm = new OllamaChatLLM();
-
-const reader = createConsoleReader();
-
-for await (const { prompt } of reader) {
-  const response = await llm.generate([
-    BaseMessage.of({
-      role: Role.USER,
-      text: prompt,
-    }),
-  ]);
-  reader.write(`LLM 🤖 (txt) : `, response.getTextContent());
-  reader.write(`LLM 🤖 (raw) : `, JSON.stringify(response.finalResult));
+```typescript
+const llm = new ChatLLM({ modelId: "streaming-model" });
+for await (const chunk of llm.stream(messages, {
+  signal: AbortSignal.timeout(30000),
+})) {
+  console.log(chunk.getTextContent());
}
```

-_Source: [examples/llms/chat.ts](/examples/llms/chat.ts)_
-
-> [!NOTE]
-> 
-> The `generate` method returns a class that extends the base [`ChatLLMOutput`](/src/llms/chat.ts) class.
-> This class allows you to retrieve the response as text using the `getTextContent` method and other useful metadata.
-> To retrieve all messages (chunks) access the `messages` property (getter).
-
-> [!TIP]
-> 
-> You can enable streaming communication (internally) by passing `{ stream: true }` as a second parameter to the `generate` method.

#### `tokenize(input: TInput): Promise<BaseLLMTokenizeOutput>`

-#### Streaming

Returns token information for the provided input.
- +```typescript +const tokenInfo = await llm.tokenize("Hello, world!"); +console.log(tokenInfo.tokensCount); // Number of tokens +console.log(tokenInfo.tokens); // Array of token strings if available +``` -```ts -import "dotenv/config.js"; -import { createConsoleReader } from "examples/helpers/io.js"; -import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; +## Supported Providers + +| Provider | Text Generation | Chat | Structured Output | +| ----------- | --------------- | ---- | ----------------- | +| WatsonX | ✅ | ⚠️ | ❌ | +| Ollama | ✅ | ✅ | ⚠️ | +| OpenAI | ❌ | ✅ | ⚠️ | +| LangChain | ⚠️ | ⚠️ | ❌ | +| Groq | ❌ | ✅ | ⚠️ | +| AWS Bedrock | ❌ | ✅ | ⚠️ | +| VertexAI | ✅ | ✅ | ⚠️ | + +✅ Full support +⚠️ Partial support/limitations +❌ Not supported + +## Implementation Example + +Here's an example of implementing a custom LLM provider: + +```typescript +class CustomLLMOutput extends BaseLLMOutput { + constructor( + private content: string, + private metadata: Record = {}, + ) { + super(); + } -const llm = new OllamaChatLLM(); + merge(other: CustomLLMOutput): void { + this.content += other.content; + Object.assign(this.metadata, other.metadata); + } -const reader = createConsoleReader(); + getTextContent(): string { + return this.content; + } -for await (const { prompt } of reader) { - for await (const chunk of llm.stream([ - BaseMessage.of({ - role: Role.USER, - text: prompt, - }), - ])) { - reader.write(`LLM 🤖 (txt) : `, chunk.getTextContent()); - reader.write(`LLM 🤖 (raw) : `, JSON.stringify(chunk.finalResult)); + toString(): string { + return this.content; } } -``` -_Source: [examples/llms/chatStream.ts](/examples/llms/chatStream.ts)_ +class CustomLLM extends LLM { + public readonly emitter = new Emitter(); -#### Callback (Emitter) - - - -```ts -import "dotenv/config.js"; -import { createConsoleReader } from "examples/helpers/io.js"; -import { BaseMessage, Role } 
from "bee-agent-framework/llms/primitives/message"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; - -const llm = new OllamaChatLLM(); - -const reader = createConsoleReader(); + async meta(): Promise { + return { + tokenLimit: 4096, + }; + } -for await (const { prompt } of reader) { - const response = await llm - .generate( - [ - BaseMessage.of({ - role: Role.USER, - text: prompt, - }), - ], - {}, - ) - .observe((emitter) => - emitter.match("*", (data, event) => { - reader.write(`LLM 🤖 (event: ${event.name})`, JSON.stringify(data)); + async tokenize(input: string): Promise { + return { + tokensCount: Math.ceil(input.length / 4), + }; + } - // if you want to close the stream prematurely, just uncomment the following line - // callbacks.abort() - }), - ); + protected async _generate( + input: string, + options: GenerateOptions, + run: RunContext, + ): Promise { + // Implementation for one-shot generation + const response = await this.callApi(input); + return new CustomLLMOutput(response.text, response.meta); + } - reader.write(`LLM 🤖 (txt) : `, response.getTextContent()); - reader.write(`LLM 🤖 (raw) : `, JSON.stringify(response.finalResult)); + protected async *_stream( + input: string, + options: StreamGenerateOptions, + run: RunContext, + ): AsyncGenerator { + // Implementation for streaming generation + for await (const chunk of this.streamApi(input)) { + yield new CustomLLMOutput(chunk); + } + } } ``` -_Source: [examples/llms/chatCallback.ts](/examples/llms/chatCallback.ts)_ - -### Structured generation - - - -```ts -import "dotenv/config.js"; -import { z } from "zod"; -import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; -import { JsonDriver } from "bee-agent-framework/llms/drivers/json"; - -const llm = new OllamaChatLLM(); -const driver = new JsonDriver(llm); -const response = await driver.generate( - z.union([ - z.object({ - 
firstName: z.string().min(1), - lastName: z.string().min(1), - address: z.string(), - age: z.number().int().min(1), - hobby: z.string(), - }), - z.object({ - error: z.string(), - }), - ]), - [ - BaseMessage.of({ - role: Role.USER, - text: "Generate a profile of a citizen of Europe.", - }), - ], -); -console.info(response); -``` - -_Source: [examples/llms/structured.ts](/examples/llms/structured.ts)_ - -## Adding a new provider (adapter) - -To use an inference provider that is not mentioned in our providers list feel free to [create a request](https://github.com/i-am-bee/bee-agent-framework/discussions). - -If approved and you want to create it on your own, you must do the following things. Let's assume the name of your provider is `Custom.` - -- Base location within the framework: `bee-agent-framework/adapters/custom` - - Text LLM (filename): `llm.ts` ([example implementation](/examples/llms/providers/customProvider.ts)) - - Chat LLM (filename): `chat.ts` ([example implementation](/examples/llms/providers/customChatProvider.ts)) - -> [!IMPORTANT] -> -> If the target provider provides an SDK, use it. - -> [!IMPORTANT] -> -> All provider-related dependencies (if any) must be included in `devDependencies` and `peerDependencies` in the [`package.json`](/package.json). - -> [!TIP] -> -> To simplify work with the target RestAPI feel free to use the helper [`RestfulClient`](/src/internals/fetcher.ts) class. -> The client usage can be seen in the WatsonX LLM Adapter [here](/src/adapters/watsonx/llm.ts). - -> [!TIP] -> -> Parsing environment variables should be done via helper functions (`parseEnv` / `hasEnv` / `getEnv`) that can be found [here](/src/internals/env.ts). +## Best Practices + +1. **Error Handling** + + ```typescript + try { + const response = await llm.generate(input); + } catch (error) { + if (error instanceof LLMFatalError) { + // Handle unrecoverable errors + } else if (error instanceof LLMError) { + // Handle recoverable errors + } + } + ``` + +2. 
**Stream Management**

   ```typescript
   const controller = new AbortController();
   setTimeout(() => controller.abort(), 30000);

   for await (const chunk of llm.stream(input, {
     signal: controller.signal,
   })) {
     // Process chunks
   }
   ```

3. **Event Handling**

   ```typescript
   const response = await llm.generate(input).observe((emitter) => {
     emitter.on("newToken", ({ data }) => {
       console.log("New token:", data.value.getTextContent());
     });
     emitter.on("error", ({ data }) => {
       console.error("Error:", data.error);
     });
   });
   ```

4. **Cache Usage**
   ```typescript
   const llm = new CustomLLM({
     modelId: "model-name",
     cache: new CustomCache(),
   });
   ```

## See Also

- [Memory System](./memory.md)
- [Agent System](./agents.md)
- [Providers Guide](./providers.md)
- [Event System](./emitter.md)
diff --git a/docs/logger.md b/docs/logger.md
index 3bc394f9..cbfbdb77 100644
--- a/docs/logger.md
+++ b/docs/logger.md
@@ -1,18 +1,93 @@
 # Logger

-> [!TIP]
->
-> Location within the framework `bee-agent-framework/logger`.
+The `Logger` class is the foundation of the Bee Framework's logging system, providing robust logging capabilities built on top of the Pino logger. It enables comprehensive system monitoring, debugging, and troubleshooting through structured logging with multiple severity levels and flexible configuration options.

-The Logger is a key component designed to record and track events, errors, and other important actions during an application's execution. It provides valuable insights into the application's behavior, performance, and potential issues, helping developers and system administrators troubleshoot and monitor the system effectively.
+## Overview
+
+`Logger` serves as an abstraction layer over Pino, offering enhanced functionality for structured logging, child loggers, and framework integration. It provides consistent logging patterns across all framework components while supporting customization and extension.
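The child-logger pattern mentioned above can be modeled without the framework: a child extends its parent's name and merges its bindings, with the child's keys taking precedence. `MiniLogger` below is an illustrative stand-in only; the real `Logger` delegates this to Pino.

```typescript
// Minimal model of hierarchical loggers with inherited, merged bindings.
type Bindings = Record<string, unknown>;

class MiniLogger {
  constructor(
    readonly name: string,
    readonly bindings: Bindings = {},
  ) {}

  child(name: string, bindings: Bindings = {}): MiniLogger {
    // Children extend the parent's name and merge bindings; child keys win.
    return new MiniLogger(`${this.name}.${name}`, { ...this.bindings, ...bindings });
  }

  info(msg: string): string {
    // Structured entry: logger name, inherited bindings, then the message.
    return JSON.stringify({ name: this.name, ...this.bindings, msg });
  }
}

const app = new MiniLogger("app", { service: "bee" });
const db = app.child("database", { component: "db" });
console.log(db.info("connected"));
// {"name":"app.database","service":"bee","component":"db","msg":"connected"}
```

This is why configuring bindings once on a root logger is enough: every descendant carries them automatically.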
In the Bee Agent Framework, the [Logger](/src/logger/logger.ts) class is an abstraction built on top of the popular [pino](https://github.com/pinojs/pino) logger, offering additional flexibility and integration. -## Basic Usage +## Architecture + +```mermaid +classDiagram + class Logger { + +LoggerInput input + +LoggerLevel level + +pino.Logger raw + +info(msg: string) + +warn(msg: string) + +error(msg: string) + +debug(msg: string) + +trace(msg: string) + +fatal(msg: string) + +child(input: LoggerInput) + } + + class LoggerInput { + +string name + +LoggerBindings bindings + +LoggerLevelType level + +ChildLoggerOptions raw + } + + class LoggerDefaults { + +boolean pretty + +string name + +LoggerBindings bindings + +LoggerLevelType level + } + + class LoggerLevel { + +TRACE + +DEBUG + +INFO + +WARN + +ERROR + +FATAL + +SILENT + } + + Logger *-- LoggerInput + Logger --> LoggerLevel + Logger --> LoggerDefaults +``` + +## Core Properties + +| Property | Type | Description | +| ---------- | ----------------- | ------------------------ | +| `level` | `LoggerLevelType` | Current logging level | +| `input` | `LoggerInput` | Logger configuration | +| `raw` | `pino.Logger` | Underlying Pino instance | +| `defaults` | `LoggerDefaults` | Global default settings | + +## Logging Levels + +```typescript +const LoggerLevel = { + TRACE: "trace", // Most detailed logging + DEBUG: "debug", // Debug information + INFO: "info", // General information + WARN: "warn", // Warning messages + ERROR: "error", // Error conditions + FATAL: "fatal", // Critical failures + SILENT: "silent", // No logging +}; +``` + +## Main Methods + +### Public Methods + +#### `child(input?: LoggerInput): Logger` + +Creates a new logger instance inheriting from the parent. 
-

-```ts
+```typescript
 import { Logger, LoggerLevel } from "bee-agent-framework/logger/logger";

 // Configure logger defaults
@@ -21,27 +96,67 @@ Logger.defaults.level = LoggerLevel.TRACE; // Set log level to trace (default: T
 Logger.defaults.name = undefined; // Optional name for logger (default: undefined)
 Logger.defaults.bindings = {}; // Optional bindings for structured logging (default: empty)

-// Create a child logger for your app
-const logger = Logger.root.child({ name: "app" });
+const parentLogger = Logger.root.child({ name: "app" });
+const moduleLogger = parentLogger.child({
+  name: "module",
+  level: "debug",
+});
+```
+
+#### Logging Methods
+
+```typescript
+import { Logger, LoggerLevel } from "bee-agent-framework/logger/logger";

-// Log at different levels
-logger.trace("Trace!");
-logger.debug("Debug!");
-logger.info("Info!");
-logger.warn("Warning!");
-logger.error("Error!");
-logger.fatal("Fatal!");
+Logger.defaults.level = LoggerLevel.TRACE; // Set log level to trace (default: TRACE, can also be set via ENV: BEE_FRAMEWORK_LOG_LEVEL=trace)
+const logger = Logger.root.child({ name: "app" });
+
+logger.trace("Detailed debugging information");
+logger.debug("Debugging information");
+logger.info("General information");
+logger.warn("Warning messages");
+logger.error("Error conditions");
+logger.fatal("Critical failures");
 ```

 _Source: [examples/logger/base.ts](/examples/logger/base.ts)_

-## Usage with Agents
+## Configuration
+
+### Environment Variables
+
+```bash
+# Enable pretty printing
+export BEE_FRAMEWORK_LOG_PRETTY=true
+
+# Set default log level
+export BEE_FRAMEWORK_LOG_LEVEL=debug
+
+# Enable single-line logging
+export BEE_FRAMEWORK_LOG_SINGLE_LINE=true
+```
+
+### Default Configuration
+
+```typescript
+import { Logger } from "bee-agent-framework/logger/logger";
+
+Logger.defaults = {
+  pretty: false, // Pretty printing
+  name: undefined, // Logger name
+  level: "info", // Default level
+  bindings: {}, // Default bindings
+};
+```
+
+## Integration Examples
+
+### With Agents

The
[Logger](/src/logger/logger.ts) seamlessly integrates with agents in the framework. Below is an example that demonstrates how logging can be used in conjunction with agents and event emitters. -```ts +```typescript import { BeeAgent } from "bee-agent-framework/agents/bee/agent"; import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; @@ -75,13 +190,13 @@ logger.info(response.result.text); _Source: [examples/logger/agent.ts](/examples/logger/agent.ts)_ -## Custom pino instance integration +### With Custom Pino Instance If you need to integrate your own `pino` instance with the Bee Agent Framework Logger, you can do so easily. Below is an example that demonstrates how to create a pino logger and use it with the framework’s [Logger](/src/logger/logger.ts). -```ts +```typescript import { Logger } from "bee-agent-framework/logger/logger"; import { pino } from "pino"; @@ -101,3 +216,68 @@ const frameworkLogger = new Logger( ``` _Source: [examples/logger/pino.ts](/examples/logger/pino.ts)_ + +## Pretty Printing + +```typescript +// Enable pretty printing with custom options +Logger.defaults.pretty = true; + +const logger = Logger.root.child({ + name: "pretty-logger", +}); + +logger.info("This will be pretty printed!"); +// Output: 2024-02-28 14:30:45 INF ℹ️ [pretty-logger] This will be pretty printed! +``` + +## Best Practices + +1. **Logger Hierarchy** + + ```typescript + const appLogger = Logger.root.child({ name: "app" }); + const dbLogger = appLogger.child({ name: "database" }); + const apiLogger = appLogger.child({ name: "api" }); + ``` + +2. **Structured Logging** + + ```typescript + logger.info({ + operation: "user-login", + userId: "123", + status: "success", + }); + ``` + +3. **Error Logging** + + ```typescript + try { + await operation(); + } catch (error) { + logger.error({ + error, + context: "operation-name", + inputs: operationInputs, + }); + } + ``` + +4. 
**Performance Monitoring**
   ```typescript
   const start = performance.now();
   // Operation
   logger.debug({
     operation: "task-name",
     duration: performance.now() - start,
   });
   ```

## See Also

- [Agent System](./agents.md)
- [Error Handling](./errors.md)
- [Instrumentation](./instrumentation.md)
- [Event System](./emitter.md)
diff --git a/docs/memory.md b/docs/memory.md
index bd095854..434911da 100644
--- a/docs/memory.md
+++ b/docs/memory.md
@@ -1,291 +1,242 @@
 # Memory

-> [!TIP]
->
-> Location within the framework `bee-agent-framework/memory`.
+The `BaseMemory` class is the foundation of the Bee Framework's memory system, providing the core interface and functionality for managing conversation history, context retention, and state management across agent interactions. It serves as the abstract base class that all memory implementations must extend.
+
+## Overview
+
+`BaseMemory` defines the standard interface and basic functionality for memory management in the framework. It handles message storage, retrieval, and manipulation while providing a consistent interface for different memory implementations like token-based, unconstrained, and specialized memory types.
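As a mental model, that contract reduces to a few operations over an ordered message list. The sketch below is illustrative only (it is not the framework's `BaseMemory`), but it mirrors the indexed insertion, deletion, and reset semantics documented in this section.

```typescript
// Illustrative reduction of the memory contract: ordered message storage
// with indexed insertion, deletion, and reset. Not framework code.
interface Msg {
  role: string;
  text: string;
}

class MiniMemory {
  private store: Msg[] = [];

  get messages(): readonly Msg[] {
    return this.store;
  }

  async add(message: Msg, index = this.store.length): Promise<void> {
    this.store.splice(index, 0, message); // insert at index, shifting the rest
  }

  async delete(message: Msg): Promise<boolean> {
    const i = this.store.indexOf(message);
    if (i < 0) return false;
    this.store.splice(i, 1);
    return true;
  }

  isEmpty(): boolean {
    return this.store.length === 0;
  }

  reset(): void {
    this.store.length = 0;
  }
}

void (async () => {
  const memory = new MiniMemory();
  await memory.add({ role: "user", text: "Hello" });
  await memory.add({ role: "system", text: "Be brief" }, 0); // system goes first
  console.log(memory.messages.map((m) => m.role)); // [ 'system', 'user' ]
})();
```

The real implementations layer policies on top of this core: token budgets (`TokenMemory`), no limits (`UnconstrainedMemory`), or immutability (`ReadOnlyMemory`).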
+
+## Architecture
+
+```mermaid
+classDiagram
+    class BaseMemory {
+        +BaseMessage[] messages
+        +add(message: BaseMessage, index?: number)
+        +delete(message: BaseMessage)
+        +reset()
+        +addMany(messages: BaseMessage[])
+        +deleteMany(messages: BaseMessage[])
+        +splice(start: number, deleteCount: number, items: BaseMessage[])
+        +isEmpty()
+        +asReadOnly()
+        #loadSnapshot(state: TState)
+        #createSnapshot()
+    }
+
+    class TokenMemory {
+        +number tokensUsed
+        +boolean isDirty
+        +sync()
+        +stats()
+    }
+
+    class UnconstrainedMemory {
+        +BaseMessage[] messages
+    }
+
+    class ReadOnlyMemory {
+        +BaseMemory source
+    }
+
+    BaseMemory <|-- TokenMemory
+    BaseMemory <|-- UnconstrainedMemory
+    BaseMemory <|-- ReadOnlyMemory
+
+    class BaseMessage {
+        +string role
+        +string text
+    }
+
+    BaseMemory o-- BaseMessage
+```

-Memory in the context of an agent refers to the system's capability to store, recall, and utilize information from past interactions. This enables the agent to maintain context over time, improve its responses based on previous exchanges, and provide a more personalized experience.
+## Core Properties

-## Usage
+| Property | Type | Description |
+| ------------------- | ------------------------ | ------------------------------------ |
+| `messages` | `readonly BaseMessage[]` | Array of stored messages |
+| `isEmpty()` | `boolean` | Whether memory contains any messages |
+| `[Symbol.iterator]` | `Iterator<BaseMessage>` | Allows iteration over messages |

-### Capabilities showcase
+## Main Methods

- 
+### Public Methods

-```ts
-import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory";
-import { BaseMessage } from "bee-agent-framework/llms/primitives/message";
+#### `add(message: BaseMessage, index?: number): Promise<void>`

-const memory = new UnconstrainedMemory();
+Adds a new message to memory at the specified index.
-

-// Single message
+```typescript
 await memory.add(
   BaseMessage.of({
-    role: "system",
-    text: `You are a helpful assistant.`,
+    role: "user",
+    text: "What's the weather like?",
   }),
 );

-// Multiple messages
-await memory.addMany([
-  BaseMessage.of({ role: "user", text: `What can you do?` }),
-  BaseMessage.of({ role: "assistant", text: `Everything!` }),
-]);
-
-console.info(memory.isEmpty()); // false
-console.info(memory.messages); // prints all saved messages
-console.info(memory.asReadOnly()); // returns a NEW read only instance
-memory.reset(); // removes all messages
+// Add at specific index
+await memory.add(systemMessage, 0);
 ```

-_Source: [examples/memory/base.ts](/examples/memory/base.ts)_
+#### `delete(message: BaseMessage): Promise<boolean>`

-### Usage with LLMs
+Removes a message from memory.

- 
+```typescript
+const deleted = await memory.delete(message);
+console.log(`Message ${deleted ? "was" : "was not"} deleted`);
+```

-```ts
-import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat";
-import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory";
-import { BaseMessage } from "bee-agent-framework/llms/primitives/message";
+#### `addMany(messages: Iterable<BaseMessage>, start?: number): Promise<void>`

-const memory = new UnconstrainedMemory();
+Adds multiple messages to memory.
+
+```typescript
 await memory.addMany([
-  BaseMessage.of({
-    role: "system",
-    text: `Always respond very concisely.`,
-  }),
-  BaseMessage.of({ role: "user", text: `Give me first 5 prime numbers.` }),
+  BaseMessage.of({ role: "user", text: "Hello" }),
+  BaseMessage.of({ role: "assistant", text: "Hi there!"
}), ]); - -// Generate response -const llm = new OllamaChatLLM(); -const response = await llm.generate(memory.messages); -await memory.add(BaseMessage.of({ role: "assistant", text: response.getTextContent() })); - -console.log(`Conversation history`); -for (const message of memory) { - console.log(`${message.role}: ${message.text}`); -} ``` -_Source: [examples/memory/llmMemory.ts](/examples/memory/llmMemory.ts)_ - -> [!TIP] -> -> Memory for non-chat LLMs works exactly the same way. - -### Usage with agents +#### `reset(): void` - +Clears all messages from memory. -```ts -import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; -import { BeeAgent } from "bee-agent-framework/agents/bee/agent"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; - -const agent = new BeeAgent({ - memory: new UnconstrainedMemory(), - llm: new OllamaChatLLM(), - tools: [], -}); -await agent.run({ prompt: "Hello world!" }); - -console.info(agent.memory.messages.length); // 2 - -const userMessage = agent.memory.messages[0]; -console.info(`User: ${userMessage.text}`); // User: Hello world! - -const agentMessage = agent.memory.messages[1]; -console.info(`Agent: ${agentMessage.text}`); // Agent: Hello! It's nice to chat with you. +```typescript +memory.reset(); +console.log(memory.isEmpty()); // true ``` -_Source: [examples/memory/agentMemory.ts](/examples/memory/agentMemory.ts)_ +## Memory Implementations -> [!TIP] -> -> If your memory already contains the user message, run the agent with `prompt: null`. - -> [!NOTE] -> -> Bee Agent internally uses `TokenMemory` to store intermediate steps for a given run. +### TokenMemory -> [!NOTE] -> -> Agent typically works with a memory similar to what was just shown. +Manages messages while respecting token limits, suitable for LLM context windows. 
-## Memory types +```typescript +const memory = new TokenMemory({ + llm, + maxTokens: 4096, + capacityThreshold: 0.75, + syncThreshold: 0.25, + handlers: { + estimate: (msg) => Math.ceil((msg.role.length + msg.text.length) / 4), + removalSelector: (messages) => messages[0], + }, +}); -The framework provides multiple out-of-the-box memory implementations. +console.log(memory.stats()); +// { +// tokensUsed: 1024, +// maxTokens: 4096, +// messagesCount: 10, +// isDirty: false +// } +``` ### UnconstrainedMemory -Unlimited in size. - - - -```ts -import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; -import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; +Simple memory implementation with no size or token limits. +```typescript const memory = new UnconstrainedMemory(); + await memory.add( BaseMessage.of({ - role: "user", - text: `Hello world!`, + role: "system", + text: "You are a helpful assistant", }), ); -console.info(memory.isEmpty()); // false -console.log(memory.messages.length); // 1 -console.log(memory.messages); +console.log(memory.messages.length); ``` -_Source: [examples/memory/unconstrainedMemory.ts](/examples/memory/unconstrainedMemory.ts)_ - -### SlidingMemory - -Keeps last `k` entries in the memory. The oldest ones are deleted (unless specified otherwise). - - - -```ts -import { SlidingMemory } from "bee-agent-framework/memory/slidingMemory"; -import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; - -const memory = new SlidingMemory({ - size: 3, // (required) number of messages that can be in the memory at a single moment - handlers: { - // optional - // we select a first non-system message (default behaviour is to select the oldest one) - removalSelector: (messages) => messages.find((msg) => msg.role !== "system")!, - }, -}); +### ReadOnlyMemory -await memory.add(BaseMessage.of({ role: "system", text: "You are a guide through France." 
})); -await memory.add(BaseMessage.of({ role: "user", text: "What is the capital?" })); -await memory.add(BaseMessage.of({ role: "assistant", text: "Paris" })); -await memory.add(BaseMessage.of({ role: "user", text: "What language is spoken there?" })); // removes the first user's message -await memory.add(BaseMessage.of({ role: "assistant", text: "French" })); // removes the first assistant's message +Wrapper providing read-only access to another memory instance. -console.info(memory.isEmpty()); // false -console.log(memory.messages.length); // 3 -console.log(memory.messages); +```typescript +const readOnly = memory.asReadOnly(); +await readOnly.add(message); // No effect +console.log(readOnly.messages); // Same as source memory ``` -_Source: [examples/memory/slidingMemory.ts](/examples/memory/slidingMemory.ts)_ +## Best Practices -### TokenMemory +1. **Memory Management** -Ensures that the token sum of all messages is below the given threshold. -If overflow occurs, the oldest message will be removed. + ```typescript + // Clean up messages when done + memory.reset(); - + // Use read-only memory when passing to untrusted code + const safeMemory = memory.asReadOnly(); + ``` -```ts -import { TokenMemory } from "bee-agent-framework/memory/tokenMemory"; -import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; +2. 
**Error Handling** -const llm = new OllamaChatLLM(); -const memory = new TokenMemory({ - llm, - maxTokens: undefined, // optional (default is inferred from the passed LLM instance), - capacityThreshold: 0.75, // maxTokens*capacityThreshold = threshold where we start removing old messages - syncThreshold: 0.25, // maxTokens*syncThreshold = threshold where we start to use a real tokenization endpoint instead of guessing the number of tokens - handlers: { - // optional way to define which message should be deleted (default is the oldest one) - removalSelector: (messages) => messages.find((msg) => msg.role !== "system")!, + ```typescript + try { + await memory.add(message); + } catch (error) { + if (error instanceof MemoryError) { + // Handle unrecoverable errors + } + } + ``` - // optional way to estimate the number of tokens in a message before we use the actual tokenize endpoint (number of tokens < maxTokens*syncThreshold) - estimate: (msg) => Math.ceil((msg.role.length + msg.text.length) / 4), - }, -}); +3. **State Persistence** -await memory.add(BaseMessage.of({ role: "system", text: "You are a helpful assistant." })); -await memory.add(BaseMessage.of({ role: "user", text: "Hello world!" })); + ```typescript + // Save memory state + const snapshot = memory.createSnapshot(); -console.info(memory.isDirty); // is the consumed token count estimated or retrieved via the tokenize endpoint? -console.log(memory.tokensUsed); // number of used tokens -console.log(memory.stats()); // prints statistics -await memory.sync(); // calculates real token usage for all messages marked as "dirty" -``` + // Restore from snapshot + memory.loadSnapshot(snapshot); + ``` -_Source: [examples/memory/tokenMemory.ts](/examples/memory/tokenMemory.ts)_ +## Implementation Example -### SummarizeMemory +Here's an example of implementing a custom memory system: -Only a single summarization of the conversation is preserved. Summarization is updated with every new message. 
- - - -```ts -import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; -import { SummarizeMemory } from "bee-agent-framework/memory/summarizeMemory"; -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; - -const memory = new SummarizeMemory({ - llm: new OllamaChatLLM({ - modelId: "llama3.1", - parameters: { - temperature: 0, - num_predict: 250, - }, - }), -}); - -await memory.addMany([ - BaseMessage.of({ role: "system", text: "You are a guide through France." }), - BaseMessage.of({ role: "user", text: "What is the capital?" }), - BaseMessage.of({ role: "assistant", text: "Paris" }), - BaseMessage.of({ role: "user", text: "What language is spoken there?" }), -]); - -console.info(memory.isEmpty()); // false -console.log(memory.messages.length); // 1 -console.log(memory.messages[0].text); // The capital city of France is Paris, ... -``` - -_Source: [examples/memory/summarizeMemory.ts](/examples/memory/summarizeMemory.ts)_ - -## Creating a custom memory provider - -To create your memory implementation, you must implement the `BaseMemory` class. - - - -```ts -import { BaseMemory } from "bee-agent-framework/memory/base"; -import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; -import { NotImplementedError } from "bee-agent-framework/errors"; - -export class MyMemory extends BaseMemory { - get messages(): readonly BaseMessage[] { - throw new NotImplementedError("Method not implemented."); - } +```typescript +class CustomMemory extends BaseMemory { + private messages: BaseMessage[] = []; - add(message: BaseMessage, index?: number): Promise { - throw new NotImplementedError("Method not implemented."); + async add(message: BaseMessage, index?: number) { + const targetIndex = index ?? 
this.messages.length; + this.messages.splice(targetIndex, 0, message); } - delete(message: BaseMessage): Promise { - throw new NotImplementedError("Method not implemented."); + async delete(message: BaseMessage) { + const index = this.messages.indexOf(message); + if (index >= 0) { + this.messages.splice(index, 1); + return true; + } + return false; } - reset(): void { - throw new NotImplementedError("Method not implemented."); + reset() { + this.messages = []; } - createSnapshot(): unknown { - throw new NotImplementedError("Method not implemented."); + createSnapshot() { + return { + messages: [...this.messages], + }; } - loadSnapshot(state: ReturnType): void { - throw new NotImplementedError("Method not implemented."); + loadSnapshot(state: ReturnType) { + this.messages = [...state.messages]; } } ``` -_Source: [examples/memory/custom.ts](/examples/memory/custom.ts)_ +## See Also -The simplest implementation is `UnconstrainedMemory`, which can be found [here](/src/memory/unconstrainedMemory.ts). +- [Agent Documentation](./agent.md) +- [LLM Integration](./llms.md) +- [Message System](./messages.md) +- [Serialization](./serialization.md) diff --git a/docs/serialization.md b/docs/serialization.md index 21a79639..fc27bb38 100644 --- a/docs/serialization.md +++ b/docs/serialization.md @@ -1,23 +1,77 @@ # Serialization -> [!TIP] -> -> Location within the framework `bee-agent-framework/serializer`. +The `Serializer` class is the foundation of the Bee Framework's serialization system, providing robust functionality for converting complex data structures and objects into a format suitable for storage and transmission. It handles circular references, complex object graphs, and framework-specific data types with built-in type safety. + +## Overview + +`Serializer` serves as the central system for managing serialization and deserialization of objects throughout the framework. 
It provides a registry-based approach to handling different types, supports circular dependencies, and maintains object references during the serialization process.
+
+## Architecture
+
+```mermaid
+classDiagram
+    class Serializer {
+        +Map~string, SerializeFactory~ factories
+        +register(class, processors)
+        +serialize(data: any)
+        +deserialize(raw: string)
+        +getFactory(className: string)
+        +hasFactory(className: string)
+        #_createOutputBuilder()
+    }
+
+    class SerializeFactory {
+        +ClassConstructor ref
+        +toPlain(value: T)
+        +fromPlain(value: B)
+        +createEmpty()?
+        +updateInstance(instance, update)?
+    }
+
+    class SerializerNode {
+        +boolean __serializer
+        +string __class
+        +string __ref
+        +any __value
+    }
+
+    class RefPlaceholder {
+        -any partialResult
+        +value get()
+        +final get()
+    }
+
+    Serializer --> SerializeFactory: manages
+    Serializer --> SerializerNode: creates
+    Serializer --> RefPlaceholder: uses

-Serialization is a process of converting complex data structures or objects into a format that can be easily stored, transmitted, and reconstructed later.
-Serialization is a difficult task, and JavaScript does not provide a magic tool to serialize and deserialize an arbitrary input. That is why we made such one.
+```

- 
+## Core Properties

-```ts
-import { Serializer } from "bee-agent-framework/serializer/serializer";
+| Property | Type | Description |
+| ----------- | ------------------------------- | ---------------------------------- |
+| `factories` | `Map<string, SerializeFactory>` | Registry of serialization handlers |
+| `enabled` | `boolean` | Whether serialization is active |
+| `version` | `string` | Serialization format version |

-const original = new Date("2024-01-01T00:00:00.000Z");
-const serialized = Serializer.serialize(original);
-const deserialized = Serializer.deserialize(serialized);
+## Main Methods
+
+### Public Methods
+
+#### `serialize(data: any): string`
+
+Converts an object into a serialized string representation.
-console.info(deserialized instanceof Date); // true
-console.info(original.toISOString() === deserialized.toISOString()); // true
+```typescript
+import { Serializer } from "bee-agent-framework/serializer/serializer";
+
+const data = {
+  date: new Date(),
+  map: new Map([["key", "value"]]),
+  set: new Set([1, 2, 3]),
+};
+const serialized = Serializer.serialize(data);
```

_Source: [examples/serialization/base.ts](/examples/serialization/base.ts)_

@@ -26,7 +80,87 @@ _Source: [examples/serialization/base.ts](/examples/tools/base.ts)_
>
> Serializer knows how to serialize/deserialize the most well-known JavaScript data structures. Continue reading to see how to register your own.

-## Being Serializable
+#### `deserialize(raw: string, extraClasses?: SerializableClass[]): T`
+
+Reconstructs an object from its serialized form.
+
+```typescript
+const original = {
+  buffer: Buffer.from("Hello"),
+  regex: /test/g,
+  date: new Date(),
+};
+
+const serialized = Serializer.serialize(original);
+const restored = Serializer.deserialize(serialized);
+```
+
+### Registration Methods
+
+#### `register(ref: ClassConstructor, processors: SerializeFactory): void`
+
+Registers a new class for serialization support.
+ +```typescript +class CustomType { + constructor(public data: string) {} +} + +Serializer.register(CustomType, { + toPlain: (instance) => ({ + data: instance.data, + }), + fromPlain: (plain) => new CustomType(plain.data), + createEmpty: () => new CustomType(""), + updateInstance: (instance, update) => { + instance.data = update.data; + }, +}); +``` + +## Built-in Type Support + +### Primitive Types + +```typescript +// Built-in handlers for primitive types +Serializer.register(Number, { + toPlain: (value) => value.toString(), + fromPlain: (value) => Number(value), +}); + +Serializer.register(String, { + toPlain: (value) => String(value), + fromPlain: (value) => String(value), +}); + +Serializer.register(Boolean, { + toPlain: (value) => Boolean(value), + fromPlain: (value) => Boolean(value), +}); +``` + +### Complex Types + +```typescript +// Built-in handlers for complex types +Serializer.register(Map, { + toPlain: (value) => Array.from(value.entries()), + fromPlain: (value) => new Map(value), +}); + +Serializer.register(Set, { + toPlain: (value) => Array.from(value.values()), + fromPlain: (value) => new Set(value), +}); + +Serializer.register(Date, { + toPlain: (value) => value.toISOString(), + fromPlain: (value) => new Date(value), +}); +``` + +## Implementation Examples Most parts of the framework implement the internal [`Serializable`](/src/internals/serializable.ts) class, which exposes the following methods. @@ -36,11 +170,11 @@ Most parts of the framework implement the internal [`Serializable`](/src/interna - `fromSerialized` (static, creates the new instance from the given serialized input) - `fromSnapshot` (static, creates the new instance from the given snapshot) -See the direct usage on the following memory example. 
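The `createSnapshot`/`loadSnapshot` pair listed above follows a simple contract: capture state as a plain object, then restore it later. A minimal framework-free sketch of that contract (the `NoteStore` class is hypothetical, purely for illustration):

```typescript
// Hypothetical class illustrating the snapshot contract exposed by Serializable.
class NoteStore {
  private notes: string[] = [];

  add(note: string): void {
    this.notes.push(note);
  }

  // Capture state as a plain, copyable object.
  createSnapshot(): { notes: string[] } {
    return { notes: [...this.notes] };
  }

  // Restore state from a previously captured snapshot.
  loadSnapshot(state: { notes: string[] }): void {
    this.notes = [...state.notes];
  }

  get size(): number {
    return this.notes.length;
  }
}

const store = new NoteStore();
store.add("first");
const snapshot = store.createSnapshot();
store.add("second");

const restored = new NoteStore();
restored.loadSnapshot(snapshot);
console.log(restored.size); // 1 (only what existed at snapshot time)
```

Real `Serializable` classes expose this same pair alongside `serialize`/`deserialize` and the static `fromSerialized`/`fromSnapshot` constructors.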
+### With tools -```ts +```typescript import { TokenMemory } from "bee-agent-framework/memory/tokenMemory"; import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; import { BaseMessage } from "bee-agent-framework/llms/primitives/message"; @@ -67,6 +201,54 @@ await deserialized.add( _Source: [examples/serialization/memory.ts](/examples/serialization/memory.ts)_ +### Custom Class Registration + +```typescript +class UserProfile { + constructor( + public name: string, + public createdAt: Date, + ) {} +} + +Serializer.register(UserProfile, { + toPlain: (instance) => ({ + name: instance.name, + createdAt: instance.createdAt, + }), + fromPlain: (data) => new UserProfile(data.name, new Date(data.createdAt)), + // For circular references + createEmpty: () => new UserProfile("", new Date()), + updateInstance: (instance, update) => { + Object.assign(instance, update); + }, +}); +``` + +### Handling Circular References + +```typescript +class Node { + constructor( + public value: string, + public next?: Node, + ) {} +} + +Serializer.register(Node, { + toPlain: (instance) => ({ + value: instance.value, + next: instance.next, + }), + fromPlain: (data) => new Node(data.value, data.next), + createEmpty: () => new Node(""), + updateInstance: (instance, update) => { + instance.value = update.value; + instance.next = update.next; + }, +}); +``` + ### Serializing unknowns If you want to serialize a class that the `Serializer` does not know, it throws the `SerializerError` error. @@ -171,3 +353,62 @@ _Source: [examples/serialization/context.ts](/examples/serialization/context.ts) > > Ensuring that all classes are registered in advance can be annoying, but there's a good reason for that. > If we imported all the classes for you, that would significantly increase your application's size and bootstrapping time + you would have to install all peer dependencies that you may not even need. + +## Best Practices + +1. 
**Type Registration** + + ```typescript + // Register types before using them + Serializer.register(CustomType, { + toPlain: (value) => ({ + /* ... */ + }), + fromPlain: (data) => new CustomType(/* ... */), + createEmpty: () => new CustomType(), + updateInstance: (instance, update) => { + // Update logic + }, + }); + ``` + +2. **Error Handling** + + ```typescript + try { + const serialized = Serializer.serialize(data); + } catch (error) { + if (error instanceof SerializerError) { + // Handle serialization errors + } + } + ``` + +3. **Circular Reference Management** + + ```typescript + // Always implement createEmpty and updateInstance + // for classes that might have circular references + createEmpty: () => new CustomType(), + updateInstance: (instance, update) => { + Object.assign(instance, update); + } + ``` + +4. **Performance Optimization** + ```typescript + // Cache serialization results when appropriate + class SerializableCache { + @Cache() + serializeData(data: any) { + return Serializer.serialize(data); + } + } + ``` + +## See Also + +- [Memory System](./memory.md) +- [Cache System](./cache.md) +- [Agent System](./agent.md) +- [Tools System](./tools.md) diff --git a/docs/templates.md b/docs/templates.md index 3f72ad70..683dc804 100644 --- a/docs/templates.md +++ b/docs/templates.md @@ -1,47 +1,89 @@ -# Templates (Prompt Templates) - -> [!TIP] -> -> Location within the framework `bee-agent-framework/template`. - -**Template** is a predefined structure or format used to create consistent documents or outputs. It often includes placeholders for specific information that can be filled in later. +# Template + +The `PromptTemplate` class is the foundation of the Bee Framework's templating system, providing robust functionality for creating, validating, and rendering structured prompts. Built on top of Mustache.js, it adds type safety, schema validation, and advanced template manipulation capabilities. 
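Before diving into the API, the core Mustache idea the class builds on can be shown with a tiny framework-free sketch (illustrative only; the real `PromptTemplate` adds Zod validation, defaults, and custom functions on top of this substitution step):

```typescript
// Minimal, illustrative placeholder renderer; not the framework's implementation.
function renderTemplate(template: string, input: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => {
    if (!(key in input)) {
      // The real PromptTemplate reports this through its Zod schema instead.
      throw new Error(`Missing template variable: ${key}`);
    }
    return input[key];
  });
}

console.log(renderTemplate("Hello {{name}}!", { name: "Alice" })); // Hello Alice!
```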
+ +## Overview + +`PromptTemplate` serves as the core system for managing prompt templates throughout the framework. It enables the creation of type-safe, validated templates with support for complex data structures, custom functions, and advanced rendering capabilities. + +## Architecture + +```mermaid +classDiagram + class PromptTemplate { + +string template + +ZodSchema schema + +Object defaults + +Object functions + +render(input: Input) + +fork(customizer: Function) + +validateInput(input: unknown) + #config: TemplateConfig + } + + class TemplateConfig { + +string template + +SchemaObject schema + +Object defaults + +Object functions + +boolean escape + +Array customTags + } + + class TemplateError { + +PromptTemplate template + +string message + +Object context + } + + class ValidationError { + +PromptTemplate template + +ValidatorErrors errors + } + + PromptTemplate *-- TemplateConfig + PromptTemplate --> TemplateError + TemplateError <|-- ValidationError +``` -**Prompt template**, on the other hand, is a specific type of template used in the context of language models or AI applications. -It consists of a structured prompt that guides the model in generating a response or output. The prompt often includes variables or placeholders for user input, which helps to elicit more relevant or targeted responses. +## Core Properties -The Framework exposes such functionality via the [`PromptTemplate`](/src/template.ts) class, which is based on the well-known [`Mustache.js`](https://github.com/janl/mustache.js) template system, which is supported almost in every programming language. -In addition, the framework provides type safety and validation against appropriate [`code](https://zod.dev/) schema, as you can see in the following examples. 
+| Property | Type | Description | +| ----------- | ----------- | ------------------------------------- | +| `template` | `string` | Template string with placeholders | +| `schema` | `ZodSchema` | Validation schema for inputs | +| `defaults` | `Object` | Default values for template variables | +| `functions` | `Object` | Custom rendering functions | -> [!TIP] -> -> The Prompt Template concept is used anywhere - especially in our agents. +## Main Methods -## Usage +### Public Methods -### Primitives +#### `render(input: TemplateInput): string` - +Renders the template with provided input data. -```ts +```typescript import { PromptTemplate } from "bee-agent-framework/template"; import { z } from "zod"; -const greetTemplate = new PromptTemplate({ - template: `Hello {{name}}`, +const greetingTemplate = new PromptTemplate({ + template: "Hello {{name}}!", schema: z.object({ name: z.string(), }), }); -const output = greetTemplate.render({ - name: "Alex", +const output = greetingTemplate.render({ + name: "Alice", }); -console.log(output); // Hello Alex! + +console.log(output); // Hello Alice! ``` _Source: [examples/templates/primitives.ts](/examples/templates/primitives.ts)_ -### Arrays +#### Arrays @@ -64,7 +106,7 @@ console.log(output); // Colors: Green,Yellow _Source: [examples/templates/arrays.ts](/examples/templates/arrays.ts)_ -### Objects +#### Objects @@ -92,11 +134,11 @@ console.log(output); // Expected Duration: 5ms; Retrieved: 3ms 5ms 6ms _Source: [examples/templates/objects.ts](/examples/templates/objects.ts)_ -### Forking +#### `fork(customizer: Function): PromptTemplate` - +Creates a new template by modifying an existing one. 
-```ts +```typescript import { PromptTemplate } from "bee-agent-framework/template"; import { z } from "zod"; @@ -110,7 +152,7 @@ const original = new PromptTemplate({ const modified = original.fork((config) => ({ ...config, - template: `${config.template} Your answers must be concise.`, + template: `${config.template} Your answers must be concise`, defaults: { name: "Bee", }, @@ -125,59 +167,171 @@ console.log(output); // You are a helpful assistant called Bee. Your objective i _Source: [examples/templates/forking.ts](/examples/templates/forking.ts)_ -### Functions +## Template Features - +### Schema Validation -```ts -import { PromptTemplate } from "bee-agent-framework/template"; -import { z } from "zod"; +```typescript +const userTemplate = new PromptTemplate({ + template: "User: {{name}}, Age: {{age}}", + schema: z.object({ + name: z.string().min(1), + age: z.number().min(0).max(150), + }), +}); -const messageTemplate = new PromptTemplate({ - schema: z - .object({ - text: z.string(), - author: z.string().optional(), - createdAt: z.string().datetime().optional(), - }) - .passthrough(), - functions: { - formatMeta: function () { - if (!this.author && !this.createdAt) { - return ""; - } +// Throws ValidationPromptTemplateError if invalid +userTemplate.render({ + name: "John", + age: 30, +}); +``` + +### Default Values + +```typescript +const configTemplate = new PromptTemplate({ + template: "Server: {{host}}:{{port}}", + schema: z.object({ + host: z.string(), + port: z.number(), + }), + defaults: { + host: "localhost", + port: 8080, + }, +}); +``` - const author = this.author || "anonymous"; - const createdAt = this.createdAt || new Date().toISOString(); +### Custom Functions - return `\nThis message was created at ${createdAt} by ${author}.`; +```typescript +const messageTemplate = new PromptTemplate({ + schema: z.object({ + text: z.string(), + timestamp: z.date(), + }), + functions: { + formatDate() { + return new Date(this.timestamp).toLocaleString(); }, }, - 
template: `Message: {{text}}{{formatMeta}}`, + template: "{{text}} (Sent: {{formatDate}})", }); +``` -// Message: Hello from 2024! -// This message was created at 2024-01-01T00:00:00.000Z by John. -console.log( - messageTemplate.render({ - text: "Hello from 2024!", - author: "John", - createdAt: new Date("2024-01-01").toISOString(), +## Implementation Examples + +### Basic Template + +```typescript +const simpleTemplate = new PromptTemplate({ + template: "{{#trim}}{{#items}}{{.}},{{/items}}{{/trim}}", + schema: z.object({ + items: z.array(z.string()), }), -); +}); -// Message: Hello from the present! console.log( - messageTemplate.render({ - text: "Hello from the present!", + simpleTemplate.render({ + items: ["one", "two", "three"], }), -); +); // "one,two,three" ``` -_Source: [examples/templates/functions.ts](/examples/templates/functions.ts)_ - -## Agents - -The Bee Agent internally uses multiple prompt templates, and because now you know how to work with them, you can alter the agent’s behavior. +### Complex Template + +```typescript +const profileTemplate = new PromptTemplate({ + template: ` + Name: {{name}} + Age: {{age}} + {{#hasHobbies}} + Hobbies: + {{#hobbies}} + - {{name}} ({{years}} years) + {{/hobbies}} + {{/hasHobbies}} + `, + schema: z.object({ + name: z.string(), + age: z.number(), + hobbies: z.array( + z.object({ + name: z.string(), + years: z.number(), + }), + ), + }), + functions: { + hasHobbies() { + return this.hobbies.length > 0; + }, + }, +}); +``` -The internal prompt templates can be modified [here](/examples/agents/bee_advanced.ts). +## Best Practices + +1. **Schema Definition** + + ```typescript + // Define clear, specific schemas + const schema = z.object({ + required: z.string(), + optional: z.number().optional(), + defaulted: z.string().default("value"), + }); + ``` + +2. 
**Error Handling** + + ```typescript + try { + template.render(input); + } catch (error) { + if (error instanceof ValidationPromptTemplateError) { + console.error("Invalid input:", error.errors); + } + } + ``` + +3. **Template Organization** + + ```typescript + // Create base templates for reuse + const baseTemplate = new PromptTemplate({ + template: "{{content}}", + schema: z.object({ + content: z.string(), + }), + }); + + // Extend for specific uses + const specializedTemplate = baseTemplate.fork((config) => ({ + ...config, + template: `Special: ${config.template}`, + })); + ``` + +4. **Function Helpers** + ```typescript + const template = new PromptTemplate({ + schema: messageSchema, + functions: { + formatDate() { + return new Date(this.date).toLocaleString(); + }, + truncate(text: string) { + return text.length > 100 ? `${text.slice(0, 97)}...` : text; + }, + }, + }); + ``` + +## See Also + +- [Agent System](./agent.md) +- [LLM System](./llms.md) +- [Error Handling](./errors.md) +- [Validation](./validation.md) diff --git a/docs/tools.md b/docs/tools.md index 1ceea4e7..e915f73d 100644 --- a/docs/tools.md +++ b/docs/tools.md @@ -1,131 +1,139 @@ # Tools -> [!TIP] -> -> Location within the framework `bee-agent-framework/tools`. - -Tools in the context of an agent refer to additional functionalities or capabilities integrated with the agent to perform specific tasks beyond text processing. - -These tools extend the agent's abilities, allowing it to interact with external systems, access information, and execute actions. - -## Built-in tools - -| Name | Description | -| ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -| `PythonTool` | Run arbitrary Python code in the remote environment. | -| `WikipediaTool` | Search for data on Wikipedia. | -| `GoogleSearchTool` | Search for data on Google using Custom Search Engine. 
| -| `DuckDuckGoTool` | Search for data on DuckDuckGo. | -| [`SQLTool`](./sql-tool.md) | Execute SQL queries against relational databases. | -| `ElasticSearchTool` | Perform search or aggregation queries against an ElasticSearch database. | -| `CustomTool` | Run your own Python function in the remote environment. | -| `LLMTool` | Use an LLM to process input data. | -| `DynamicTool` | Construct to create dynamic tools. | -| `ArXivTool` | Retrieve research articles published on arXiv. | -| `WebCrawlerTool` | Retrieve content of an arbitrary website. | -| `OpenMeteoTool` | Retrieve current, previous, or upcoming weather for a given destination. | -| `MilvusDatabaseTool` | Perform retrieval queries (search, insert, delete, manage collections) against a MilvusDatabaseTool database. | -| ➕ [Request](https://github.com/i-am-bee/bee-agent-framework/discussions) | | - -All examples can be found [here](/examples/tools). - -> [!TIP] -> -> Would you like to use a tool from LangChain? See the [example](/examples/tools/langchain.ts). +The `Tool` class is the foundation of the Bee Framework's tool system, providing the core interface and functionality for creating specialized capabilities that agents can use to perform specific tasks. Tools extend an agent's abilities beyond pure language processing, enabling interactions with external systems, data processing, and task execution. + +## Overview + +`Tool` defines the standard interface and basic functionality that all tool implementations must follow. It handles input validation, execution flow, caching, error handling, and provides a consistent interface for different tool types like Python execution, web searches, database queries, and custom operations. 
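The validation-then-execution flow described above can be sketched in a few lines of plain TypeScript (a simplified illustration of the pattern, not the framework's actual base class; `SimpleTool` and `EchoTool` are hypothetical):

```typescript
// Simplified sketch of the Tool pattern: the public run() validates input,
// then delegates to the subclass's protected _run().
abstract class SimpleTool<TInput, TOutput> {
  abstract readonly name: string;
  abstract readonly description: string;

  protected abstract validate(input: TInput): boolean;
  protected abstract _run(input: TInput): Promise<TOutput>;

  async run(input: TInput): Promise<TOutput> {
    if (!this.validate(input)) {
      throw new Error(`${this.name}: invalid input`);
    }
    return this._run(input);
  }
}

class EchoTool extends SimpleTool<{ text: string }, string> {
  readonly name = "Echo";
  readonly description = "Returns its input unchanged.";

  protected validate(input: { text: string }): boolean {
    return input.text.length > 0;
  }

  protected async _run(input: { text: string }): Promise<string> {
    return input.text;
  }
}

const echo = new EchoTool();
echo.run({ text: "hello" }).then((out) => console.log(out)); // hello
```

The framework's `Tool` base class layers schema validation, caching, retries, and event emission around the same split between a public `run()` and a protected `_run()`.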
+ +## Architecture + +```mermaid +classDiagram + class Tool { + +string name + +string description + +BaseCache cache + +BaseToolOptions options + +Emitter emitter + +run(input: TInput, options?: TRunOptions) + +pipe(tool: Tool, mapper: Function) + +extend(schema: ZodSchema, mapper: Function) + #_run(input, options, run)* + #validateInput(schema, input) + #preprocessInput(input) + } -## Usage + class ToolOutput { + +getTextContent() + +isEmpty() + +toString() + } -### Basic + class StringToolOutput { + +string result + +Record~string,any~ ctx + } - + class JSONToolOutput { + +T result + +Record~string,any~ ctx + } -```ts -import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo"; + class DynamicTool { + -TInputSchema _inputSchema + -Function handler + } -const tool = new OpenMeteoTool(); -const result = await tool.run({ - location: { name: "New York" }, - start_date: "2024-10-10", - end_date: "2024-10-10", -}); -console.log(result.getTextContent()); + class CustomTool { + +CodeInterpreterClient client + } + + class LLMTool { + +AnyLLM llm + } + + Tool <|-- DynamicTool + Tool <|-- CustomTool + Tool <|-- LLMTool + ToolOutput <|-- StringToolOutput + ToolOutput <|-- JSONToolOutput ``` -_Source: [examples/tools/base.ts](/examples/tools/base.ts)_ +## Core Properties -### Advanced +| Property | Type | Description | +| ------------- | ----------------- | -------------------------------------------------------- | +| `name` | `string` | Unique identifier for the tool | +| `description` | `string` | Natural language description of tool's purpose | +| `options` | `BaseToolOptions` | Configuration options including retry and cache settings | +| `cache` | `BaseCache` | Cache system for tool outputs | +| `emitter` | `Emitter` | Event system for monitoring tool execution | - +## Main Methods -```ts -import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo"; -import { UnconstrainedCache } from "bee-agent-framework/cache/unconstrainedCache"; +### 
Public Methods -const tool = new OpenMeteoTool({ - cache: new UnconstrainedCache(), - retryOptions: { - maxRetries: 3, - }, -}); -console.log(tool.name); // OpenMeteo -console.log(tool.description); // Retrieve current, past, or future weather forecasts for a location. -console.log(tool.inputSchema()); // (zod/json schema) +#### `run(input: TInput, options?: TRunOptions): Promise` -await tool.cache.clear(); +Executes the tool with the given input and options. +```typescript +const tool = new WikipediaTool(); const result = await tool.run({ - location: { name: "New York" }, - start_date: "2024-10-10", - end_date: "2024-10-10", - temperature_unit: "celsius", + query: "Neural networks", + limit: 5, }); -console.log(result.isEmpty()); // false -console.log(result.result); // prints raw data -console.log(result.getTextContent()); // prints data as text +console.log(result.getTextContent()); ``` -_Source: [examples/tools/advanced.ts](/examples/tools/advanced.ts)_ +#### `pipe(tool: Tool, mapper: Function): DynamicTool` -> [!TIP] -> -> To learn more about caching, refer to the [Cache documentation page](./cache.md). - -### Usage with agents - - +Creates a new tool that chains the output of the current tool to another tool. 
-```ts -import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat"; -import { ArXivTool } from "bee-agent-framework/tools/arxiv"; -import { BeeAgent } from "bee-agent-framework/agents/bee/agent"; -import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; - -const agent = new BeeAgent({ - llm: new OllamaChatLLM(), - memory: new UnconstrainedMemory(), - tools: [new ArXivTool()], -}); +```typescript +const searchAndSummarize = wikipediaTool.pipe(llmTool, (input, output) => ({ + input: `Summarize this article: ${output.getTextContent()}`, +})); ``` -_Source: [examples/tools/agent.ts](/examples/tools/agent.ts)_ +#### `extend(schema: ZodSchema, mapper: Function): DynamicTool` -## Writing a new tool +Creates a new tool with modified input schema while reusing the original tool's functionality. -To create a new tool, you have the following options on how to do that: +```typescript +const enhancedSearch = wikipediaTool.extend( + z.object({ + topic: z.string(), + language: z.string().default("en"), + }), + (input) => ({ + query: input.topic, + lang: input.language, + }), +); +``` -- Implement the base [`Tool`](/src/tools/base.ts) class. -- Initiate the [`DynamicTool`](/src/tools/base.ts) by passing your own handler (function) with the `name`, `description` and `input schema`. -- Initiate the [`CustomTool`](/src/tools/custom.ts) by passing your own Python function (code interpreter needed). 
+## Built-in Tools -### Implementing the `Tool` class +| Tool | Description | Input Schema | +| ------------------ | ------------------------ | --------------------------------------- | +| `PythonTool` | Executes Python code | `{ code: string }` | +| `WikipediaTool` | Searches Wikipedia | `{ query: string, limit?: number }` | +| `GoogleSearchTool` | Performs Google searches | `{ query: string, limit?: number }` | +| `SQLTool` | Executes SQL queries | `{ query: string, params?: any[] }` | +| `LLMTool` | Processes text with LLMs | `{ input: string }` | +| `ArXivTool` | Searches academic papers | `{ query: string, limit?: number }` | +| `WebCrawlerTool` | Fetches web content | `{ url: string }` | +| `OpenMeteoTool` | Gets weather data | `{ location: Location, date?: string }` | -The recommended and most sustainable way to create a tool is by implementing the base `Tool` class. +## Tool Implementations -#### Basic +### Standard Tool -```ts +```typescript import { StringToolOutput, Tool, @@ -178,181 +186,7 @@ export class RiddleTool extends Tool { _Source: [examples/tools/custom/base.ts](/examples/tools/custom/base.ts)_ -> [!TIP] -> -> `inputSchema` can be asynchronous. - -> [!TIP] -> -> If you want to return an array or a plain object, use `JSONToolOutput` or implement your own. - -#### Advanced - -If your tool is more complex, you may want to use the full power of the tool abstraction, as the following example shows. 
- - - -```ts -import { - BaseToolOptions, - BaseToolRunOptions, - Tool, - ToolInput, - JSONToolOutput, - ToolError, -} from "bee-agent-framework/tools/base"; -import { z } from "zod"; -import { createURLParams } from "bee-agent-framework/internals/fetcher"; -import { RunContext } from "bee-agent-framework/context"; - -type ToolOptions = BaseToolOptions & { maxResults?: number }; -type ToolRunOptions = BaseToolRunOptions; - -export interface OpenLibraryResponse { - numFound: number; - start: number; - numFoundExact: boolean; - q: string; - offset: number; - docs: Record[]; -} - -export class OpenLibraryToolOutput extends JSONToolOutput { - isEmpty(): boolean { - return !this.result || this.result.numFound === 0 || this.result.docs.length === 0; - } -} - -export class OpenLibraryTool extends Tool { - name = "OpenLibrary"; - description = - "Provides access to a library of books with information about book titles, authors, contributors, publication dates, publisher and isbn."; - - inputSchema() { - return z - .object({ - title: z.string(), - author: z.string(), - isbn: z.string(), - subject: z.string(), - place: z.string(), - person: z.string(), - publisher: z.string(), - }) - .partial(); - } - - static { - this.register(); - } - - protected async _run( - input: ToolInput, - _options: ToolRunOptions | undefined, - run: RunContext, - ) { - const query = createURLParams({ - searchon: input, - }); - const response = await fetch(`https://openlibrary.org?${query}`, { - signal: run.signal, - }); - - if (!response.ok) { - throw new ToolError( - "Request to Open Library API has failed!", - [new Error(await response.text())], - { - context: { input }, - }, - ); - } - - const json: OpenLibraryResponse = await response.json(); - if (this.options.maxResults) { - json.docs.length = this.options.maxResults; - } - - return new OpenLibraryToolOutput(json); - } -} -``` - -_Source: [examples/tools/custom/openLibrary.ts](/examples/tools/custom/openLibrary.ts)_ - -#### Implementation 
Notes - -- **Implement the `Tool` class:** - - - `MyNewToolOutput` is required, must be an implementation of `ToolOutput` such as `StringToolOutput` or `JSONToolOutput`. - - - `ToolOptions` is optional (default BaseToolOptions), constructor parameters that are passed during tool creation - - - `ToolRunOptions` is optional (default BaseToolRunOptions), optional parameters that are passed to the run method - -- **Be given a unique name:** - - Note: Convention and best practice is to set the tool's name to the name of its class - - ```ts - name = "MyNewTool"; - ``` - -- **Provide a natural language description of what the tool does:** - - ❗Important: the agent uses this description to determine when the tool should be used. It's probably the most important aspect of your tool and you should experiment with different natural language descriptions to ensure the tool is used in the correct circumstances. You can also include usage tips and guidance for the agent in the description, but - its advisable to keep the description succinct in order to reduce the probability of conflicting with other tools, or adversely affecting agent behavior. - - ```ts - description = "Takes X action when given Y input resulting in Z output"; - ``` - -- **Declare an input schema:** - - This is used to define the format of the input to your tool. The agent will formalise the natural language input(s) it has received and structure them into the fields described in the tool's input. The input schema can be specified using [Zod](https://github.com/colinhacks/zod) (recommended) or JSONSchema. It must be a function (either sync or async). Zod effects (e.g. `z.object().transform(...)`) are not supported. The return value of `inputSchema` must always be an object and pass validation by the `validateSchema()` function defined in [schema.ts](/src/internals/helpers/schema.ts). Keep your tool input schema simple and provide schema descriptions to help the agent to interpret fields. 
- - - - ```ts - inputSchema() { - // any Zod definition is good here, this is typical simple example - return z.object({ - // list of key-value pairs - expression: z - .string() - .min(1) - .describe( - `The mathematical expression to evaluate (e.g., "2 + 3 * 4").`, - ), - }); - } - ``` - -- **Implement initialisation:** - - The unnamed static block is executed when your tool is called for the first time. It is used to register your tool as `serializable` (you can then use the `serialize()` method). - - - - ```ts - static { - this.register(); - } - ``` - -- **Implement the `_run()` method:** - - - - ```ts - protected async _run(input: ToolInput, options: BaseToolRunOptions | undefined, run: RunContext) { - // insert custom code here - // MUST: return an instance of the output type specified in the tool class definition - // MAY: throw an instance of ToolError upon unrecoverable error conditions encountered by the tool - } - ``` - -### Using the `DynamicTool` class +Using the `DynamicTool` class The `DynamicTool` allows you to create a tool without extending the base tool class. @@ -433,6 +267,59 @@ _Source: [examples/tools/custom/python.ts](/examples/tools/custom/python.ts)_ > Custom tools are executed within the code interpreter, but they cannot access any files. > Only `PythonTool` does. +## Best Practices + +1. **Input Validation** + + ```typescript + inputSchema() { + return z.object({ + query: z.string() + .min(1, "Query cannot be empty") + .max(1000, "Query too long") + .describe("Search query to execute") + }); + } + ``` + +2. **Error Handling** + + ```typescript + try { + const result = await tool.run(input); + } catch (error) { + if (error instanceof ToolInputValidationError) { + // Handle invalid input + } else if (error instanceof ToolError) { + // Handle tool execution errors + } + } + ``` + +3. 
**Caching Strategy** + + ```typescript + const tool = new SearchTool({ + cache: new UnconstrainedCache(), + retryOptions: { + maxRetries: 3, + factor: 2, + }, + }); + ``` + +4. **Event Monitoring** + + ```typescript + tool.emitter.on("start", ({ input }) => { + console.log("Tool execution started:", input); + }); + + tool.emitter.on("success", ({ output }) => { + console.log("Tool execution succeeded:", output); + }); + ``` + ## General Tips ### Data Minimization @@ -441,10 +328,21 @@ If your tool is providing data to the agent, try to ensure that the data is rele ### Provide Hints -If your tool encounters an error that is fixable, you can return a hint to the agent; the agent will try to reuse the tool in the context of the hint. This can improve the agent's ability -to recover from errors. +If your tool encounters an error that is fixable, you can return a hint to the agent; the agent will try to reuse the tool in the context of the hint. This can improve the agent's ability to recover from errors. ### Security & Stability -When building tools, consider that the tool is being invoked by a somewhat unpredictable third party (the agent). You should ensure that sufficient guardrails are in place to prevent -adverse outcomes. +When building tools, consider that the tool is being invoked by a somewhat unpredictable third party (the agent). You should ensure that sufficient guardrails are in place to prevent adverse outcomes. + +1. **Input Sanitization**: Always validate and sanitize inputs before processing +2. **Resource Limits**: Implement timeouts and resource constraints +3. **Access Control**: Restrict tool capabilities based on context +4. **Error Messages**: Avoid exposing sensitive information in errors +5. **Rate Limiting**: Implement rate limiting for external service calls + +## See Also + +- [Agent Documentation](./agent.md) +- [Memory System](./memory.md) +- [LLM Integration](./llms.md) +- [Cache System](./cache.md)