8.7.0 #636

Merged
merged 3 commits on Sep 23, 2024
@@ -94,7 +94,7 @@ public IList<string>? StopCalculated


/// <summary>
/// An upper bound for the number of tokens that can be generated for a completion,
/// including visible output tokens and reasoning tokens.
/// </summary>
/// <see href="https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_completion_tokens" />
@@ -267,6 +267,12 @@ public ResponseFormats? ChatResponseFormat
[JsonPropertyName("top_logprobs")]
public int? TopLogprobs { get; set; }

/// <summary>
/// Whether to enable parallel <a href="https://platform.openai.com/docs/guides/function-calling/parallel-function-calling">function calling</a> during tool use.
/// </summary>
[JsonPropertyName("parallel_tool_calls")]
public bool? ParallelToolCalls { get; set; }

/// <summary>
/// ID of the model to use. For supported models, see <see cref="OpenAI.ObjectModels.Models" />; chat model names start with <c>Gpt_</c>.
/// </summary>
@@ -291,4 +297,15 @@ public IEnumerable<ValidationResult> Validate()
/// </summary>
[JsonPropertyName("user")]
public string User { get; set; }

/// <summary>
/// Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
/// If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
/// If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
/// If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
/// When not set, the default behavior is 'auto'.
/// When this parameter is set, the response body will include the service_tier utilized.
/// </summary>
[JsonPropertyName("service_tier")]
public string? ServiceTier { get; set; }
}
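The two new request properties serialize under `parallel_tool_calls` and `service_tier`, as the attributes above declare. A minimal, self-contained sketch of how they appear on the wire — `ChatRequestSketch` is a hypothetical stand-in for the library's much larger request type, shown here only to illustrate the `System.Text.Json` attribute behavior:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical stand-in mirroring only the two new request properties.
public record ChatRequestSketch
{
    [JsonPropertyName("parallel_tool_calls")]
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public bool? ParallelToolCalls { get; set; }

    [JsonPropertyName("service_tier")]
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? ServiceTier { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var request = new ChatRequestSketch
        {
            ParallelToolCalls = false, // opt out of parallel function calling
            ServiceTier = "auto"       // use scale tier credits when available
        };
        // Unset (null) nullable properties are omitted from the payload.
        Console.WriteLine(JsonSerializer.Serialize(request));
        // → {"parallel_tool_calls":false,"service_tier":"auto"}
    }
}
```

Because both properties are nullable and ignored when null, existing callers that never touch them produce exactly the same request payload as before.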
@@ -5,21 +5,48 @@ namespace OpenAI.ObjectModels.ResponseModels;

public record ChatCompletionCreateResponse : BaseResponse, IOpenAiModels.IId, IOpenAiModels.ICreatedAt
{
/// <summary>
/// The model used for the chat completion.
/// </summary>
[JsonPropertyName("model")]
public string Model { get; set; }

/// <summary>
/// A list of chat completion choices. Can be more than one if n is greater than 1.
/// </summary>
[JsonPropertyName("choices")]
public List<ChatChoiceResponse> Choices { get; set; }

/// <summary>
/// Usage statistics for the completion request.
/// </summary>
[JsonPropertyName("usage")]
public UsageResponse Usage { get; set; }

/// <summary>
/// This fingerprint represents the backend configuration that the model runs with.
/// Can be used in conjunction with the seed request parameter to understand when backend changes have been made that
/// might impact determinism.
/// </summary>
[JsonPropertyName("system_fingerprint")]
public string SystemFingerPrint { get; set; }

/// <summary>
/// The service tier used for processing the request. This field is only included if the service_tier parameter is
/// specified in the request.
/// </summary>
[JsonPropertyName("service_tier")]
public string? ServiceTier { get; set; }

/// <summary>
/// The Unix timestamp (in seconds) of when the chat completion was created.
/// </summary>
[JsonPropertyName("created")]
public int CreatedAt { get; set; }

/// <summary>
/// A unique identifier for the chat completion.
/// </summary>
[JsonPropertyName("id")]
public string Id { get; set; }
}
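On the response side, `service_tier` is only echoed back when the request set the `service_tier` parameter, which is why the property is nullable. A self-contained sketch of deserializing it — `ChatResponseSketch` and the sample payload are hypothetical, standing in for the full `ChatCompletionCreateResponse` type:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical stand-in for two of the response fields shown above.
public record ChatResponseSketch
{
    [JsonPropertyName("service_tier")]
    public string? ServiceTier { get; set; }

    [JsonPropertyName("system_fingerprint")]
    public string? SystemFingerPrint { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Sample payload; service_tier appears only when requested.
        const string payload =
            "{\"service_tier\":\"default\",\"system_fingerprint\":\"fp_abc\"}";
        var response = JsonSerializer.Deserialize<ChatResponseSketch>(payload);
        Console.WriteLine(response?.ServiceTier); // → default

        // A payload without the field simply leaves the property null.
        var older = JsonSerializer.Deserialize<ChatResponseSketch>("{}");
        Console.WriteLine(older?.ServiceTier is null); // → True
    }
}
```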
2 changes: 2 additions & 0 deletions Readme.md
@@ -118,6 +118,8 @@ Needless to say, I cannot accept responsibility for any damage caused by using t
### 8.7.0
- Added support for o1 reasoning models (`o1-mini` and `o1-preview`).
- Added `MaxCompletionTokens` for `chat completions`.
- Added support for `ParallelToolCalls` for `chat completions`.
- Added support for `ServiceTier` for `chat completions`.
- Added support for `ChunkingStrategy` in `Vector Store` and `Vector Store Files`.
- Added support for `Strict` in `ToolDefinition`.
- Added support for `MaxNumberResults` and `RankingOptions` for `FileSearchTool`.