From 6e5daa89c588cbdbd1a5feee0e3507008c048c42 Mon Sep 17 00:00:00 2001
From: Carson Kahn
Date: Tue, 7 Nov 2023 04:30:28 +0000
Subject: [PATCH] Doc ways to improve reproducibility besides Temp

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f0b609088..4cb77db6b 100644
--- a/README.md
+++ b/README.md
@@ -757,8 +757,9 @@ Even when specifying a temperature field of 0, it doesn't guarantee that you'll
 Due to the factors mentioned above, different answers may be returned even for the same question.
 
 **Workarounds:**
-1. Using `math.SmallestNonzeroFloat32`: By specifying `math.SmallestNonzeroFloat32` in the temperature field instead of 0, you can mimic the behavior of setting it to 0.
-2. Limiting Token Count: By limiting the number of tokens in the input and output and especially avoiding large requests close to 32k tokens, you can reduce the risk of non-deterministic behavior.
+1. As of November 2023, use [the new `seed` parameter](https://platform.openai.com/docs/guides/text-generation/reproducible-outputs) in conjunction with the `system_fingerprint` response field, alongside temperature management.
+2. Try using `math.SmallestNonzeroFloat32`: By specifying `math.SmallestNonzeroFloat32` in the temperature field instead of 0, you can mimic the behavior of setting it to 0.
+3. Limiting Token Count: By limiting the number of tokens in the input and output, and especially by avoiding large requests close to 32k tokens, you can reduce the risk of non-deterministic behavior.
 
 By adopting these strategies, you can expect more consistent results.
 
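For reference, a minimal Go sketch of how the `seed` / `system_fingerprint` workaround added above could be combined with the existing `math.SmallestNonzeroFloat32` temperature trick. It assumes a go-openai version whose `ChatCompletionRequest` exposes a `Seed *int` field and whose response exposes `SystemFingerprint`; the seed value, model, and prompt are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"math"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient("your-api-token")

	// Fixed seed: with the same seed and parameters, the API makes a
	// best-effort attempt to return the same completion.
	seed := 42 // placeholder value

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT3Dot5Turbo,
			// Smallest positive float32 instead of 0, so the field survives
			// omitempty and is not replaced by the API default of 1.
			Temperature: math.SmallestNonzeroFloat32,
			// Assumes Seed is available in your go-openai version.
			Seed: &seed,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Say this is a test"},
			},
		},
	)
	if err != nil {
		fmt.Printf("ChatCompletion error: %v\n", err)
		return
	}

	// If SystemFingerprint differs between two calls, the backend
	// configuration changed and identical output should not be expected,
	// even with the same seed.
	fmt.Println("system_fingerprint:", resp.SystemFingerprint)
	fmt.Println(resp.Choices[0].Message.Content)
}
```

Even with a fixed seed, reproducibility remains best-effort: comparing `system_fingerprint` across calls is the documented way to tell whether a change in output is due to a backend change rather than sampling.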