
Temperature for Mistral 7B (DirectML model) doesn't seem to apply in Bulk Run #136

Open
SteveJSteiner opened this issue Dec 18, 2024 · 4 comments

@SteveJSteiner

  1. Load the Mistral 7B (DirectML) model (in this case onto an RTX 4090, so there was plenty of headroom).
  2. Click Bulk Run.
  3. Use Load Sample Dataset.
  4. Run the bulk run.
  5. Change the temperature to 0.
  6. Re-run the bulk run: nothing changes.
  7. Change the temperature to 1.5.
  8. Re-run the bulk run: nothing changes.

Expected: A difference in temperature should cause at least minor wording changes in the output.
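For context on the expected behavior: temperature conventionally rescales the logits before softmax sampling. The sketch below is illustrative Python only, not AI Toolkit's actual DirectML/ONNX code path, but it shows why outputs should differ between temperature 0 and 1.5 when the parameter is actually applied:

```python
import math
import random


def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from logits after temperature scaling.

    temperature ~ 0  -> greedy argmax (deterministic)
    temperature > 1  -> flatter distribution (more varied output)
    """
    rng = rng or random
    if temperature <= 1e-6:
        # Degenerate case: temperature 0 means "always pick the top logit".
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits, then softmax (subtracting the max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With temperature 0, repeated runs always return the same argmax token; with temperature 1.5 the sampled tokens vary from draw to draw. Identical bulk-run output at both settings therefore suggests the temperature value is never reaching the sampler.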

Thank you for contacting us! Your feedback is important to us, and we will do our best to respond to your issue as soon as possible. Some issues require additional investigation; we will usually get back to you within two days by adding comments to this issue. Please stay tuned.

The microsoft-github-policy-service bot added the needs attention (The issue needs contributor's attention) label on Dec 18, 2024.
@swatDong
Contributor

@jiaxuwu2021 - could you or anyone else please take a look at this DML model issue? Is temperature applied to local models?

@jiaxuwu2021

FYI @vortex-captain

@vortex-captain

The fix will be shipped with the next release.
