Plotting Learning Curve for best model found after optimization in talos #153
I am trying to plot the learning curve for the best model found after the optimization in Talos. Is there a way to do it? If the answer is yes, how can I plot the learning curve?
You can't with Talos. But you can get a live training plot during the experiment by following the example here.
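For readers without access to the linked example, a minimal sketch of a Talos model function with live plotting might look like the following. It assumes the `live()` callback helper referenced in the Talos docs (its exact import path has moved between versions), and parameter names such as `first_neuron` are illustrative.

```python
# Minimal sketch of a Talos model function that shows a live training plot.
# Assumes the live() plotting callback from the Talos docs; the import path
# (talos.utils.live vs. talos.live) may differ between versions.
import talos
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense


def model_fn(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'], input_dim=x_train.shape[1],
                    activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer=params['optimizer'],
                  loss='binary_crossentropy',
                  metrics=['acc'])

    # the live() callback redraws the training curves as the epochs progress
    out = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=params['batch_size'],
                    epochs=params['epochs'],
                    verbose=0,
                    callbacks=[talos.utils.live()])
    return out, model
```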
I am also interested in this, and not only for the best model. If this data could be saved for all models, it would be a nice enhancement. @mikkokotila: would it fit Talos' design to add this if it could be nicely implemented?
In that case we have to store the history from Keras in the Scan object.
Great! As long as the data is there it will be very useful; as you say, displaying it can be figured out later. What I would do with the data is generate an image file on disk with the learning graph for each permutation that interested me (or a folder with all of them), as sketched below. I have not delved into Talos' reporting facilities enough to see whether such an operation would fit into them.
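As a rough illustration of that idea, here is a minimal sketch that writes one learning-curve image per permutation. It assumes the per-permutation Keras histories have already been collected into a list of dicts (as discussed further down); `histories` and `save_learning_curves` are made-up names, not part of Talos.

```python
# Sketch: write one learning-curve image per permutation, assuming the
# epoch-by-epoch Keras histories were collected into a list of dicts
# (one dict per permutation). All names here are illustrative, not Talos API.
import os
import matplotlib.pyplot as plt


def save_learning_curves(histories, out_dir='learning_curves'):
    os.makedirs(out_dir, exist_ok=True)
    for i, history in enumerate(histories):
        plt.figure()
        plt.plot(history['loss'], label='loss')
        if 'val_loss' in history:
            plt.plot(history['val_loss'], label='val_loss')
        plt.xlabel('epoch')
        plt.ylabel('loss')
        plt.legend()
        plt.title(f'permutation {i}')
        plt.savefig(os.path.join(out_dir, f'permutation_{i:04d}.png'))
        plt.close()
```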
I'd be open to creating a simple dashboard that runs in a web browser. Something that runs on Flask, so it's also a REST API which allows rapidly deploying custom dashboards, etc. First for experiment analysis, and eventually also for real-time monitoring. That's basically what will be needed anyway for meaningful "during the experiment" changes. So we'd end up with a REST API where you can command LIVE experiments. Basically the dashboard could end up more like a real-time strategy game where you watch the experiment and work together with the machine to "play the game".
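As a sketch of that idea only (not an existing Talos feature), a minimal Flask endpoint exposing stored experiment results to a browser dashboard could look like this; the `experiment_log.json` file name is assumed.

```python
# Sketch of the minimal Flask/REST idea described above: serve the stored
# experiment results as JSON so a browser dashboard can poll them.
# This is an illustration only; the file name is assumed.
import json
from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/results')
def results():
    # read whatever the experiment has written so far and return it as JSON
    with open('experiment_log.json') as f:
        return jsonify(json.load(f))


if __name__ == '__main__':
    app.run(port=5000)
```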
Sounds ambitious, I would be more than satisfied with a simple JSON file with the data :)
Sure, let's start with a list of dictionaries stored into the Scan object.
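To make the idea concrete, here is a minimal sketch of what such a list of dictionaries could look like if collected by hand inside the model function and then dumped to the simple JSON file mentioned above. This only illustrates the data structure, it is not how Talos ended up implementing it, and `build_model` is a hypothetical helper.

```python
# Sketch of the "list of dictionaries" idea: append each permutation's
# history.history (a dict of per-epoch metric lists) to a list inside the
# model function, then dump the list to a JSON file after the experiment.
import json

histories = []  # one dict per permutation


def model_fn(x_train, y_train, x_val, y_val, params):
    model = build_model(params)  # hypothetical model-building helper
    out = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=params['epochs'],
                    verbose=0)
    histories.append(out.history)  # e.g. {'loss': [...], 'val_loss': [...]}
    return out, model


# after Scan() has finished:
with open('experiment_history.json', 'w') as f:
    json.dump(histories, f)
```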
Great |
This version introduces a self-contained ParamSpace API, giving Scan() a single-line interface to qualified permutations. The change allows important streamlining of Talos' main procedural code, and these simplifications are already reflected in this commit. Other changes:

- a major overhaul of reducers; it's now very easy to add a reduction strategy in a single file/function
- moved all logging/results related code to /logging
- deals with #153
- major cleanup of Scan() arguments
- adds a reducer that takes a metric, a threshold, and a loss flag, e.g. ['val_acc', 0.9, False], where the experiment ends once a model meets the given metric threshold (see the sketch after this list)
- /examples is now /templates
- many redundant functions/files were deleted
- ~100 lines of code were removed from the mainline code, which was notably streamlined
- reduction no longer has prepare or finish
- learning entropy (metrics/entropy.py) is completely rewritten

A big bunch of other things, so do check it out for yourself. NOTE: this version is still under testing.
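A minimal sketch of how that metric/threshold/loss reducer might be used, assuming it is exposed as a Scan() argument (it appears as `performance_target` in later Talos versions; the other argument names shown here may also vary between versions):

```python
# Sketch only: end the experiment once any model reaches val_acc >= 0.9.
# ['val_acc', 0.9, False] reads as [metric, threshold, is_loss]; False means
# the metric is not a loss, so higher is better. x, y, params and model_fn
# are assumed to be defined as in the earlier examples in this thread.
import talos

scan_object = talos.Scan(x=x,
                         y=y,
                         params=params,
                         model=model_fn,
                         experiment_name='threshold_demo',
                         performance_target=['val_acc', 0.9, False])
```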
- added `talos.utils.ExperimentLogCallback`, which allows storing epoch-by-epoch training data on the local machine during the experiment (implements the request in #153; see the sketch after this list)
- added an "edit on github" link to the docs
- added free-text search to the docs
- added some styling to the docs
- added analytics to the docs
- added "copy to clipboard" to code snippets in the docs
- added some tests
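A minimal sketch of using `ExperimentLogCallback` inside the model function follows. The constructor arguments shown (an experiment name and the round's `params` dict) follow the docs but may differ between versions, and the layer sizes and parameter names are illustrative.

```python
# Sketch: log epoch-by-epoch training data to disk during a Talos experiment
# via ExperimentLogCallback. The exact constructor signature is assumed from
# the docs and may differ between Talos versions.
import talos
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense


def model_fn(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'], input_dim=x_train.shape[1],
                    activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

    out = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=params['epochs'],
                    verbose=0,
                    callbacks=[talos.utils.ExperimentLogCallback('my_experiment',
                                                                 params)])
    return out, model
```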
Somewhat related to #207. JavaScript frontend work is somewhat painful and tedious compared to backend Python work, but I'll probably get to it sometime during the autumn.
I'm closing this as this now lives in Gamify. |