The hit rate history is kept per worker. This means that the meaning of hitrate2000 varies with the number of nodes. Another complicating issue is that events are not round-robined between workers in a fair way (with the psana backend), so some of the hit history trails behind for a long time.
Should the history be changed to a total across workers, rather than per worker, given that the hit rate code already does an MPI reduction? This would give more comparable results when running in single-process or MPI mode, or when varying the MPI width.
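To make the difference concrete, here is a minimal sketch (not Hummingbird's actual code) contrasting a per-worker trailing window with a globally reduced hit rate, assuming mpi4py is available. Names like `history_length` and the simulated hit classification are illustrative only.

```python
from collections import deque
import random

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

history_length = 2000              # e.g. the "2000" in hitrate2000
local_history = deque(maxlen=history_length)
rng = random.Random(rank)

for event in range(10000):
    # Each worker classifies only its own events, so with N workers a
    # per-worker window of 2000 effectively spans ~2000*N events in total.
    is_hit = rng.random() < 0.05
    local_history.append(is_hit)

    if event % 1000 == 0:
        # Per-worker hit rate: depends on how many workers there are and
        # on how evenly events were distributed.
        local_rate = sum(local_history) / max(len(local_history), 1)

        # Globally reduced hit rate: sum hits and window sizes across all
        # ranks, so the result is comparable regardless of the MPI width.
        total_hits = comm.allreduce(sum(local_history), op=MPI.SUM)
        total_events = comm.allreduce(len(local_history), op=MPI.SUM)
        global_rate = total_hits / max(total_events, 1)

        if rank == 0:
            print(f"event {event}: local={local_rate:.3f} global={global_rate:.3f}")
```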
It would make sense and give more accurate results if we had a global history instead of a per-worker one, but who would keep that global history in memory, the master? For large histories this could be a performance problem, I guess. But yeah, if performance is not an issue, I think we should change to a global hit rate history.
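A hedged sketch of that alternative, under the assumption that workers send only a hit flag per event and the master keeps the single global history; all names here are illustrative, not Hummingbird's API:

```python
from collections import deque
import random

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

GLOBAL_HISTORY_LEN = 2000          # the memory cost lives on one rank only
EVENTS_PER_WORKER = 1000           # fixed here just to make the sketch finite

if rank == 0:
    # The master maintains one global trailing window of hit flags.
    global_history = deque(maxlen=GLOBAL_HISTORY_LEN)
    expected = EVENTS_PER_WORKER * (comm.Get_size() - 1)
    for received in range(1, expected + 1):
        is_hit = comm.recv(source=MPI.ANY_SOURCE, tag=0)
        global_history.append(is_hit)
        if received % 500 == 0:
            rate = sum(global_history) / len(global_history)
            print(f"global hit rate over last {len(global_history)} events: {rate:.3f}")
else:
    rng = random.Random(rank)
    for _ in range(EVENTS_PER_WORKER):
        # One small message per event; this is where the performance
        # concern mentioned above would show up at high event rates.
        comm.send(rng.random() < 0.05, dest=0, tag=0)
```

The trade-off is between one small extra message per event (or reusing the existing reduction) and keeping N separate, non-comparable windows.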