In `org.clulab.reach.mentions.serialization.json.JSONSerializer`, the array of mentions read in with `toCorefMentions(...): Seq[CorefMention]` is passed to `toCorefMentionsMap(...): Map[String, CorefMention]`. The map's string key comes from `toCorefMentionWithId`, which returns `(mentionId, mention)`. The map cannot hold two mentions with the same `mentionId`, so "duplicates" are removed. The `mentionId` is calculated in processors and does not take into account the `antecedents` in `Anaphoric`, so we lose mentions that differ only in their antecedents. The map is promptly converted back into a `Seq` by `map.values.toSeq`, but by then the mentions are already gone. Mentions that were serialized do not all come back after deserialization, which does not fit the definition of serialization.
I'm not sure what the intention is. Should the "duplicates" be there in the first place? Should they be exempt from serialization? Does the definition of `equivalenceHash` need to change so that mentions can be distinguished by their antecedents? My plan is to skip this conversion to a map and back so that no mentions are lost, roughly as sketched below.
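To make the failure mode concrete, here is a minimal Scala sketch of the lossy round trip. `CorefMention` and `mentionId` below are simplified stand-ins, not the actual Reach/processors definitions; only the keying behavior matters.

```scala
// Self-contained sketch of the lossy round trip; CorefMention and mentionId
// are simplified stand-ins for the real classes.
object MapRoundTripSketch extends App {
  case class CorefMention(text: String, antecedents: Set[String]) {
    // Like the processors mentionId, this ignores the antecedents.
    def mentionId: String = text.hashCode.toString
  }

  val mentions = Seq(
    CorefMention("it", Set("the protein")),
    CorefMention("it", Set("the complex")) // same mentionId, different antecedents
  )

  // The detour through a Map keyed on mentionId collapses the two mentions...
  val asMap: Map[String, CorefMention] = mentions.map(m => m.mentionId -> m).toMap
  // ...so converting back yields fewer mentions than were serialized.
  val roundTripped: Seq[CorefMention] = asMap.values.toSeq
  println(roundTripped.size) // 1: one mention has silently disappeared

  // Skipping the map and keeping the deserialized Seq as-is avoids the loss.
  println(mentions.size) // 2
}
```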
The fact that the current kind of mention ID allows duplicates seems to make it unrealistic to use that scheme for "flat-JSON" output, where IDs would have to be unique. It isn't much of a problem for the tree-JSON output, because there the mention IDs are mostly superfluous: all the data is present without them, and they are thrown away when a `MentionOp` is turned back into a `Mention`.
The current plan is to output the regular IDs, but also to calculate a more complete `flatID` that can be used to distinguish the more complicated cases.
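For illustration, here is one way such a `flatID` could be computed, assuming it simply folds the antecedents into the regular ID. The helper name and signature are hypothetical, not the planned implementation.

```scala
import scala.util.hashing.MurmurHash3

// Hypothetical flatID: combine the regular mentionId with an order-independent
// hash of the antecedent IDs, so mentions that share a mentionId but differ in
// their antecedents get distinct keys. Names and signature are assumptions.
def flatId(mentionId: String, antecedentIds: Seq[String]): String = {
  val antecedentHash = MurmurHash3.unorderedHash(antecedentIds)
  s"$mentionId-$antecedentHash"
}

// e.g. flatId("T1", Seq("T5")) differs from flatId("T1", Seq("T6")),
// even though both share the same regular mentionId.
```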