can't reach the F1 80.4 #14
Comments
I ran this code and the F1 result is only 68%. I did not change any parameters.
My F1 result is only 68%, but the F1 in the paper is 84. What score did you get in the end?
Hi SeoSangwoo, thank you very much for sharing the code. When I ran it, I also encountered the problems mentioned above: the F1 result is not the same as in the paper. I found that PI was not considered in the code, is that right?
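(Assuming "PI" here means the position indicators from the Att-BLSTM paper, i.e. marker tokens wrapped around the two entities before the sentence is fed to the BiLSTM, a minimal hypothetical sketch of that preprocessing step could look like this; the function name and spans are made up for illustration:)

```python
# Hypothetical sketch of "PI" = position indicators (an assumption, not code
# from this repository): wrap the two entity spans in <e1>...</e1> and
# <e2>...</e2> marker tokens before feeding the sentence to the BiLSTM.
def add_position_indicators(tokens, e1_span, e2_span):
    """e1_span / e2_span are (start, end) token indices, end exclusive."""
    out = []
    for i, tok in enumerate(tokens):
        if i == e1_span[0]:
            out.append("<e1>")
        if i == e2_span[0]:
            out.append("<e2>")
        out.append(tok)
        if i == e1_span[1] - 1:
            out.append("</e1>")
        if i == e2_span[1] - 1:
            out.append("</e2>")
    return out

print(add_position_indicators(["the", "fire", "caused", "the", "damage"], (1, 2), (4, 5)))
# ['the', '<e1>', 'fire', '</e1>', 'caused', 'the', '<e2>', 'damage', '</e2>']
```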
Does anyone get the best F1? Would you mind sharing your hyperparameter settings?
Hello, I am sorry that I only just found this email and could not reply to you in time. I did not manage to reproduce the results of the paper either. I would like to ask you a question about the code: why is tf.reduce_sum used here to reduce the dimensions? Is it the best way? I would appreciate it if you could answer my question.
output = tf.reduce_sum(inputs * tf.expand_dims(alphas, -1), 1)
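That line computes the attention-weighted sum over the time dimension: each BiLSTM output vector h_t is scaled by its weight alpha_t and the scaled vectors are summed, collapsing [batch, seq_len, hidden] into one [batch, hidden] sentence vector. A minimal sketch of the same computation (TensorFlow 2 style; the shapes here are assumptions, not values from the repository):

```python
import tensorflow as tf

batch, seq_len, hidden = 2, 5, 8
inputs = tf.random.normal([batch, seq_len, hidden])          # BiLSTM outputs h_t
alphas = tf.nn.softmax(tf.random.normal([batch, seq_len]))   # attention weights alpha_t

# Broadcast alphas over the hidden dimension and sum over time:
# r_b = sum_t alphas[b, t] * inputs[b, t, :]   -> shape [batch, hidden]
output = tf.reduce_sum(inputs * tf.expand_dims(alphas, -1), 1)

# Equivalent einsum formulation, useful as a sanity check.
output_einsum = tf.einsum("bt,bth->bh", alphas, inputs)
tf.debugging.assert_near(output, output_einsum)
```

So reduce_sum is not an arbitrary dimensionality reduction; it is the direct way to write r = Σ_t α_t h_t for the attention pooling step.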
<<< (9+1)-WAY EVALUATION TAKING DIRECTIONALITY INTO ACCOUNT -- OFFICIAL >>>
Confusion matrix: Coverage = 2717/2717 = 100.00%
Results for the individual relations:
Micro-averaged result (excluding Other):
MACRO-averaged result (excluding Other):
<<< The official score is (9+1)-way evaluation with directionality taken into account: macro-averaged F1 = 81.04% >>>
This is with the default hyperparameters, a small batch size of 10, and the GloVe 6B 100d embeddings.
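For anyone comparing their own numbers against output like this: the reported figure is the macro-averaged F1 over the nine actual relation classes, with "Other" excluded from the average. A simplified sketch of that metric with toy labels (it does not reproduce the directionality handling of the official SemEval-2010 Task 8 Perl scorer):

```python
from sklearn.metrics import f1_score

# Toy subset of the relation label set, for illustration only.
relation_labels = ["Cause-Effect", "Component-Whole", "Entity-Destination"]

y_true = ["Cause-Effect", "Other", "Component-Whole", "Entity-Destination", "Other"]
y_pred = ["Cause-Effect", "Component-Whole", "Component-Whole", "Other", "Other"]

# Passing labels= restricts the macro average to the real relations,
# so "Other" does not contribute a per-class F1 of its own.
macro_f1 = f1_score(y_true, y_pred, labels=relation_labels, average="macro")
print(f"macro-F1 (excluding Other): {macro_f1:.4f}")
```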
Using BiLSTM + attention in PyTorch, I got a best macro-F1 (excluding the relation 'Other') of 79.2%. I guess that some missing tricks may lead to a worse result compared to the F1 reported in the paper.
hao <[email protected]> wrote on Tuesday, August 27, 2019, at 4:28 PM:
"I also cannot reach 80%. My best result using PyTorch is only 71.3% (BiLSTM + ATTN) and 69.9% (BiLSTM). Is the result reported in the paper correct?"
For everyone's reference, I ran the code on Sept 9, 2019, did not change anything, and obtained macro-averaged F1 = 81.37%. My environment is:
Hello, I have never used TensorFlow, so could you share your code with me? Recently I copied a piece of code from someone else, which only gets a P of 62%, and I can't find the problems, so I would like to study your code in PyTorch.
Hi SeoSangwoo, when I run your code the F1 result is 81.56, but the F1 in the paper is 84. What score did you get in the end?