diff --git a/federatedml/logistic_regression/README.md b/federatedml/logistic_regression/README.md
index 591e1f5835..d2b096e224 100644
--- a/federatedml/logistic_regression/README.md
+++ b/federatedml/logistic_regression/README.md
@@ -23,7 +23,7 @@ The HomoLR process can be shown as above figure, Party A and Party B has same st
 In each iteration, each party train model among their own data. After that, they upload their encrypted(or not,
 depends on your configuration) gradient to arbiter. The arbiter will aggregate these gradients to form a federated
 gradient with which the parties can update their model. Just like the traditional LR, the fitting stop when
-model converge or reach the max iterations. More details is available in this [paper]()
+model converge or reach the max iterations. More details is available in this [paper](https://dl.acm.org/citation.cfm?id=3133982)
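
The hunk above describes the HomoLR loop: each party computes a gradient on its own data, the arbiter aggregates the parties' gradients into a federated gradient, and every party applies the same update until convergence or the max iteration count. The following is a minimal sketch of that aggregation step only; the function names (`local_gradient`, `aggregate`) and the plain (unencrypted) gradients are illustrative assumptions, not FATE's actual HomoLR API.

```python
import numpy as np


def local_gradient(weights, features, labels):
    """Logistic-regression gradient computed by one party on its own data."""
    preds = 1.0 / (1.0 + np.exp(-features @ weights))
    return features.T @ (preds - labels) / len(labels)


def aggregate(party_gradients, party_sizes):
    """Arbiter side: sample-size-weighted average forming the federated gradient."""
    total = sum(party_sizes)
    return sum(g * (n / total) for g, n in zip(party_gradients, party_sizes))


# One toy iteration with two parties sharing the same feature space (homogeneous LR).
rng = np.random.default_rng(0)
w = np.zeros(3)
X_a, y_a = rng.normal(size=(8, 3)), rng.integers(0, 2, 8)
X_b, y_b = rng.normal(size=(5, 3)), rng.integers(0, 2, 5)

fed_grad = aggregate(
    [local_gradient(w, X_a, y_a), local_gradient(w, X_b, y_b)],
    [len(y_a), len(y_b)],
)
w -= 0.1 * fed_grad  # each party applies the same federated update
```

In the real protocol the uploaded gradients may be encrypted (e.g. homomorphically), so the arbiter aggregates ciphertexts without seeing individual parties' values; the plain NumPy arrays here stand in for that step only for illustration.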