diff --git a/README.md b/README.md
index 3a3a77d..fee9599 100644
--- a/README.md
+++ b/README.md
@@ -67,6 +67,13 @@ Related Codes:
 - By topic: [doc/awesome_papers.md](/doc/awesome_paper.md)
 - By date: [doc/awesome_paper_date.md](/doc/awesome_paper_date.md)
 
+*Updated at 2024-12-25:*
+
+- Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions [[arxiv](http://arxiv.org/abs/2412.16504)]
+  - Privacy in LLM fine-tuning
+
+- Learning to Generate Gradients for Test-Time Adaptation via Test-Time Training Layers [[arxiv](http://arxiv.org/abs/2412.16901)]
+  - Generate gradients for TTA
 
 *Updated at 2024-12-19:*
 
diff --git a/doc/awesome_paper.md b/doc/awesome_paper.md
index 9bab627..5e74a9a 100644
--- a/doc/awesome_paper.md
+++ b/doc/awesome_paper.md
@@ -183,6 +183,9 @@ Here, we list some papers by topic. For list by date, please refer to [papers by
 
 ## Per-training/Finetuning
 
+- Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions [[arxiv](http://arxiv.org/abs/2412.16504)]
+  - Privacy in LLM fine-tuning
+
 - Transfer Learning on Multi-Dimensional Data: A Novel Approach to Neural Network-Based Surrogate Modeling [[arxiv](http://arxiv.org/abs/2410.12241)]
   - Transfer learning on multi-dimensioal data
 
@@ -2023,6 +2026,9 @@ Here, we list some papers by topic. For list by date, please refer to [papers by
 
 ### Papers
 
+- Learning to Generate Gradients for Test-Time Adaptation via Test-Time Training Layers [[arxiv](http://arxiv.org/abs/2412.16901)]
+  - Generate gradients for TTA
+
 - Is Large-Scale Pretraining the Secret to Good Domain Generalization? [[arxiv](https://arxiv.org/abs/2412.02856)]
   - Large-scale pre-training vs domain generalization