
Preface

Some questions and their answers:

  1. What is Neural Collapse (NC)?
     - Awesome paper hub: Awesome-Deep-phenomena
     - Most relevant papers:
       1. Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations. [paper] Yongyi Yang, Jacob Steinhardt, Wei Hu. ICML 2023
       2. Imbalance Trouble: Revisiting Neural-Collapse Geometry. [paper] Christos Thrampoulidis, Ganesh R. Kini, Vala Vakilian, Tina Behnia.
       3. Limitations of Neural Collapse for Understanding Generalization in Deep Learning. [paper] Like Hui, Mikhail Belkin, Preetum Nakkiran.

  2. What is the current theoretical research on algorithmic fairness? I categorize the theoretical research before 09/2023 into three parts: metric-based, data-based, and distribution-based. See the section [Related Work](https://github.com/Ytang520/nolebase-template/blob/main/public/analysis/LLM%2Bfair_tabular_prediction.pdf)

  3. Why do we need the NC view for algorithmic fairness? (1) Why not others? Previous works focus more on trade-offs (accuracy vs. fairness; different fairness notions...) while ignoring the details of training. There are hundreds of studies designing thousands of debiasing methods, but we have no idea what their similarities and differences are, which distributions they suit, under what conditions they are the best choice, etc. A framework covering all those methods would help us decide which one is more suitable to use.

(2) Why NC? NC represents the terminal phase of training, and we may assume that models gradually reach that phase. A deeper look at the ter
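As a concrete handle on what "collapse" means in this terminal phase, here is a minimal sketch (my own illustration, not from the post) of the standard NC1 variability statistic: the ratio of within-class to between-class feature scatter, which tends toward zero as representations collapse to their class means.

```python
import numpy as np

def nc1_variability(features, labels):
    """Trace of within-class scatter divided by trace of between-class
    scatter. A value near 0 indicates within-class variability collapse."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in classes:
        fc = features[labels == c]          # features of class c
        mu_c = fc.mean(axis=0)              # class mean
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return within / between

# Toy check on synthetic features: tight clusters around three
# well-separated class means give a ratio close to 0 ("collapsed").
rng = np.random.default_rng(0)
means = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0]])
labels = np.repeat([0, 1, 2], 100)
features = means[labels] + 0.01 * rng.standard_normal((300, 2))
print(nc1_variability(features, labels))
```

In the fairness setting, one could compute this per demographic group to ask whether different groups collapse at different rates, which is the kind of training-dynamics detail the NC view exposes.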

Contributors

Hua Tang

File History

Written by Normal Person