First of all, thank you so much for providing such a good GitHub repository.
I ran the code myself; it is very reproducible, and it has been very helpful for building my own work on top of it.
One point I find questionable is the following:
Some papers claim strongly that their method definitely outperforms a randomly sampled coreset. "Deep Learning on a Data Diet: Finding Important Examples Early in Training" is one representative example. However, according to the results reported in this repository for that paper's method, it seems to perform worse than random sampling at coreset fractions below 10%.
This seems like a significant discrepancy. Is this phenomenon due to insufficient hyper-parameter tuning in this repository, or are the claims of these studies partially wrong?
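For concreteness, here is a minimal sketch of the comparison I have in mind, based on my reading of the paper's EL2N score (the L2 norm of the softmax error, averaged over several briefly trained models). This is not the repository's exact implementation; the function names (`el2n_scores`, `select_coreset`) and structure are purely illustrative:

```python
import torch
import torch.nn.functional as F

def el2n_scores(models, loader, device="cpu"):
    """EL2N score as described in "Deep Learning on a Data Diet":
    ||softmax(f(x)) - onehot(y)||_2, averaged over several models
    trained briefly from different random seeds."""
    per_model = []
    for model in models:
        model.eval().to(device)
        scores = []
        with torch.no_grad():
            # loader must NOT shuffle, so example order stays aligned
            for x, y in loader:
                p = F.softmax(model(x.to(device)), dim=1)
                onehot = F.one_hot(y.to(device), p.size(1)).float()
                scores.append((p - onehot).norm(dim=1))
        per_model.append(torch.cat(scores))
    return torch.stack(per_model).mean(0)  # average across seeds

def select_coreset(scores, fraction, random_baseline=False):
    """Keep the top `fraction` of examples by score, or a random
    subset of the same size as the baseline being compared against."""
    k = int(fraction * scores.numel())
    if random_baseline:
        return torch.randperm(scores.numel())[:k]
    return scores.topk(k).indices
```

The question is about exactly this comparison at small `fraction` (e.g. 0.01 to 0.1): here the top-scoring subset reported in this repository falls below the `random_baseline=True` subset, contrary to the paper's claim.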