Understand ML Papers Quickly

5 simple questions by Eric Jang for understanding ML papers quickly.

  1. What are the inputs to the function approximator?
  2. What are the outputs of the function approximator?
  3. What loss supervises the output predictions? What assumptions about the world does this particular objective make?
  4. Once trained, what is the model able to generalize to, with regard to input/output pairs it hasn’t seen before?
  5. Are the claims in the paper falsifiable?
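Questions 1–3 can be made concrete with a toy supervised-learning sketch (a hypothetical example, not from any particular paper): a one-parameter linear model fit by gradient descent, with comments marking where each question applies.

```python
# Q1: inputs to the function approximator -- scalar x values.
xs = [0.0, 1.0, 2.0, 3.0]
# Targets for those inputs; the true function here is y = 2x.
ys = [0.0, 2.0, 4.0, 6.0]

# Q2: outputs of the function approximator -- scalar predictions w * x.
w = 0.0  # single learnable parameter

for _ in range(200):
    # Q3: the loss supervising the predictions -- mean squared error,
    # which implicitly assumes a deterministic target with Gaussian-like noise.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad

print(round(w, 3))  # converges toward 2.0
```

Question 4 would then ask how the fitted model behaves on x values outside the training range, and question 5 whether any claim about that behavior could be tested and shown false.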

Food and Gut