A bunch of text formatted with *italics*, **bold**, ***both at once***, underline, and ~~strikethrough~~

| a | b | c |
| :--- | ---- | ---- |
| 1 | 2 | 3 |
| 4 | 5 | 6 |
...
TL;DR: I train many different sparse autoencoders on a mixture of MLP output activations, trying different methods to improve their performance. The more invasive methods don't do much, but some model...
- why a problem? exceeds memory
- core of the solution
- link to js
- Outline the problem
- Gumbel distribution
- show how it differs from sqrt(n)
- not really necessary disclaimer
- Naïve approach with np...
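The "Gumbel distribution" and "differs from sqrt(n)" notes above presumably refer to extreme-value behavior: the maximum of n i.i.d. standard normal draws concentrates near sqrt(2 ln n) and fluctuates like a Gumbel distribution, rather than growing like sqrt(n). A minimal numpy sketch of the naïve all-at-once approach (variable names and sample sizes are my own, assumed for illustration; the full-matrix draw is also where the memory cost the notes mention comes from):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 500  # assumed sizes for illustration

# Naive approach: materialize all trials x n samples at once
# (memory grows linearly with trials * n), then take per-trial maxima.
# The maxima are approximately Gumbel-distributed around sqrt(2 ln n).
maxima = rng.standard_normal((trials, n)).max(axis=1)

print(f"mean of maxima: {maxima.mean():.2f}")
print(f"sqrt(2 ln n):   {np.sqrt(2 * np.log(n)):.2f}")
print(f"sqrt(n):        {np.sqrt(n):.2f}")
```

The point of the comparison: the typical maximum sits near 4 for n = 10,000, close to sqrt(2 ln n) ≈ 4.3 and nowhere near sqrt(n) = 100.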