
Machine Learning

External Papers

Hosted Papers

  • 📜 A Sparse Johnson-Lindenstrauss Transform

    The JLT is still computationally expensive for many applications, so one goal is to minimize the number of operations needed for the projection (a matrix multiplication). This paper shows that an O(k log d) algorithm (roughly (log d)^2 operations when k = O(log d)) may be attainable, by showing that very sparse, structured random matrices can still provide the JL guarantee on pairwise distances. A minimal sketch of the idea appears after this list.

    Dasgupta, Anirban, Ravi Kumar, and Tamás Sarlós. "A Sparse Johnson–Lindenstrauss Transform." Proceedings of the Forty-Second ACM Symposium on Theory of Computing (STOC). ACM, 2010. Available: https://arxiv.org/abs/1004.4240

  • 📜 Toward a unified theory of sparse dimensionality reduction in Euclidean space

    This paper attempts to lay out a general mathematical framework (in terms of convex analysis and functional analysis) for sparse dimensionality reduction. The first author is a Fields Medalist interested in applying techniques from Banach space theory to this problem. It is a very technical paper that attempts to answer the question, "when does a sparse embedding exist deterministically?" (i.e., without drawing random matrices). An informal statement of the guarantee it studies appears after this list.

    Bourgain, Jean, and Jelani Nelson. "Toward a unified theory of sparse dimensionality reduction in Euclidean space." arXiv preprint arXiv:1311.2542 (2013); accepted to an AMS journal but unpublished at the time of writing. Available: https://arxiv.org/abs/1311.2542
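
To make the sparsity idea from the first paper concrete, here is a minimal Python sketch of a sparse JL-style projection. It is a simplified relative of the paper's construction, not the construction itself: every column of the k × d matrix gets s random ±1/√s entries, so applying it costs O(s) work per nonzero input coordinate instead of O(k). All dimensions and the distortion check below are illustrative choices, not the paper's parameters.

```python
import numpy as np
from itertools import combinations

def sparse_jl_matrix(k, d, s, rng):
    """k x d sparse JL-style matrix: each column has exactly s nonzero
    entries, each +/- 1/sqrt(s), at uniformly chosen rows (a simplified
    relative of the construction analysed in the paper)."""
    A = np.zeros((k, d))
    for col in range(d):
        rows = rng.choice(k, size=s, replace=False)  # s distinct random rows
        signs = rng.choice([-1.0, 1.0], size=s)      # random signs
        A[rows, col] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
d, k, s = 10_000, 256, 8          # ambient dim, target dim, nonzeros per column
X = rng.standard_normal((20, d))  # 20 sample points to embed

A = sparse_jl_matrix(k, d, s, rng)
Y = X @ A.T                       # project: A has only s*d nonzeros

# How well are pairwise distances preserved?
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(len(X)), 2)]
print(f"pairwise distance ratios in [{min(ratios):.3f}, {max(ratios):.3f}]")
```

In practice the dense loop building `A` would be replaced by a sparse-matrix representation (e.g. `scipy.sparse`), which is where the computational savings actually come from.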
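For orientation on the Bourgain–Nelson paper, the guarantee it studies can be restated informally as follows. This is a paraphrase under the usual normalization (points on the unit sphere), not the paper's exact theorem statement:

```latex
% A sparse matrix \Phi \in \mathbb{R}^{m \times d} embeds a set
% T \subseteq S^{d-1} with distortion \varepsilon when
\[
  \sup_{x \in T} \left| \, \| \Phi x \|_2^2 - 1 \, \right| < \varepsilon .
\]
% The paper controls the achievable target dimension m and column
% sparsity in terms of the geometry of T, e.g. its Gaussian mean width
\[
  g(T) = \mathbb{E} \, \sup_{x \in T} \langle g, x \rangle ,
  \qquad g \sim \mathcal{N}(0, I_d) .
\]
```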