# Machine Learning

## External Papers

## Hosted Papers

- 📜 **A Sparse Johnson-Lindenstrauss Transform**

  The JLT is still computationally expensive for a lot of applications, and one goal is to minimize the overall number of operations needed for the underlying matrix multiplication. This paper showed that an O(k log d) algorithm may be attainable: very sparse, structured random matrices (with roughly (log d)^2 nonzero entries per column) can still provide the JL guarantee on pairwise distances. A toy sketch of such a sparse projection appears after this list.

  Dasgupta, Anirban, Ravi Kumar, and Tamás Sarlós. "A Sparse Johnson-Lindenstrauss Transform." Proceedings of the Forty-Second ACM Symposium on Theory of Computing (STOC 2010). ACM, 2010. Available: https://arxiv.org/abs/1004.4240

- 📜 **Towards a unified theory of sparse dimensionality reduction in Euclidean space**

  This paper attempts to lay out a general mathematical framework (in terms of convex analysis and functional analysis) for sparse dimensionality reduction. The first author is a Fields Medalist interested in taking techniques for Banach spaces and applying them to this problem. It is a very technical paper that tries to answer the question, "when does a sparse embedding exist deterministically?" (i.e., without drawing random matrices). The pairwise-distance guarantee at stake is written out after this list.

  Bourgain, Jean, and Jelani Nelson. "Toward a unified theory of sparse dimensionality reduction in Euclidean space." arXiv preprint arXiv:1311.2542 (2013); accepted in an AMS journal but unpublished at the time of writing. Available: http://arxiv.org/abs/1311.2542

- 📜 **Truncation of Wavelet Matrices: Edge Effects and the Reduction of Topological Control** by Freedman

  In the Simons Foundation interview of Robion Kirby by Michael Hartley Freedman, Freedman mentions this paper, in which he applied Kirby's "torus trick" to compression via wavelets.

- 📜 **Understanding Deep Convolutional Networks** by Mallat

  Stéphane Mallat thinks renormalisation has something to do with why deep nets work.

- 📜 **General self-similarity: an overview** by Leinster

  Again on the topic of renormalisation: Dr Leinster has a nice, simple picture of self-similarity.
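
The "JL guarantee on pairwise distances" that the two dimensionality-reduction papers above are after is the usual (1 ± ε)-distortion condition: a linear map Φ : R^d → R^k embeds a finite point set X when

```latex
(1-\varepsilon)\,\lVert u - v \rVert_2^2
\;\le\; \lVert \Phi u - \Phi v \rVert_2^2
\;\le\; (1+\varepsilon)\,\lVert u - v \rVert_2^2
\qquad \text{for all } u, v \in X,
```

and the classical Johnson-Lindenstrauss lemma says a random Φ with k = O(ε⁻² log |X|) achieves this with high probability.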
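
As a concrete illustration of the sparse-JL idea, here is a minimal NumPy sketch: each column of the projection matrix gets a handful of random ±1/√s entries, so applying it is much cheaper than a dense Gaussian projection. This only mimics the shape of the construction: the exact distribution in Dasgupta, Kumar, and Sarlós is more carefully structured, and the parameters `s` and `k` below are illustrative choices, not the paper's.

```python
import numpy as np

def sparse_jl_matrix(k, d, s, rng):
    """Return a k x d matrix with exactly s nonzeros (each ±1/sqrt(s)) per column."""
    Phi = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)  # s random rows for column j
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, d, k, s = 20, 5_000, 300, 8
X = rng.normal(size=(n, d))                # n points in R^d
Y = X @ sparse_jl_matrix(k, d, s, rng).T   # their images in R^k

# Compare all pairwise distances before and after projection.
orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
proj = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
mask = orig > 0
print("max relative distortion:", np.abs(proj[mask] / orig[mask] - 1).max())
```

Multiplying by this matrix touches only s·d nonzero entries rather than k·d, which is where the savings over a dense projection come from.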