# Machine Learning

## External Papers

## Hosted Papers

  • 📜 A Sparse Johnson-Lindenstrauss Transform

    The JLT is still computationally expensive for many applications, and one goal is to minimize the total number of operations needed for the underlying matrix multiplication. This paper showed that very sparse, structured random matrices can still provide the JL guarantee on pairwise distances, suggesting that an O(k log d) algorithm (e.g., on the order of (log d)^2 operations when k = O(log d)) may be attainable. A minimal code sketch of a sparse random projection appears after this list.

    Dasgupta, Anirban, Ravi Kumar, and Tamás Sarlós. "A Sparse Johnson–Lindenstrauss Transform." Proceedings of the Forty-Second ACM Symposium on Theory of Computing (STOC). ACM, 2010. Available: arXiv:1004.4240

  • 📜 Towards a unified theory of sparse dimensionality reduction in Euclidean space

    This paper attempts to lay out a general mathematical framework (in terms of convex analysis and functional analysis) for sparse dimensionality reduction. The first author, Jean Bourgain, is a Fields Medalist interested in applying techniques from Banach-space theory to this problem. It is a very technical paper that attempts to answer the question, "When does a sparse embedding exist deterministically?" (i.e., without drawing random matrices). A standard statement of the JL-type guarantee at stake appears after this list.

    Bourgain, Jean, and Jelani Nelson. "Toward a Unified Theory of Sparse Dimensionality Reduction in Euclidean Space." arXiv preprint arXiv:1311.2542 (2013); accepted to an AMS journal but unpublished at the time of writing. Available: http://arxiv.org/abs/1311.2542

  • 📜 Understanding Deep Convolutional Networks by Mallat

    Stéphane Mallat proposes a model by which renormalization can identify self-similar structures in deep networks. This video of Curt McMullen discussing renormalization can provide further context.

  • 📜 General self-similarity: an overview by Leinster

    Dr. Leinster's paper provides a concise, straightforward picture of self-similarity and its role in renormalization.
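
For reference, here is a standard statement of the Johnson–Lindenstrauss guarantee that the two dimensionality-reduction papers above concern. This is the generic textbook formulation, not a quotation from either paper:

```latex
% Johnson–Lindenstrauss lemma (standard form):
% for any \varepsilon \in (0, 1) and any n points
% x_1, \dots, x_n \in \mathbb{R}^d, there exists a linear map
% A : \mathbb{R}^d \to \mathbb{R}^k with k = O(\varepsilon^{-2} \log n)
% such that, for all i, j,
\[
  (1 - \varepsilon)\,\lVert x_i - x_j \rVert_2^2
  \;\le\; \lVert A x_i - A x_j \rVert_2^2
  \;\le\; (1 + \varepsilon)\,\lVert x_i - x_j \rVert_2^2 .
\]
```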
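To make the sparse-projection idea concrete, here is a minimal Python sketch. It uses a simple Achlioptas-style sparse sign matrix rather than the hashing-based construction of Dasgupta–Kumar–Sarlós; the function name, `density` parameter, and dimensions are illustrative choices, not anything specified in the paper:

```python
import numpy as np
from scipy.spatial.distance import pdist

def sparse_jl_matrix(k, d, density=1/3, rng=None):
    """k x d matrix whose entries are 0 with probability 1 - density
    and +-1/sqrt(density * k) otherwise, so that E||Px||^2 = ||x||^2.
    (Achlioptas-style sketch, not the DKS hashing construction.)"""
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=(k, d))
    mask = rng.random((k, d)) < density          # keep ~density of entries
    return signs * mask / np.sqrt(density * k)

# Usage: embed n points from R^d into R^k, then inspect how well
# pairwise Euclidean distances are preserved (ratios should hug 1).
rng = np.random.default_rng(0)
n, d, k = 50, 10_000, 500
X = rng.standard_normal((n, d))
P = sparse_jl_matrix(k, d, rng=rng)
Y = X @ P.T

ratios = pdist(Y) / pdist(X)
print(f"pairwise distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```

With roughly a third of the entries nonzero, the projection costs about a third of the multiplications of a dense JL matrix while still concentrating pairwise distances near their original values; the papers above push this sparsity much further while keeping the guarantee.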