Mirror of https://github.com/papers-we-love/papers-we-love.git
Synced 2024-10-27 20:34:20 +00:00
fix machine-learning readme formatting
parent 5b528a5523
commit 1dd8434cdb
@@ -1,5 +1,6 @@

# Machine Learning

## External Papers

* [Top 10 algorithms in data mining](http://www.cs.uvm.edu/~icdm/algorithms/10Algorithms-08.pdf)
@@ -49,6 +50,11 @@

  This seminal paper introduces a method to distill information from an ensemble of neural networks into a single model.

* [Truncation of Wavelet Matrices: Edge Effects and the Reduction of Topological Control](https://reader.elsevier.com/reader/sd/pii/0024379594000395?token=EB0AA78D59A9648480596F018EFB72E0A02FD5FA70326B24B9D501E1A6869FE72CC4D97FA9ACC8BAB56060D6C908EC83) by Freedman

  In this paper, Michael Hartley Freedman applies Robion Kirby's "torus trick", via wavelets, to the problem of compression.

## Hosted Papers

* :scroll: **[A Sparse Johnson-Lindenstrauss Transform](dimensionality_reduction/a-sparse-johnson-lindenstrauss-transform.pdf)**
@@ -63,14 +69,11 @@

  *Bourgain, Jean, and Jelani Nelson. "Toward a unified theory of sparse dimensionality reduction in Euclidean space." arXiv preprint arXiv:1311.2542; accepted in an AMS journal but unpublished at the moment (2013). Available: http://arxiv.org/abs/1311.2542*

* :scroll: **[Truncation of Wavelet Matrices: Edge Effects and the Reduction of Topological Control](https://reader.elsevier.com/reader/sd/pii/0024379594000395?token=EB0AA78D59A9648480596F018EFB72E0A02FD5FA70326B24B9D501E1A6869FE72CC4D97FA9ACC8BAB56060D6C908EC83)** by Freedman

  In this paper, Michael Hartley Freedman applies Robion Kirby's "torus trick", via wavelets, to the problem of compression.

- * :scroll: **[Understanding Deep Convolutional Networks](https://github.com/papers-we-love/papers-we-love/blob/master/machine_learning/Understanding-Deep-Convolutional-Networks.pdf)** by Mallat
+ * :scroll: **[Understanding Deep Convolutional Networks](Understanding-Deep-Convolutional-Networks.pdf)** by Mallat

  Stéphane Mallat proposes a model by which renormalisation can identify self-similar structures in deep networks. [This video of Curt McMullen discussing renormalization](https://www.youtube.com/watch?v=_qjPFF5Gv1I) can help with more context.

- * :scroll: **[General self-similarity: an overview](https://github.com/papers-we-love/papers-we-love/blob/master/machine_learning/General-self-similarity--an-overview.pdf)** by Leinster
+ * :scroll: **[General self-similarity: an overview](General-self-similarity--an-overview.pdf)** by Leinster

-   Dr Leinster's paper provides a concise, straightforward, picture of self-similarity, and its role in renormalization.
+   Dr. Leinster's paper provides a concise, straightforward picture of self-similarity and its role in renormalization.