Commit Graph

20 Commits

Author SHA1 Message Date
inclusive-coding-bot
47e53b3859 Switch to gender neutral terms 2022-03-31 04:52:14 -04:00
bolshoytoster
f28b9c9f6c
Fixed broken links (#641)
I went through #638 and fixed the broken links in there.

There was one I couldn't find, and it wasn't in the wayback machine
so I deleted it.
2021-12-31 11:52:14 -05:00
Sean Broderick
276ecb8644 fix link in machine_learning (Top 10 algorithms in data mining) 2020-03-28 00:25:52 -04:00
Sean Broderick
1dd8434cdb fix machine-learning readme formatting 2019-12-25 23:50:45 -05:00
NewAlexandria
39fd04bdce Math papers from original isomorphisms PR (#587)
* Add gitter for community.

* Update CODE_OF_CONDUCT.md

* Add statecharts paper in a new systems modeling category (#565)

* Rename "paradigm" and "plt" folders for findability (#561)

* rename "language-paradigm" folder for findability

lang para pluralize

* rename PLT => languages-theory

* fixed formatting

* group pattern-* related papers (#564)

* combine clustering algo into pattern matching

* rename stringology with the pattern_ prefix

* improved the README header info for paper related to patterns

* consolidate org-sim and sw-eng dirs (#567)

* consolidate org-sim and sw-eng dirs
* typo and links

* Fixed link (#568)

* Update README.md
* Fixed A Unified Theory of Garbage Collection link

* Verification faults dirs (#566)

* consolidate program verification and program fault detection listings.
* faults and validation gets header info

* self-similarity by Tom Leinster

Again on the topic of renormalisation. Dr Leinster has a nice, simple picture of self-similarity.

* added new papers in Machine Learning dir.  fixed-up references
Truncation of Wavelet Matrices
Understanding Deep Convolutional Networks
General self-similarity: an overview

cleanup url files (wrong repo format)

* what has sphere packing to do with compression?

• role of E8 & Leech lattice in optimal codes
• mathematically best compression was never used
• icosahedron

* surfaces ∑

I show this paper to college freshmen because
• it’s pictorial
• it’s about an object you mightn’t have considered mathematical
• no calculus, crypto, ML, or pretentious notation
• it’s short
* it’s a classification proof: “How can it be that you know something about _all possible_ X, even the x ∈ X you haven’t seen yet?”

* good combinatorics

Programmers are used to counting boring things. Why not count something more interesting for a change?

* added commentaries from commit messages. More consistent formatting.

* graphs

Programmers work with graphs often (file system, greplin, trees, "graph isomorphism problem" (who cares)). But have you ever tried to construct a simpler building-block (basis) with which graphs could be built? Or at least a different building block to build the same old things.

This <10-page paper also uses 𝔰𝔩₂(ℂ), a simple mathematical object you haven’t heard of, but which is a nice lead-in to an area of real mathematics—rep theory—that (1) contains actual insights (1a) that you aren’t using (2) is simple (3) isn’t pretentious.

* from dominoes to hexagons

why is this super-smart guy interested in such simple drawings?

* sorting

You do sorting all the time. Are there smart ways to organise sub-sorts?

* distributed robots!!

Robots! And varying your dimensionality across a space. But also — distributed robots!

* knitting

Get into knitting.

Learn a data structure that needs to be embedded in 3D to do its thing.

Break your mind a bit.

* female genius

* On “On Invariants of Manifolds”

2 pages about how notation and algorithms are inferior to clarity and simplicity.

* pretty robots

You’ll understand calculus better after looking at these pretty 75 pages.

* Farey

Have another look at ye olde Int class.

* renormalisation

Stéphane Mallat thinks renormalisation has something to do with why deep nets work.

* the torus trick, applied

In the Simons Foundation’s interview of Robion Kirby by Michael Hartley Freedman, Freedman mentions this paper, in which MHF applied RK’s “torus trick” to compression via wavelets.

* renormalisation

Here is a video of a master (https://press.princeton.edu/titles/5669.html) talking about renormalisation, which S. Mallat has suggested is key to why deep learning works.

* Cartan triality + Milnor fibre

This is a higher-level paper, but still a survey (so more readable). It ties together disparate areas like Platonic solids (A-D-E), Milnor’s exceptional fibre, and algebra.

It has pictures and you’ll get a better sense of what mathematics is like from skimming it.

* Create see.machine.learning

* tropical geometry

Recently there have been some papers posted about the tropical geometry of neural nets. The name “tropical” is also said to derive from computer science. This is a good introduction.

* self-similarity by Tom Leinster

Again on the topic of renormalisation. Dr Leinster has a nice, simple picture of self-similarity.

* rename papers accordingly, and add descriptive info

remove dup maths papers

* fixed crappy explanations

* improved the annotations for papers in the Machine Learning readme

* remediated descriptive wording for papers in the mathematics section

* removed local copy and added link to Conway Zip Proof

* removed local copy and added link to Packing of Spheres - Sloane

* removed local copy and added link to Algebraic Topo - Hatcher

* removed local copy and added link to Topo of Numbers - Hatcher

* removed local copy and added link to Young Tableaux - Yong

* removed local copy and added link to Elements of A Topo

* removed local copy and added link to Truncation of Wavelet Matrices

Co-authored-by: Zeeshan Lakhani <202820+zeeshanlakhani@users.noreply.github.com>
Co-authored-by: Wiktor Czajkowski <wiktor.czajkowski@gmail.com>
Co-authored-by: keddad <keddad@yandex.ru>
Co-authored-by: i <isomorphisms@sdf.org>
2019-12-25 23:36:58 -05:00
Chandan Singh
b5614ed1cb Add new machine learning papers (#546) 2019-05-27 19:10:44 -04:00
Paige Bailey
6ce7315e2d Multiple Narrative Disentanglement: Unraveling Infinite Jest (#504) 2018-01-20 11:22:33 -05:00
Arunav Sanyal
15edd14773 Adding the trusting classifiers paper from the Seattle papers we love (#466)
chapter, presented in July 2017.
2017-07-09 16:26:39 -04:00
Eric Leung
6b8377f375 Fix machine learning paper link and spelling (#419) 2016-09-29 14:45:45 -07:00
Visgean Skeloru
e1b14ffac3 Updated Random forests paper location
the old one was not responding and https://www.stat.berkeley.edu/~breiman/papers.html links to new location.
2016-03-05 15:56:22 +01:00
Zachary Jones
d2acf0fc3b Update to all READMEs for hosted content
reorganization of so-called historical papers
2015-10-07 15:12:22 -04:00
Kunal Vyas
ef439a32c5 Added Applications of Machine Learning to Location Data 2015-09-10 21:00:03 -04:00
Sean Broderick
b09a9ffeb9 remove paper with prohibitive copyright
Also, fix readme formatting in two files.
2015-06-17 13:00:13 -04:00
Zachary Jones
72e7c59f29 machine learning additions. sublinear README.
The Fast Johnson-Lindenstrauss Transform
A Sparse Johnson-Lindenstrauss Transform
Towards a unified theory of sparse dimensionality reduction in Euclidean space
2015-05-30 23:30:36 -04:00
Zachary Jones
b8e2a19afa Merge in 'tchira/master' 2015-05-30 22:55:29 -04:00
Bryan Cardillo
8c8741f8f4 Add 'Support-Vector Networks' paper. 2015-04-08 19:01:58 -04:00
Bryan Cardillo
16b7c881a8 Add 'Conditional Random Fields' paper. 2015-04-07 21:28:45 -04:00
Tarun Chitra
cb2b1f8ba3 Fixed error with name 2015-02-03 00:28:09 -05:00
Tarun Chitra
d5573dbb45 Adding some seminal and recent papers on dimensionality reduction; I would be willing to give a talk about this topic! 2015-02-03 00:25:08 -05:00
Ryan Swanstrom
9d4b35f497 Create README.md
just the initial folder structure

Update README.md

added 3 links

Update README.md

update links to academic sites
2014-04-06 23:04:27 -05:00