Folks who are into this may also like "Type Safe Neural Networks in Haskell": https://blog.jle.im/entry/practical-dependent-types-in-haske...
I'm also starting work on a set of bindings to libdarknet for Idris with similar properties.
Note that the article has a comment by Yann LeCun (hopefully it's not an impersonator).
You may also be interested in Differentiable Neural Computers:
Discussed at the time: https://news.ycombinator.com/item?id=10165716
Some related work also includes:
- "Strongly-Typed Recurrent Neural Networks" http://proceedings.mlr.press/v48/balduzzi16.pdf
- Principled Approaches to Deep Learning workshop http://padl.ws/ (maybe this is in line with the meta-point of Colah's article)
- Haskell accelerate library https://github.com/AccelerateHS/accelerate/. Not deep learning per se but perhaps some of the ideas are applicable
This covers some of the same topics, plus the rapidly expanding number of frameworks, SIMD/SIMT backends, etc.: https://julialang.org/blog/2017/12/ml&pl
A 20-page tutorial on why RNNs are tricky: https://arxiv.org/abs/1801.01078
There's also this take on differentiating datatypes: https://codewords.recurse.com/issues/three/algebra-and-calcu...
Reading this makes a lot of the operations in colah's article feel more intuitive. (To me, at least. I'm no expert here.)
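If it helps to see it concretely, here is a minimal Haskell sketch of the "derivative of a datatype" idea from that article (the names ListCtx, plug, and contexts are just mine, not from the article): differentiating the list type with respect to its element type gives a one-hole context, i.e. the elements before the hole and the elements after it, and plugging an element back in recovers the list.

    -- One-hole context ("zipper") for a list: List a ~ 1/(1 - a),
    -- so d/da (List a) ~ (List a)^2: the prefix before the hole
    -- (kept reversed for cheap construction) and the suffix after it.
    data ListCtx a = ListCtx [a] [a]
      deriving Show

    -- Plugging an element back into the hole rebuilds the original list.
    plug :: a -> ListCtx a -> [a]
    plug x (ListCtx before after) = reverse before ++ [x] ++ after

    -- Every way to point at one element of a list, paired with its context.
    contexts :: [a] -> [(a, ListCtx a)]
    contexts = go []
      where
        go _      []     = []
        go before (x:xs) = (x, ListCtx before xs) : go (x:before) xs

For example, contexts [1,2,3] yields three (element, context) pairs, and plug recovers [1,2,3] from each of them.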
The examples are odd because he doesn't incorporate any notion of differentiability.
So a Generating RNN is not quite like foldr, since foldr has no notion of differentiability.
One needs to show examples that pull in some kind of automatic-differentiation capability.
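For what it's worth, here is a rough sketch of what that could look like in Haskell, assuming Edward Kmett's ad package (Numeric.AD.grad): an encoding-style RNN written as a plain fold, polymorphic over Floating, so reverse-mode AD can differentiate a loss through it without the fold itself knowing anything about derivatives. The single-cell architecture and the [wH, wX, b] parameter packing are purely illustrative.

    import Numeric.AD (grad)  -- from the `ad` package

    -- An "encoding" RNN as a left fold over the input sequence.
    -- Kept polymorphic so an AD library can thread its own number
    -- type (dual numbers / a tape) through the computation.
    rnn :: Floating a => a -> a -> a -> [a] -> a
    rnn wH wX b = foldl step 0
      where
        step h x = tanh (wH * h + wX * x + b)  -- one recurrent cell

    -- Squared error for one (sequence, target) pair; the three scalar
    -- parameters are packed into a list so grad can traverse them.
    loss :: Floating a => [a] -> a -> [a] -> a
    loss xs target [wH, wX, b] = (rnn wH wX b xs - target) ^ 2
    loss _  _      _           = error "expected exactly [wH, wX, b]"

    -- Gradient of the loss with respect to [wH, wX, b]: the fold never
    -- mentions derivatives, reverse-mode AD supplies them.
    gradients :: [Double] -> Double -> [Double] -> [Double]
    gradients xs target params =
      grad (\ps -> loss (map realToFrac xs) (realToFrac target) ps) params

For example, gradients [0.5, -1.0, 2.0] 1.0 [0.1, 0.2, 0.0] returns the three partial derivatives of the loss at that parameter setting.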