Differential geometry is a mathematical discipline that uses the tools of calculus and algebra to study problems in geometry. Sounds daunting? Yeah, same. So this series of articles is my attempt to break it down into simpler questions and even simpler answers, because I am not from a purely mathematical background myself, which made it difficult to learn and apply these ideas intuitively in my current research (https://ozamanan.github.io).
Coming back to differential geometry: it has applications in physics, chemistry, biology, computer-aided graphical design, computer vision, image processing, machine learning and much more. I will be focusing on it from the point of view of machine learning.
I am going to spare you most of the mathematics for this article. So what is a manifold? That is a bigger question. Imagine being a point on a straight or curved line. How would you move in your “line-world”? Either forward or backward, right? What if you were a point on a plane? You could move in any direction and still be part of the plane. You can keep adding dimensions; your degrees of freedom would keep increasing, and your “world” would be a manifold in some topological space. (I’ll be talking about topology as well, but in later articles.)
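To make the “line-world” intuition concrete, here is a tiny sketch (using NumPy, purely for illustration) of a circle: a curved line that lives in 2-D space but has only one degree of freedom, the angle you are at.

```python
import numpy as np

# A circle is a 1-D manifold embedded in 2-D space: a point on it is
# described by two coordinates (x, y), but it has only ONE degree of
# freedom -- the angle t. Moving "forward or backward" in the
# line-world just means increasing or decreasing t.
t = np.linspace(0, 2 * np.pi, 100)   # the single intrinsic coordinate
x, y = np.cos(t), np.sin(t)          # the 2-D embedding

# No matter how t changes, every point stays on the unit circle:
assert np.allclose(x**2 + y**2, 1.0)
```

The same idea scales up: a sphere's surface sits in 3-D space but a point on it has only two degrees of freedom.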
That was a layman’s explanation of what a manifold is. To be a little more precise: a manifold is a topological space in which every point has a neighbourhood that is homeomorphic to Euclidean space. In other words, it looks flat locally, even if it is curved globally.
Now, a homeomorphism is a continuous deformation of a space: you can change its shape geometrically by stretching and bending, but without cutting or gluing. If you can turn one shape into another this way, the initial structure and the new one are said to be homeomorphic.
Now, Riemannian manifolds are smooth manifolds equipped with a Riemannian metric: a way of measuring lengths and angles on the manifold itself. With it you can measure the length of any curve along the manifold, and the shortest such path from one point to another is called a geodesic. Well, that is the simplest I could make it without using math. 🙂
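A toy example, assuming the unit sphere as our manifold: the straight-line (Euclidean) distance between two points cuts through the sphere, while the geodesic distance follows the surface along a great circle.

```python
import numpy as np

# Two points on the unit sphere (a 2-D Riemannian manifold).
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])

# Straight-line (Euclidean) distance cuts through the sphere's interior...
chord = np.linalg.norm(p - q)                    # sqrt(2) ~ 1.414

# ...while the geodesic distance stays on the surface: the arc of a
# great circle, given by the angle between the two unit vectors.
geodesic = np.arccos(np.clip(p @ q, -1.0, 1.0))  # pi/2 ~ 1.571

# The geodesic can never be shorter than the chord.
assert geodesic >= chord
```

The gap between these two numbers is exactly why “distance” on a manifold is not the same as distance in the surrounding space.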
Machine learning is highly dependent on data of one kind or another. This data is usually high-dimensional, and training models in so many dimensions comes with its own set of challenges for training time, accuracy, etc. This is where computational geometric learning (CGL), and in particular manifold learning (a concept within CGL), steps in. Manifold learning is nonlinear dimensionality reduction: it assumes the high-dimensional data actually lies on a lower-dimensional manifold, so we can train models in those lower dimensions, which improves training time and accuracy and captures the data distribution more faithfully.
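A minimal sketch of this idea, using scikit-learn’s classic swiss-roll dataset and the Isomap algorithm (one of several manifold learning methods; the parameter choices here are just illustrative):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# The swiss roll: points in 3-D space that really live on a rolled-up
# 2-D sheet, i.e. a 2-D manifold embedded in 3 dimensions.
X, color = make_swiss_roll(n_samples=1000, random_state=0)

# Isomap approximates geodesic distances on the data manifold (via
# shortest paths in a nearest-neighbour graph) and then embeds the
# points in 2 dimensions while trying to preserve those distances.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X.shape)          # (1000, 3)  original, high-dimensional data
print(embedding.shape)  # (1000, 2)  after manifold learning
```

Note that Isomap works with geodesic distances rather than straight-line ones, which is precisely the Riemannian idea from above applied to data.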
Now, for obvious reasons, we want our training process to be as short as possible and our data manifolds to be accurate, which is why Riemannian manifolds and their geodesic property are so useful. This may sound like incomplete knowledge, and I agree that it is. But the new questions that arise from this claim require a lot more reading and math, which I will cover in future articles. I am giving links to a few references that I went through to understand these concepts and think further.