
Ep. 5: Diving into Kubernetes

All Things Devops Podcast

English - December 11, 2017 - 27 minutes


In this episode, Neependra, founder of Cloudyuga, and Rahul discuss Neependra's experience of getting started with container technologies like Docker and Kubernetes. Neependra focuses on the challenges involved in using persistent volumes and how to overcome them with third-party solutions. They also discuss how one can start learning these tools and the best places to begin.
Neependra further shares his experience of deciding to move into training people full time. He also believes that, going forward, containers will be the base for deployment models such as serverless.

Links Mentioned in This Episode

BigBinary on Twitter
Cloudyuga on Twitter
Rahul on Twitter
Neependra on Twitter
Docker Cookbook
Kubernetes
Containerd
CRI-O
Rook

Transcript

[0:00:00] Rahul: Hello and welcome to a new episode of the "All Things DevOps" podcast! Today we have Neependra Khare, founder of "Cloud Yuga Technologies." Welcome, Neependra, to the "All Things DevOps" podcast! I'd like you to introduce yourself to our audience.

[0:00:19] Neependra Khare: Thanks, Rahul! As you said, my name is Neependra Khare and I'm the founder of "Cloud Yuga Technologies." I've been in this industry for around 13 years now. I've seen the shift when people moved from bare metal to VMs, and about three years back I observed that people were going to make a similar shift from VMs to containers. Luckily I got a chance to write a book on Docker, the "Docker Cookbook," which was published in 2015, and I also became one of the organizers of the Docker meetup group in Bangalore. While writing the book and running the meetups, I realized that there is a skill gap, so I decided to leave my full-time job at Red Hat and started my own company, "Cloud Yuga Technologies." We provide consulting and training around container technologies like Docker, Kubernetes and so on. Recently I also published a course on edX, "Introduction to Kubernetes" by the Linux Foundation, which I think has more than 20,000 people enrolled around the world.

[0:01:20] Rahul: That sounds really amazing! I have been following you since my college days. I think at that time you were at Red Hat, mentoring some of the college interns, and I got a chance to meet you, I think, years back. So that was nice. Recently I saw your work on containers like Docker and Kubernetes, and you have also traveled almost around the globe for your workshops. So I just want to ask: how did you start your Docker journey back in 2014 or 2015, when Docker was just the new kid on the block?

[0:02:00] Neependra Khare: When I first heard about Docker I was at Red Hat, and somebody on a mailing list was saying that Docker is something new you should check out. So I just started researching Docker and saw that there was a meetup starting in Bangalore, the first Docker meetup, organized by Shippable, and I went and joined it. Before that meetup I did my homework and learned what Docker and containers are and so forth. I had a pretty good discussion with the founder of Shippable, and that fed my curiosity about the subject. As Red Hat is an open-source company, we started organizing Docker meetups at the Red Hat office. I also started writing blogs on containers, Docker and so forth, and then PacktPub approached me to write a book on Docker and I said, "Why not?" That's where I started, and while writing the book I got to learn a lot of stuff. That was my starting point in the container world.

[0:03:03] Rahul: Interesting. So which was your first orchestration framework for deploying containers? Was it Docker Swarm, Kubernetes or something else?

[0:03:14] Neependra Khare: I can't recall exactly, but I think it was OpenShift. A lot of people may not know this, but Red Hat has been using containers for a long time, and I had used OpenShift as a Red Hat employee. So in that sense I used OpenShift at the very beginning of containers. From the orchestration perspective, I think I used Docker Swarm first and then played with Kubernetes in parallel because of the book. So you could say I tried Docker Swarm and Kubernetes in parallel.

[0:03:46] Rahul: OK, awesome. So looking at your Kubernetes 101 course, which is featured by the Linux Foundation as well, you've already mentioned around 20k people have enrolled for it, right? I think that course has been public since 2015 or 2016, and at that time Kubernetes was really not mature or production-ready. People were just starting to dive into Kubernetes; some of them tried putting it into production, some of them held back. So what was the point when you decided that Kubernetes was going to be the orchestration framework for containers, rather than other frameworks like ECS, Apache Mesos or Docker Swarm? Because as of now around 60% of people are using Kubernetes as their container orchestration tool.

[0:04:47] Neependra Khare: Yeah, good question. Just to correct you a little bit, there are two courses on edX: one on cloud technologies, which was published in 2016, and the Kubernetes course, which was published in July 2017. So the question was when I decided that Kubernetes was going to be the orchestrator. I think it was earlier this year, when I was diving deeper into both Kubernetes and Docker Swarm. I was seeing that both of these ecosystems were learning from each other: with Docker Swarm, setting up orchestration was pretty easy, but with Kubernetes it was very difficult, right? Then Kubernetes learned from that and made kubeadm and so forth, by which we can start a Kubernetes cluster quickly. But when I compared Docker Swarm and Kubernetes, because I'm delivering classes to different clients, I realized that the constructs, the resources and the flexibility Kubernetes gives you are much better than Docker Swarm. And the kind of community support you see around Kubernetes is just beyond imagination. So based on these two things, the demand I'm seeing in the industry and the community support, I could see that Kubernetes was going to be the winner, and that's what we see now towards the end of this year: Kubernetes is the container orchestrator to go for.

[0:06:14] Rahul: Yeah, just adding to your facts: around 50k commits on GitHub, a large number of contributors, and one of the top five open-source projects on GitHub and in the open-source community. We have all agreed that Kubernetes is something really great. So talking about Kubernetes, you just mentioned kubeadm, which I believe is a Kubernetes cluster provisioning tool; there are some other tools as well. At least for us, provisioning a Kubernetes cluster for a production environment was not that easy, but once we got through it and Kubernetized our app deployment process, continuous integration and continuous deployment with these tools became easy, and it also works faster than the traditional architecture. But when we talk about Kubernetes there are multiple components: if we are self-hosting our Kubernetes cluster we have to manage our masters, and those have to be highly available, so we deploy a multi-master cluster. It is somewhat easier if you're on a public cloud like AWS, Azure, Google Cloud, or other clouds like DigitalOcean and Scaleway. But what tools would you recommend, or do you use, for bare metal and other platforms like OpenStack? Have you tried playing with bare metal and other cloud stacks?

[0:07:54] Neependra Khare: Most of my deployments have been on DigitalOcean and on AWS, and of course GKE just provisions a cluster quickly. But for bare metal, or for playing with a local system, I have been using kubeadm, and that's what I recommend: get started with kubeadm, and once people are comfortable with that, then we can go with kops, deploy the Kubernetes cluster on AWS and use it. That's what I keep recommending to people: first get familiar with kubeadm. kubeadm is almost there, though I think it's not fully GA yet; it still works and people are using it. On the cloud, if you go with AWS then I would recommend kops, but if you're going with Google Cloud then just use GKE and forget about the management part. And with the recent announcement at AWS re:Invent, once AWS launches its managed Kubernetes service I think it's going to be a real game changer in the cloud industry.

[0:08:58] Rahul: OK, we still have to wait, because I don't think it's generally available yet; it's only in preview, or available on request. So we have also been using kops, and we have also tried some other tools like Kubespray; I think it was named Kargo before.

[0:09:18] Neependra Khare: Right, right, right.
Rahul: So first, when we started container deployment, everywhere you go you'll hear that containers are first for stateless applications. But we had some services which were database services, or stateful apps, and for those we have to have some storage system on our container orchestration platform. With Kubernetes I think there are various strategies using volumes and storage systems. So how has your experience been handling storage and stateful apps on Kubernetes?

[0:09:55] Neependra Khare: Yeah, so the experience has not been bad so far; whatever we've deployed has been working. I think people need to understand the difference between stateful and stateless applications. As you mentioned, when we started the container journey people just thought about deploying stateless apps first and said we'll see about everything else later on. We have passed that phase now. We know that stateless works seamlessly in the container world, and we can use Kubernetes or Docker Swarm or whatever we want to use, but with stateful apps we have to save the state of the transaction, or whatever we've done, and that's where the need for storage comes in. There are different strategies we have tried in the past, or have been using so far. I can tell you about two use cases I've been working with: one is from the training perspective and one is for some of the customers where we have deployments. For training I'll just use hostPath or emptyDir volumes and use those to deploy the applications, but when it comes to production we use the persistent volumes that Kubernetes provides, either on AWS or on GKE. With persistent volumes we have been deploying a few applications there.
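
For readers following along, here is a minimal sketch of the two volume strategies Neependra describes, using the official Kubernetes Python client. The pod name, image, claim name and sizes are illustrative assumptions, not details from the episode.

```python
# Two volume strategies: an ephemeral emptyDir for demos/training, and a
# PersistentVolumeClaim against the cluster's default storage class
# (e.g. EBS on AWS or a persistent disk on GKE) for production.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig
core_v1 = client.CoreV1Api()

# Training/demo style: the volume lives and dies with the pod.
demo_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-emptydir"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="nginx:1.13",
            volume_mounts=[client.V1VolumeMount(name="scratch", mount_path="/data")],
        )],
        volumes=[client.V1Volume(name="scratch",
                                 empty_dir=client.V1EmptyDirVolumeSource())],
    ),
)
core_v1.create_namespaced_pod(namespace="default", body=demo_pod)

# Production style: a claim that the cloud provisioner fulfils with real storage.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```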

[0:11:16] Rahul: Have you tried some other third-party storage options like Ceph?

[0:11:21] Neependra Khare: Yeah, correct. So I have tried Rook. If you've heard about Rook, it is a container-native storage orchestrator which uses Ceph behind the scenes. What we can do is put together a few nodes, which I think are part of the Kubernetes cluster itself, and create a Rook cluster on them. Once you create a Rook cluster you get a storage class for Rook, and that storage class can then be used in a persistent volume claim, by which we can provision the backend storage on Rook, which uses Ceph at the backend. With that, your applications get storage that is running in the Ceph cluster.
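
A sketch of that flow with the same Python client: the claim simply names the storage class exposed by the Rook cluster. The class name "rook-block" is an assumption for illustration; use whatever class your Rook operator actually creates.

```python
# Consume Rook/Ceph-backed storage by pointing a PersistentVolumeClaim at the
# storage class created for the Rook cluster.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

rook_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="ceph-backed-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        storage_class_name="rook-block",   # assumed name of the Rook storage class
        access_modes=["ReadWriteOnce"],    # Ceph RBD block volumes are single-writer
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=rook_pvc)
```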

[0:12:08] Rahul: Does Rook, or similar storage options, support scaling of the pods or containers which are using the storage? Because I ran into this issue: I was using a persistent volume on AWS, running one container which was using a persistent volume claim, and when I tried to scale my container count from 1 to 2 it just failed, because it was not able to mount the volume. Are these issues addressed by providers like Rook, OpenEBS or Ceph?

[0:12:41] Neependra Khare: I think I tried it with Rook and it seemed to work seamlessly, as far as I can recall.
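
The failure Rahul describes is typically down to the volume's access mode: an EBS-backed persistent volume is ReadWriteOnce, so only one node can mount it, and a second replica scheduled on another node fails to attach it. A shared filesystem exposed as ReadWriteMany (for example CephFS through Rook, GlusterFS, or EFS) avoids that. The sketch below illustrates the pattern; the storage class and names are assumptions.

```python
# Scale a Deployment to two replicas over a shared ReadWriteMany claim.
# With a ReadWriteOnce EBS volume, the second replica would fail to mount.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()
apps_v1 = client.AppsV1Api()

shared_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        storage_class_name="rook-cephfs",   # assumed shared-filesystem class
        access_modes=["ReadWriteMany"],     # needed for replicas spread across nodes
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=shared_pvc)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="scaled-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "scaled-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "scaled-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="app",
                    image="nginx:1.13",
                    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
                )],
                volumes=[client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="shared-data"),
                )],
            ),
        ),
    ),
)
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```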

[0:12:46] Rahul: Okay, that's great, because with these public cloud options we were not able to scale stateful applications, or anything where we used these persistent volume claims. So have you got a chance to try out the latest Kubernetes versions, 1.8 or 1.7.2? And have you tried any container runtime other than Docker, such as rkt?

[0:13:13] Neependra Khare: I have tried rkt once, but not the recent release; I tried it back in my CoreOS days and it seemed to work. Basically, if you use rkt as the container runtime on CoreOS it is supposed to work better. But I think the recent changes around CRI-O and the Container Runtime Interface are going to give a lot of flexibility to users in choosing which container runtime they want to use at the backend, and that's why Docker is also supporting containerd as part of that CRI framework, by which you can use containerd or rkt or whatever runtime you'd like to use.

[0:13:45] Rahul: Yes, exactly. I think this has been added in the latest Kubernetes 1.8 version.

[0:13:51] Neependra Khare: Correct. The feature I'm looking forward to in terms of storage is the autoscaling of PVs. Think about it: let's say you're running your persistent volume at, say, 100GB, it might get filled up, and you want to increase it to 200GB; how will you do that? I think that feature has been added as an alpha feature in the latest 1.8 release, and I think only GlusterFS supports it at this point in time. So you can provision a persistent volume of some size and, if you need more, just scale it to a larger size, and it would work transparently for the users. That's a feature I'm really looking forward to.
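
In practice, once volume expansion is enabled on a cluster (it sits behind a feature gate in the 1.8 timeframe) and the storage class and provisioner support it, growing a claim is just a patch of its requested size. A hedged sketch, with the claim name assumed:

```python
# Ask the provisioner to grow an existing PersistentVolumeClaim to 200Gi.
# Requires volume expansion to be enabled and a provisioner that supports it.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

patch = {"spec": {"resources": {"requests": {"storage": "200Gi"}}}}
core_v1.patch_namespaced_persistent_volume_claim(
    name="app-data", namespace="default", body=patch)
```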

[0:14:34] Rahul: Interesting. In my experience we've used AWS EBS, and when we specify the size of a PV, say 50GB, and the volume runs out of capacity, then we are in trouble, so adding this kind of feature will really help users. With AWS we also had EFS, the Elastic File System; there is a limit there as well, you always have to specify the upper limit and it will scale accordingly. A lot of work is happening around storage, and people are really working towards deploying all kinds of applications on Kubernetes, so that is really interesting. Another thing I'd like to hear from you: do you have any scaling stories on Kubernetes, where you have deployed a high-traffic application? How do you scale up and down, and what scaling techniques have you implemented?

[0:15:36] Neependra Khare: Okay. To be very frank, in whatever work I have done I haven't seen a very high-volume application or high-volume transactions in deployments, so I'm really not sure whether I can comment on that.

[0:15:48] Rahul: Okay, no problem. How’s your experience regarding logging and monitoring of these kinds of containers?

[0:15:56] Neependra Khare: Oh, that's really nice. With containers, of course, containers can come up and go at any point of time, and it is a really big challenge to monitor and trace the different applications in the container world. There are tools, as I'm sure you know, like Sysdig, Datadog, Prometheus... I have used Sysdig and Prometheus for my use cases. Sysdig collects stats by running an agent on the nodes and then sends the data across to its cloud, from which you can get the stats and, overall, a visual display of the clusters; that has been really awesome. But then I used Prometheus, and with Prometheus and Grafana together you can build your own dashboards and present whatever you'd like to have. So after using Prometheus and Grafana I have switched more towards them. Basically, I have been asking customers to use them and using them for my training as well.
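
For a flavour of what working against Prometheus looks like, here is a small sketch of pulling per-pod CPU usage out of its HTTP query API from Python, the way a custom dashboard or report script might. The Prometheus address and the cAdvisor metric and label names are assumptions about a typical cluster setup.

```python
# Query Prometheus for the per-pod CPU usage rate over the last 5 minutes.
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # assumed address

query = ('sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) '
         'by (pod_name)')
resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query})
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    pod = series["metric"].get("pod_name", "<unknown>")
    timestamp, value = series["value"]
    print(f"{pod}: {float(value):.3f} CPU cores")
```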

[0:17:08] Rahul: Yeah, I think Prometheus is the one which is gaining a lot of attention these days, as it is a time-series database and we can build our own custom graphs and fire our own queries.

[0:17:23] Neependra Khare: Right, and I think all these storage providers like ScaleIO, EBS or Rook are providing endpoints for Prometheus to read storage-specific details, so Prometheus can now understand the storage-specific details as well, and I think that really makes things interesting. The other thing I want to touch upon, and maybe you were going to bring it up, but I thought I'd talk about it, is the Container Storage Interface. It is being evolved right now into a specification by which we can develop volume plugins. Think about it: currently, let's say you are a volume vendor or volume provider; you have to write volume plugins for Docker, Kubernetes, Mesos or Cloud Foundry separately, and they all work in different ways. But with the Container Storage Interface the APIs are common, so once you've written a volume plugin against it, it works with Kubernetes, Docker, Mesos or Cloud Foundry as well. Work is going on around the Container Storage Interface, and a couple of vendors are already supporting the CSI interfaces. I think this will be really great for all of us as well as for the storage vendors, because today they have to maintain multiple plugins for different orchestrators. I think that's a good place to watch.

[0:18:56] Rahul: And which technique do you prefer for logging on Kubernetes? We have plenty of options available for logging.

[0:19:04] Neependra Khare: Yes, I have been using the ELK stack, and I haven't really played with other technologies. Mostly the ELK stack, and if you are on GKE then just use the default logging for GKE.
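
As an illustration of the consuming side, here is a sketch of querying container logs once a shipper (for example a fluentd DaemonSet) has pushed them into Elasticsearch. The index pattern and the kubernetes.* metadata fields depend entirely on how your shipper is configured and are assumptions here.

```python
# Fetch the 20 most recent log lines for an assumed pod from an ELK stack.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://elasticsearch.example.internal:9200"])  # assumed address

result = es.search(
    index="logstash-*",
    body={
        "size": 20,
        "sort": [{"@timestamp": {"order": "desc"}}],
        "query": {"match": {"kubernetes.pod_name": "my-app"}},  # assumed metadata field
    },
)

for hit in result["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("@timestamp"), doc.get("log", "").rstrip())
```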

[0:19:19] Rahul: Okay, I think ELK is the most widely used one.

[0:19:23] Neependra Khare: Yeah.

[0:19:24] Rahul: And it comes as an add-on when you provision your Kubernetes cluster, and I think that works just great for cluster as well as application monitoring. The next thing I'd like to understand from you: now that you are training people on containers, DevOps tools and other cloud technologies, what would you suggest for a newbie, or for people who have not yet dived into the container ecosystem? What is a good starting point, and what prerequisites should one have to start learning these cloud-native technologies, as we can call them?

[0:20:11] Neependra Khare: The first thing is that everybody has to learn, because there's no going back now, that's what I see. At the same time, when you start searching for information about containers you get material from the last three years, right? Once you learn something and then look at the latest documentation, it's something else. So I think there is a lot of confusion when you just dive into the container world, because there is so much information available and it varies from source to source. Things are evolving and changing so fast that if you're not keeping up with the pace you might lose track. So the first thing people need to understand is that getting the information from the right source is very important.

Basically, you can start with the documentation, either from Docker or from Kubernetes; I think the documentation is the place where you should start. At the same time, you need to cover your basics: what containers are, what Docker is. People have to build their base on what containers are before Docker, because containers are not something Docker invented; containers have been around for a long time. People need to understand that container technology has been evolving for the last 10-15 years and people have been using it; Docker just made it so easy to consume. So getting the right starting point is very important, and a couple of good places I would suggest: the Docker community on GitHub is really a good place where people can start, and there is documentation for Docker and Kubernetes where you can start. At the same time, both the Docker and Kubernetes communities send out Docker Weekly and KubeWeekly, which are the right resources, so I think people should subscribe to those mailing lists and start learning from them. One more important point: maybe they can join the local meetup groups, like whichever Docker or Kubernetes meetup group is nearest to them. They can join, learn from people in the meetup groups, and ask questions like what the right places to start are; that might get them started. And lastly, since I've been running a training company, you can always come to us and we will give you the right direction.

[0:22:28] Rahul: Yeah, I think this will really help people who want to learn container technologies, or who just want to start learning cloud technologies like Kubernetes and other cloud-native technologies. I'm also interested to know about your "Docker Cookbook" from 2015 and, if I'm not wrong, you also co-authored a course on "Cloud Infrastructure Technologies." What is that course about, and what tools and technologies can one learn from it?

[0:23:04] Neependra Khare: Okay, so the "Cloud Infrastructure Technologies" course was launched in 2016, and basically that course is for somebody who is starting out in the cloud world in general, or a mid-level manager who is transitioning from the older way of doing things to the newer way. Basically, it's for people who are trying to get started in this cloud technology world. The course covers everything from IaaS, PaaS and SaaS to containers, DevOps tools like Chef, Puppet and Ansible, and then logging. It covers the breadth, with some small demos and content about each of them. So if you have no idea what's happening around the cloud world, you can take that course; it's a free course, and you can get a good feel for what the latest technology trends are and what you should learn to upskill yourself.

[0:24:00] Rahul: So maybe the last question: what led you to jump into this full-time training thing rather than working on real-world projects? What was the moment when you thought, now is the time for me to start mentoring people? When did this happen? Because, for a guy who has a strong hold on all the cloud technologies and possesses more than 10 years of experience, what was that moment when you thought, now I'm going to mentor and train people?

[0:24:36] Neependra Khare: So there are a few things. One is that I love teaching; I've been teaching people on and off since my college days, and I taught a full-semester course at Symbiosis college in Pune in 2012. I love teaching, that's one. The second thing is, when I was writing the book and attending and managing the Docker meetup group, I saw there was a skill gap, and what happens is that a person who doesn't have the knowledge comes and starts teaching, and then he teaches random stuff. We see that there are so many places where people can talk about the latest technologies and trends; they can just come, learn a little and teach, but that does not do justice to the technology, to what it is and how people should learn about it. So with that in mind, seeing there's a skill gap, I thought maybe I should fill it. I started with an experiment: I thought I'd just try it out for 6 months, and if it worked out, good. It's been good so far, and it's been two years now; I left my full-time job on the 2nd of December 2015. It's going well and getting stronger slowly.

[0:25:46] Rahul: That's really nice. So what are your final thoughts from all your experience? As you mentioned, you started with bare metal, then virtual machines, and now containers. What do you think lies ahead in the next 1, 2 or 5 years? Right now it's mostly containers and technologies like Kubernetes and Docker. What are your final thoughts about it?

[0:26:13] Neependra Khare: My final thought is that everything in the world ultimately rests on cost. We moved from bare metal to VMs because of cost, and now we are moving from VMs to containers because of cost and overall efficiency, and maybe containers in conjunction with serverless is what I see as the future. When you can have containers supporting your serverless functions, they can do the jobs for you with very minimal overhead, right? Basically, I think the future is going towards using containers and serverless together to deploy and manage our applications.

[0:26:56] Rahul: Yes. I think serverless has already started gaining traction, and frameworks like Kubeless, Fission, or the most widely used, AWS Lambda, are among the most adopted frameworks for serverless functions and services. So I think, yeah, we just have to wait and watch...

[0:27:16] Neependra Khare: Yes, yes.

[0:27:19] Rahul: So it was really nice talking to you, Neependra, on this episode of the podcast. Thanks for sharing your thoughts and thanks for taking out the time to join us!

[0:27:32] Neependra Khare: Thanks, Rahul! And thanks to BigBinary for hosting the podcast!

[0:27:35] Rahul: Thank you! Have a nice day ahead!

[0:27:37] Neependra Khare: Thank you!
