
Episode 3: Kubernetes and Rancher in Container Eco-system

All Things Devops Podcast

English - November 19, 2017 00:00 - 32 minutes - 44.6 MB - ★★★★★ - 2 ratings


In this episode, Shannon Williams and Rahul Mahale discuss Kubernetes, Rancher, Docker and more.

In this episode, Shannon, co-founder of Rancher Labs, and Rahul discuss Kubernetes, a container orchestration tool, and Rancher, a multi-cluster container deployment tool, in detail.

They discuss the challenges faced in setting up Kubernetes clusters for production and how Rancher 2.0 helps ease Kubernetes deployments.

Further, they discuss challenges like deploying stateful apps on container platforms and how to solve them. They also talk about serverless and how the container ecosystem is an easy platform for developing serverless applications. Shannon thinks that equipping yourself with container tools is the need of the hour, as most enterprise companies have started migrating to container deployments.

Key Points From This Episode

Introduction to Rancher 2.0.
Key features added in Rancher 2.0 for Kubernetes.
Storage and networking with Rancher and Kubernetes.
Running stateful applications on Kubernetes.
Serverless on top of the container ecosystem.

Links Mentioned in This Episode

BigBinary on Twitter
Shannon on Twitter
Rahul on Twitter
Kubernetes
Rancher
Serverless
CRI-O
Rkt
AWS Lambda

Transcript

[0:00:00] RAHUL: Hello and welcome to BigBinary’s podcast, All Things Devops. Today we have Shannon Williams, co-founder and VP of Sales at Rancher Labs, and I would like to welcome you on behalf of BigBinary.

Hi Shannon, thanks for taking time out of your busy schedule; we’d really love to hear from you about Rancher, Kubernetes and the whole ecosystem. So, Shannon, do you mind introducing yourself?
[0:00:30] SHANNON: No, not at all Rahul. Thanks a lot for having me. It’s great to chat, I’m a fan of your podcast. So we started Rancher in 2014, about three years ago now, and at its core Rancher is an open-source software company. We create and develop open-source projects, really, to help people manage and put containers into production. Four of us started the company after working for about six years prior to this on building infrastructure as a service in cloud computing.

We really set out with the goal of answering some questions about how people were going to run applications in a world where infrastructure was plentiful and available but still quite different. Whether you’re using on-premise resources or cloud providers, you have these very plentiful resources, but there wasn’t a lot of standardization. So we were really excited about the potential of Docker, because it opened up the idea that maybe we could get to a point where the design, development and even operations of containers or applications could be standard, even if you’re running on wildly different infrastructure. That’s really driven us for a long time. Our primary project that’s gotten so popular is called Rancher, and Rancher is an open-source container-management platform that makes it incredibly easy to build and deploy container clusters across any cloud, any infrastructure, anywhere.

We’re a relatively mature team, so anyone who’s looking to use Kubernetes or Docker tends to find Rancher to be the layer that ties it all together. You know, Docker has the runtime and the daemon at the base layer, and I think Kubernetes is just the kernel of orchestration, so Rancher’s kind of like an operating system: it puts everything together, it simplifies how you compose the application, manage the storage, deal with the networking, control user access and just everything, to make it work. So that’s a little bit about us and what we do.
[0:02:31] RAHUL: Sounds really interesting. Rancher has built a name and brand in general, and the product is really evolving as one of the best management tools for different orchestrators like Kubernetes, Cattle, Docker Swarm, or Mesos. We have been using Rancher and it has really helped us solve some real challenges.

I got an email a couple of weeks back that you released Rancher 2.0. Even though I haven’t really got a chance to try it out yet, as we are a Kubernetes shop and we try to build everything on Kubernetes, I would like to understand what the defining feature in Rancher 2.0 is because, as the tag-line says, you run Rancher on your Kubernetes. So would you just tell us, in short, about that?

[0:03:20] SHANNON: Yeah, sure. So 2.0 is a big release for us. We released 1.0 towards the beginning of 2016, so about 18 months ago, and as you said, the defining feature of 1.0 was a really powerful, sort of multi-cluster management layer that would, you know, kind of orchestrate your ops on multiple different orchestrators. So we deployed, operated and upgraded Kubernetes, Docker Swarm, or Mesos.

We had our own orchestrator called Cattle that was based on Docker Compose, and so we had these different orchestrators; we could deploy them, we could operate them and run them, and then manage sort of centralised user administration and logging and policy and things like that across them. Over time though, what we found is two orchestrators were dramatically more popular than the rest. Despite supporting Swarm and Mesos, most of our users weren’t using those; they were using Kubernetes and they were using Cattle, which we had developed ourselves. And when we looked at it, we were kind of like ‘Ok, so what’s the reason that these are so much more popular?’ and when I say ‘so much’ I mean, you know, 95, 98% of people were using one of those. What it turned out is that Kubernetes had grown and grown in popularity since we introduced it in 1.0; the scalability, the maturity and the reliability of Kubernetes, really, is what attracted users to it. And we had always been pretty big fans of Kubernetes, so it made a lot of sense; it reflected what we saw as well, that Kubernetes was quite production-grade, production quality.

But the other orchestrator that was really popular was Cattle. And Cattle was interesting. Cattle was our initial orchestrator that we built with Rancher at the very beginning. And we built Cattle, actually, only because we could never really get Swarm to stabilise and be a standard orchestrator for people who liked Docker Compose and liked the Docker framework. We had a lot of trouble making Swarm really work well with network and storage at any kind of scale, so we decided to build out some simple orchestration around Docker Compose, and as we built it, Cattle got better and better and it ended up being, in a lot of ways, pretty similar to Kubernetes. Not as feature-full and not as robust, but from an architecture perspective, actually quite similar. It had concepts like health checks and, you know, daemon sets; it had very similar concepts. And so that actually was even more popular than Kubernetes among Rancher users.

Probably about 75% of all the deployments of Rancher were using our Cattle orchestration, and as we looked at it, what people liked about Cattle was the fact that it used Docker Compose, and the API and CLI were all Docker based, so you could use all your native Docker commands very easily. And they liked the user experience, they liked the, you know, simplicity of being able to really easily understand all the relationships. If you understand Docker, you can understand Cattle really quickly. So as we looked at 2.0, we had to decide what we wanted to do. Do we want to support lots of orchestrators or not? And we decided that from our perspective, Kubernetes had really gotten to the point where both its momentum and its quality and reliability were so high, there was just no good case to use anything else in production. We felt that it was just the best orchestration for production, and what we really wanted to do was use Kubernetes as the standard implementation of orchestration across whatever we did. Everything we were doing on the orchestration side in Cattle was already being done as well in Kubernetes, so we really felt like it was a waste of resources to keep adding the same features or similar features to what was going into Kubernetes. So what we decided to do was standardise on Kubernetes but essentially keep the Cattle user experience. And so, what that meant was, you know, taking everything we knew and everything people liked about Cattle and implementing it around Kubernetes.

And so what Rancher 2.0 is, you know, it’s really three things. The first is something we’ve always done, which is a really reliable implementation of a Kubernetes distribution. You turn on Rancher and it automatically builds and deploys a very good upstream deployment of Kubernetes with good networking and storage drivers ready to go, a full implementation of Kubernetes. At the core of Rancher is that distro component to run Kubernetes really well anywhere: on VMware, on bare metal, on AWS, on any cloud.

But then the other piece is what we consider Kubernetes operations and Kubernetes container operations, which is really geared towards the IT department. And this is where you think of Rancher’s multi-cluster management. That really gets to: ‘Ok, we as an organisation are going to run lots of clusters, we want a reliable way of managing and monitoring those clusters, we want to manage authorisation and RBAC, essentially we want to be able to configure and set security policies and make sure those are complied with. We want to provision infrastructure, you know, we want to manage all the terraform scripts, and we want to be able to look at things like the capacity and utilization of these clusters.’ It’s all of that, that IT side of deploying and running Kubernetes, that we implemented to work on top of our distro. But the other thing we did was make it so you could mount any Kubernetes cluster into Rancher. That was really cool. What we heard from customers and users was that they would be taking Kubernetes from lots of places going forward; it was not just that they were deploying Kubernetes, but that they also had teams who had already deployed Kubernetes. And the bigger thing is, they expected Kubernetes to be a service available from cloud providers. GKE is a good example already, from Google, their container engine, but Azure now offers Kubernetes as a service, IBM offers Kubernetes as a service, and I think there are a lot of rumours that maybe Amazon will be offering Kubernetes as a service. So what we really did was make it possible to deploy Kubernetes anywhere, or manage a Kubernetes that’s deployed anywhere, from an IT perspective.

Then at the top layer, we still see this need for application management and all of the end-user experience. So if I’m given a Kubernetes cluster, I still need to actually deploy on it, and this is really just about exposing the Kubernetes API through a really dynamic user interface, application monitoring, CI/CD, logging and everything else that is developed around that, and these are things we’ve done really well with Rancher for a long time. So we continued to implement all that, but we implement it in a way that is consistent across the Kubernetes clusters, regardless of where they come from. So yeah, it really became a complete stack for deploying and running Kubernetes, or managing any Kubernetes, enforcing and deploying all your policies and then building the app. So it’s a pretty big project. What we released a couple of weeks ago is the tech preview, an alpha release of this. But we’ll be doing a couple of releases over the rest of this year and the beginning of next year before the GA of Rancher 2.0.
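To make that application-management layer concrete: whatever UI, monitoring, or CI/CD sits on top, deploying a workload ultimately comes down to calls against the Kubernetes API. Here is a minimal sketch, assuming the official Kubernetes Python client; the names and image are placeholders for illustration, not Rancher’s own API.

```python
# Hypothetical example: pushing a deployment straight through the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, i.e. whatever kubectl uses

container = client.V1Container(
    name="web",
    image="nginx",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# A UI or CI/CD layer on top of Kubernetes ultimately lands on calls like this one.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```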

[0:10:41] RAHUL: Nice, nice. So the features you have mentioned with Rancher 2.0 would be really great for any Kubernetes cluster provider, or for people who deploy their apps on Kubernetes. We are users of both the Cattle framework and Kubernetes in production. Initially we started off with Rancher and we also got going with Cattle itself because, as you already mentioned, its simplicity and the features it offers, more or less like Kubernetes, really ease the job of a cluster provider or a developer deploying an app with Rancher. You were mentioning different things and challenges like networking and scale, and with 2.0 we will be directly deploying Rancher on Kubernetes. A few of the things we found a couple of months ago were not available with Rancher’s Cattle framework, like isolation of networks and scaling using something like Weave or Flannel. Even though Kubernetes has features like namespaces, here we have stacks and services, which is well and good. REST APIs we have with Kubernetes as well as with Rancher. One thing, when we tried to deploy Kubernetes on Rancher: Rancher supports a specific version of Kubernetes, so if we were running our native k8s stack on v1.6, at that time I think Rancher was still supporting v1.5. With 2.0 I think this should not be a thing to worry about, so I personally like Kubernetes on Rancher. One interesting thing you mentioned is that cloud providers have started offering Kubernetes as a service, the best example being GKE, but why people do or do not choose vendor-specific Kubernetes is another topic of debate. I just wanted to understand from the opposite side: between native Kubernetes and Kubernetes with Rancher, what are the pros and what are the cons? If there are cons, like compliance management or something like that, should we just focus on running Kubernetes natively, or running Kubernetes with Rancher?

[0:11:09] SHANNON: Sure. Yeah, I would say with our 1.6, running Kubernetes with Rancher was really easy; we optimised around simplifying the deployment and the provisioning of Kubernetes. But as part of that, we made it, you know, really well defined, in the sense that we had a very well-defined implementation of Kubernetes that was easy to bring up. You’d tag some nodes that you wanted to be your management layer, you know, your etcd nodes, your Kubernetes managers, you’d set out and deploy kubelets everywhere, and we really kind of got out of the way once you deployed Rancher. But as part of that, we were setting up the Rancher networking, we were implementing a lot of RBAC, things like that, for Rancher. So when we first started doing Kubernetes, a lot of the challenge was just getting it up and running, making it deploy and getting it out there.

With 2.0 we kind of changed that, because I think the downside of simplifying that was it didn’t make it easy for people to change the operation of Kubernetes. We had a default configuration of Kubernetes that we would implement, and you could change it, but it wasn’t all that intuitive how to make those changes. So with 2.0 we wanted to separate out the piece of deploying and configuring Kubernetes from the piece of managing it; in 1.0, all that was just one thing, and so it was all integrated all the way through to the UI.

With 2.0 we really left the idea of a supported Kubernetes distribution as a distinct thing: it’s fully upstream, easy to configure, and anything you could deploy within Kubernetes, you could deploy within that distro. What we added around that was just to, you know, make it as easy as possible to implement and run on infrastructure you provisioned for Rancher’s ops layer, so if you deployed on VMware or bare metal there were good implementations on top of that. We tried to preserve some of the simplicity and ease of deployment, but we left it a lot more open, so you could configure and change and run, you know, whatever CNI or CSI kind of drivers and network configurations. We segmented out how we do RBAC, so that we could pull out Rancher as a proxy on top of Kubernetes and then we’re done; in 1.0 we really simplified that, and now we do a lot of the configuration when we create the clusters and create the deployment. So I think the real insight is that Kubernetes is going to be available anywhere, so let’s not try to be a deployment and management layer only for the Kubernetes we deploy ourselves. Let’s deploy Kubernetes where necessary, when people need Kubernetes clusters, and make sure that’s well deployed and can be configured, but let’s also absorb these Kubernetes clusters as they come in. So it’s a pretty big change, I would say, in terms of how we run Kubernetes.

Rancher today runs, I want to say, about 15,000 active clusters on any given day that people are deploying and running, and about 3 or 4 thousand of those are Kubernetes clusters. So lots of people use Kubernetes on the current Rancher 1.6. For those people, I don’t think it’s going to be a dramatic shift; they’re just going to have a lot more options and they’ll be given a lot more configuration choices for how they run Kubernetes. The real question, for all of the users, is why do they use Rancher? I really think it’s not so much the distribution that we make available, I think that’s pretty consistent with open source. It’s really the ops management layer around it: the multi-cluster management, the federation, the RBAC, management and authorisation, the security and compliance stuff we do to make sure that whatever the global policies are, they’re applied to each deployment of Kubernetes, even if it’s coming as an existing cluster that you mount in. You know, areas like capacity management and infrastructure provisioning and deprovisioning and scaling. Those are the areas people get really excited about, because we do a lot of ops on top of Kubernetes that work on any Kubernetes: that work with GKE, work with Azure’s Kubernetes service, work with IBM Bluemix and with any AWS Kubernetes service.

[0:15:26] RAHUL: Yep, and that is really a nice thing, Rancher supporting Kubernetes alongside other orchestration tools. So, one specific question regarding Rancher, then we can move on to a general discussion about containers. Does Rancher currently support container engines other than Docker? Something like CRI-O, rkt, or containerd?

[0:15:53] SHANNON: Good question. So right now, we support Docker, or the Moby project, as the primary daemon or runtime element. The long-term expectation for us is that containerd will be what we’re using at the lowest level, so that’s what we’re really working around, that expectation. You know, we’ve spent time with rkt over the years, but we never decided that we’re going to do large-scale production support. We’re definitely looking at CRI-O, but at the moment we’re leaning toward containerd; we think that’s a really good standard and it will be something we can standardize on everywhere, so we’re definitely fans of that approach.
[0:16:41] RAHUL: Ok, awesome. And what if one of the Kubernetes clusters is running a non-Docker container engine, and we are putting Rancher on top of that Kubernetes cluster? Does that combination work now?
[0:16:55] SHANNON: You mean like, I have a Kubernetes cluster running rkt or CRI-O or something underneath?
[0:17:00] RAHUL: Yes
[0:17:02] SHANNON: You know, I don’t know if we’ve essentially said we will support that yet. When we get deployed, Rancher is basically on top of the distribution; you’re just running a kubectl command on that cluster and deploying our agent out to it, so as long as our agent can come up and run, we should be able to start doing things. But I know we’re not officially supporting clusters that are running other daemons; we’re definitely focused on Docker and containerd.
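For a concrete picture of what “running a kubectl command on that cluster and deploying our agent” might involve, here is a minimal, hypothetical sketch using the official Kubernetes Python client; the kubeconfig context and agent.yaml are placeholders, not Rancher’s actual import manifest.

```python
# Hypothetical "import an existing cluster" flow: apply an agent manifest to a
# cluster you already control, exactly as a kubectl apply would.
from kubernetes import client, config, utils

# Point at the existing cluster; the context name here is a placeholder.
config.load_kube_config(context="existing-prod-cluster")

k8s_client = client.ApiClient()

# "agent.yaml" stands in for whatever manifest the management server generates;
# once the agent pods come up, the cluster can register itself and be managed.
utils.create_from_yaml(k8s_client, "agent.yaml")
```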
[0:17:32] RAHUL: Ok.
[0:17:33] SHANNON: As far as I know. But that’s not set in stone; it’s just based on demand. So if we saw the demand for others growing, we’d jump on it. We haven’t seen it, other than occasional requests; I would say it’s only come up 10 times in the last 3 years, you know, requests for rkt. I don’t know how much CRI-O is out there; there seems to be Red Hat and those folks behind it, but we haven’t heard much about it. As we hear more and it becomes something to look at, we’ll certainly look at it. We’re not eager to see that layer get de-centralized, de-standardized, because it seems like, under the covers, we’re all doing exactly the same thing, so it’s hard to justify, from our perspective, deploying multiple engines.
[0:18:21] RAHUL: Yes, got it. I just wanted to ask because with Kubernetes 1.8 a lot of people are trying out other container engines. I don’t want to say people are moving away from Docker in production, and with the latest news Docker itself supports development with Kubernetes, so that won’t be an issue, but I just wanted to understand Rancher’s view regarding that. So that’s really interesting, how Rancher is going and all the features and challenges it is solving in the container orchestration world.

Moving on to a more general discussion: as of now we can see 50-60% of people have moved towards a containerized ecosystem. Most of them are adopting Kubernetes, some will go with other orchestrators or container services, something like Nomad from HashiCorp, and some people find it interesting to build and normalize their own containerized services. So that is interesting. The other thing coming up is serverless.

People have started putting in their resources and expertise; some companies, like Bitnami with Kubeless, have built Kubernetes-native serverless frameworks. So going forward, what do you think, how will the ecosystem move? Some people are still running traditional architectures, on bare metal or VMware, and haven’t yet moved to the containerization world, and here comes serverless. How should one bridge that gap in the ecosystem, for enterprise companies that might be eyeing containerization and serverless?
[0:20:13] SHANNON: Yeah, huge question Rahul. You know, from my perspective, I suck at predicting things, so I always try not to, but what we’re definitely seeing is that things like serverless, the rise of different orchestrators around containers, and the rise of orchestration as a service coming from the cloud providers are all kind of interesting and tied together. Clearly microservice architectures are very popular among web services and webscale companies, and many larger, traditional enterprises are looking at these and thinking there’s value in those architectures, but they’re pretty early in the adoption cycle. You know, when we spend time at companies, especially companies that aren’t digital first - if you’re a digital-first company like a publisher or a newspaper company, a broadcaster, an entertainment company, you’re on the bleeding edge of managing systems that are user-facing or webscale, but if you’re coming from more traditional industries, the vast majority of your stuff is not going to be microservices-architected, obviously.

So what we find is there’s a lot of interest in just containers right now, coming from those types of organizations as they look at how to migrate the existing workloads they have on to newer platforms, onto the cloud, improve some of the ops around older things, maybe isolate them a bit more. We see a lot of container projects now that are really geared around, almost you know, like a VMware project of 10 years ago. People talked about containerization projects the same way they used to talk about virtualization projects. Ways to get better density, ways to get better operations management, some portability across clouds.

I think there’s going to be a big, pure containerization trend in enterprises and large organizations. If you’re running hundreds of thousands of servers, you know, the benefit of density that containerization provides is just real. Docker talks a lot about this; DockerCon this year talked a lot about migrating traditional apps, MTA programs. I think that’s a very real thing. When you start talking about serverless, you start talking about microservices. I mean, this is how people use Rancher today. That’s very much where the early adopters have been, and they’re constantly pushing the envelope: how can we make our code more portable, how can we make our code less expensive to operate. If we make our code more reactive to events and triggers, serverless makes a lot of sense.

I mean, Lambda is awesome, and I think when everyone else rolls out their features, companies that use the cloud, that are comfortable running in the cloud, are going to embrace these serverless functions. The thing is, most of this serverless stuff is really dependent on having a really rich set of cloud-centric features, so if you think of Lambda, because you’re on Amazon you can call all sorts of great Amazon features and functions, and if you’re on Google, you’ve got all the great things to call from Google Cloud Functions.

I think serverless in a Kubernetes world, in a container world, is interesting, because it’s pretty easy to deploy a serverless framework on top of Kubernetes. There are probably a half-dozen in the Rancher catalogue already, including Kubeless and a whole bunch of others. The interesting thing is, in the case of something like Kubernetes and containers and Rancher, they’re really about the runtime of applications; they’re not necessarily about the rich set of services that exist around Amazon. Something like Kubernetes or Rancher, they’re really not there; they don’t have consoles and databases and S3 and, you know, Bigtable and all these different types of services that providers have developed.
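As a rough illustration of how small such functions are, here is what a handler for a Kubernetes-hosted framework like Kubeless might look like. This is a hedged sketch assuming Kubeless’s Python (event, context) handler convention; the function name and the deployment command shown in the comment are placeholders and may differ by version.

```python
# handler.py - a minimal function for a Kubernetes-hosted serverless framework.
# Kubeless-style Python handlers receive an event dict and a context object.

def hello(event, context):
    # For HTTP-triggered functions, event["data"] carries the request payload.
    data = event.get("data") or {}
    name = data.get("name", "world") if isinstance(data, dict) else "world"
    return f"Hello, {name}!"

# Deploying it would look roughly like (flags may differ by Kubeless version):
#   kubeless function deploy hello --runtime python3.6 \
#       --from-file handler.py --handler handler.hello
```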

So what’s interesting to me is how the portable, open part of the world that we live in eventually starts to be compelling for serverless. To be compelling for serverless, I think it has to do more than just execute functions on command; I think it also has to have really good answers for where data goes, how you update data, and how you call lots of different pieces of computing function that need to be available as APIs. So a lot of the discussions we have these days at Rancher are about developing something that really is a portable layer on top of cloud providers, that really turns cloud providers into commodity providers.

I think Kubernetes and Docker are sort of step one, building a good way of deploying and running applications; that part is easy, but you know, there’s just a lot more that needs to be done. And there are interesting companies that are trying to tackle some of the problems of storage, to develop the EBS of the cloud; companies like Portworx and CoreOS and others are trying to imagine that. And there are companies taking a really interesting approach to database services within an orchestration layer. So there’s the potential for a portable framework that’s every bit as rich, and potentially even much richer, than what you can do in any given cloud, as it’s powered by all the innovations coming out of the open-source community. But you know, it’s still not there, so I think today, for all the talk, when I talk to people about serverless in a container world, everyone appreciates the potential, but where are the back-end services that are going to empower it in the same way they empowered function-as-a-service on the public cloud?
[0:25:14] RAHUL: Alright, so I would say that cloud native is the thing one should focus on as of now for production, with tools like Kubernetes or Rancher, while serverless is still evolving and we just have to wait and watch on that.

So my last question about Kubernetes and related orchestration: you just mentioned there are people supporting things like database services on Kubernetes, let’s say with persistent volumes or ways of handling persistent data, and we have companies like OpenEBS, Rook, Ceph and others who are providing those services. I haven’t really heard a good production story about how people are running stateful apps on Kubernetes or any containerized platform, and how they scale. So is it something to worry about, or is it also production ready in terms of Kubernetes and Rancher?
[0:27:30] SHANNON: Yeah Rahul, I mean, stateful services and production data running in Kubernetes, I think it’s been solved. I really feel like there are great solutions out there. The simplest is for people who are running Kubernetes on the cloud providers; in the main, what most of them are doing is using the data services from the cloud provider. So if you’re running Kubernetes on Amazon, or you’re running Kubernetes on Google or Azure, you’re leveraging whatever storage services or local disk you want; you can get it right on the VM. On-premise you start dealing with, you know, your own implementation. Of the things that are available on Kubernetes, we find the vast majority are just using some type of NFS as the plugin, so they’re deploying NFS; it’s coming right off a NetApp, right off some existing storage that they’ve built and deployed as network storage from their private cloud or VMware, and we see a lot of deployment on Nutanix, things like that.
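To ground the NFS-plugin point, here is a minimal sketch of requesting an NFS-backed volume through the Kubernetes API, assuming the official Kubernetes Python client and a cluster that already exposes an "nfs" storage class; the claim name, namespace and size are placeholders.

```python
# Hypothetical example: claim a persistent volume from an NFS-backed storage class.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],   # NFS-backed volumes can be mounted by many pods
        storage_class_name="nfs",         # assumed storage class; the name varies per cluster
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

# Pods then reference the claim by name in their volume spec.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```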

So I think the existing storage solutions are being used pretty heavily, just as they were before. Where I think the innovation is still coming, and what we’ll see more of, is cloud-native storage software. You mentioned Ceph; I think there’s still a lot of innovation that goes beyond that. Running Ceph for your containers is not easy; it’s not something most people are doing. I think Portworx is really interesting, and StorageOS is really interesting. I think both of those companies are doing cool stuff to make it easy to deploy persistent storage out there. I think they’re both proprietary, so I still expect the open-source side to develop.

We’ve got a project called Longhorn that is open-source and is focused on what open-source, distributed, microservices-based storage might look like. We’ve built storage out of raw local disk and isolated that. So I think we’re still a little bit away on software-based storage, but it’s an exciting space, and to be honest with you, it’s not slowing down the adoption of persistence. What it’s doing is just pushing people to use their existing storage; you know, apps with persistent data are running on NetApp, and it’s great, NetApp is awesome. They’re running on EMC with REX-Ray, and REX-Ray is a great tool. So there’s a lot of stuff happening today, because people are very pragmatic; you can get enormous benefits even when you’re running on existing infrastructure, running on top of VMware, running on top of Nutanix or EMC and NetApp.
[0:30:08] RAHUL: Yeah, stateful apps is really a solvable problem; in production we run managed database services like RDS on Amazon, and that also solves the issue, and we use Kubernetes and Rancher to deploy our apps. I think that was a really great discussion about Rancher, Kubernetes and the ecosystem. As we are already out of time, I would really like to thank you for taking the time from your schedule, joining the ‘All Things Devops’ podcast and discussing Kubernetes and Rancher. So, your last thought about Kubernetes and Rancher?
[0:30:50] SHANNON: My last thought is really just to say thanks, Rahul, for having me; I really dig the new podcast, so thanks for doing this. And I would just say that the most interesting thing I’m seeing happening right now is that we’ve got a consensus about orchestration, and Kubernetes really seems to be the default standard. I think there’s going to be a big surge of adoption, so I really encourage people to get out there, get trained. You know, every month Rancher runs free Kubernetes training, free training on building CI pipelines and building with Docker and containers.

If you’re not already spending the time to get some training on this stuff, do it. Go get trained, there are tons of free resources out there, get certified, because these are going to be really valuable skills in the next five years. This is like getting AWS certified back in 2008; it paid off for the consultancies and the companies who jumped in early, and this is not slowing down. There’s no ebbing of the tide, so dive in, get some knowledge; training always wins.
[0:31:55] RAHUL: As they say, we should always keep learning and adopting these new skills; as you mentioned, it will at least keep us current with the default standard of containerization, with Kubernetes and Rancher. So I think that’s it for today’s episode, and whenever you release Rancher 2.0 GA, 2.5 or 3.0, we’d really love to have you on the podcast again. Thanks!
[0:32:23] SHANNON: Hey, any time Rahul. Thanks for having me!
[0:32:25] RAHUL: Bye-bye.
