Links Mentioned in This Episode
- BigBinary on Twitter
- Kelsey on Twitter
- Rahul on Twitter
[0:00:00] RAHUL: Hello and welcome to a new episode of the “All Things Devops” podcast. Today we have Kelsey Hightower from Google. I don’t think I need to introduce him. He was a keynote speaker last week at KubeCon, and he keynotes almost everywhere there is a container conference; at any other conference, like GopherCon, you’ll find him too. I welcome him on behalf of BigBinary and the All Things Devops podcast. Hi Kelsey, how are you doing?
[0:00:29] KELSEY: Hey, how you doing?
[0:00:31] RAHUL: How was KubeCon? And, apart from the great debate about how to pronounce kubectl, “kube-C-T-L” or “kube-control”, how was it?
[0:00:43] KELSEY: Oh yeah, I don’t care how people pronounce it. I say “kube-C-T-L”, but I never correct people when they say it differently.
[0:00:50] RAHUL: And how was the crowd, and what was happening around Kubernetes? I saw the tweets and videos; there were a lot of people, from all over the globe. So how was the experience at KubeCon, which happened in Austin last week?
[0:01:08] KELSEY: Yeah, we had 4,200 people, so lots of people coming to talk about not just Kubernetes but also things like networking, SDN, service meshes, security, lots of machine learning on top of Kubernetes. So a lot of people were coming to look at Kubernetes as a platform, and different people need different things from Kubernetes.
[0:01:30] RAHUL: I saw your demo where you were just talking with Google Assistant and it was doing things for you, and it looked really great, so thanks a lot for that awesome demo. I was curious to know if you’re working on something else, or if there is any side project that we might see in the future. Like, if we want to deploy our app, could we just use Google Assistant or something similar to that? Should we expect something like that coming into the mainstream deployment workflow with Kubernetes?
[0:02:04] KELSEY: No, I don’t think there’s a really compelling reason to deploy with Google Assistant. I mean, you can, because the APIs are there, but I think people are better off just checking their code in to a source repository, like GitHub, and then using a CI/CD pipeline to deploy their apps. I think that is a little bit more realistic and a little bit more repeatable, so people should probably focus more on end-to-end pipelines versus just deploying directly to a cluster.
[0:02:37] RAHUL: Yeah, the end-to-end pipeline is the thing. So, one thing we have noticed in the past six months, at a couple of conferences like KubeCon or AOOC and Hadoop, is that a lot of people are looking for Kubernetes as a service platform. We already have Google Container Engine from Google, there’s ECS, and now EKS has been launched. But still, most people are running self-hosted Kubernetes clusters or their own container orchestration platform. So, with most of the cloud providers now offering Kubernetes services, should we assume that, going forward, we’ll mostly have Kubernetes as a service, and we should only be worried about our deployment pipeline and the automation around application deployment, not about cluster provisioning? That includes bare metal, so possibly the cloud providers will come up with bare metal clusters as well.
[0:03:49] KELSEY: Yeah, I think that’s the goal. I mean, if you’re a big company and you’re on-prem and you have to manage bare metal, there will probably be a small collection of people who have to install Kubernetes and make sure that it’s running, but the rest of the organization, ideally, should not have to think about Kubernetes or the cluster, right? They should just think about deploying applications. On the cloud, the cloud provider wants to offer that same experience. Of course, you can go and get a bunch of VMs, install Kubernetes on those VMs and tweak and tune it, but if you have a small team, you’ve got to ask yourself: do you want to manage the infrastructure, or do you want to build applications for your customers? And that’s why I think service providers will always try to provide an option where you can just focus on using the cluster and not managing the cluster.
[0:04:39] RAHUL: I’ve been managing some Kubernetes clusters and I would say it’s really not straightforward for anyone to maintain a Kubernetes cluster. Instead, it makes sense to just focus on the application deployment pipeline. So, I’m a big fan of your tutorial “Kubernetes the Hard Way”; this is how I started learning Kubernetes, and I’ve been using it since then. In the latest update you moved to Kubernetes 1.8.0, and you are using containerd as the container runtime instead of Docker, even though it is in alpha. We know that CRI-O is gaining a lot of attention, and there are a lot of other container runtimes: there’s rkt, there’s containerd. So can we assume that, instead of using Docker, we now have other, better alternatives? Or what was your thinking in choosing containerd for version 1.8 instead of rkt or something else?
[0:05:44] KELSEY: Yeah, I think going forward, most people probably won’t use Docker underneath Kubernetes unless they are using Docker to deploy Kubernetes. I think most people are better off with CRI-O, which implements the Container Runtime Interface; if you’re using Red Hat, they’ll probably ship CRI-O, which is purpose-built just for Kubernetes. Some other people may decide that you should use containerd. But going forward, Kubernetes does most of the heavy lifting; you only need a very little bit of support from the container runtime. This is why you’re starting to see much smaller container runtimes, and people that are currently using Docker may want to transition to containerd as the support starts to mature there. But going forward, I think Docker itself will evolve, or it already has, to become its own platform, not just a container runtime but something much bigger, and in that world, you’re probably better off with just containerd or rkt or CRI-O.
[0:06:50] RAHUL: Ok, got it. And how about the recent announcement from Docker that Docker is also providing a development platform with Kubernetes? Going forward, suppose someone has developed their application with Docker, using Docker’s Kubernetes as a development platform, and the production Kubernetes cluster is running containerd or rkt or CRI-O as the container runtime. I assume that would work straight away, because they’re shipping an application developed with Docker-backed Kubernetes to a cluster whose runtime is CRI-O rather than Docker.
[0:07:30] KELSEY: It should work; if it doesn’t work, we’re going to have problems, right? If you run Kubernetes with Docker and then you run Kubernetes with CRI-O and things don’t work the same, I mean, there’s a chance that a few things might be different, because there are different runtimes underneath, but we hope that doesn’t happen, because that would lead to a bit of fragmentation and would hurt the portability story, right? If you’re running your app on one Kubernetes platform, and it’s not easy to switch to another, that would cause friction, so we need to make sure that doesn’t happen.
[0:08:04] RAHUL: Yeah, I particularly haven’t explored that kind of thing; it just came up while we were discussing. So, apart from containers and CRI-O: I have been coming across people who are trying to move their applications from a traditional deployment, where they are on servers, bare metal or VMs, and trying to containerize them. First they’ll Dockerize the app, then they will move towards Kubernetes, and people are facing problems with some common things. One is stateful applications, and as you know, that has been solved with different plugins and volume techniques, or by using file systems like Ceph or GlusterFS; that’s a solvable problem. Then another major question is whether we should be running database services on Kubernetes clusters. If we are, we’ll need storage techniques and have to decide which persistent storage to use. If we are running self-hosted Kubernetes, is it better to run the database service from the cloud provider: if you are using GCP, use a GCP database service, and if you are on AWS, use RDS? And if we are running Kubernetes as a service from a cloud provider, then how do we adopt these database services, or other services? In general, what is the easiest way to migrate legacy applications to Kubernetes or a containerized platform?
[0:09:36] KELSEY: Yeah, regardless of the technology, most people don’t have experience managing data services, right? It’s not about “Can I do it on Kubernetes, can I do it on Docker?” It’s that most people install the database on one big machine and never touch it. They back it up sometimes, maybe they test the failover, maybe they test the restore. If that’s you, then you don’t really have the operational expertise of managing a database anyway, so the easiest way of managing the database, if you don’t test the failure cases often, is just to put it on a big machine and leave it alone, right? That’s just kind of where you are. If you put it on Kubernetes, life is going to be bad for you. It’s not about Kubernetes not being able to support stateful workloads, it’s just that you don’t know how to manage stateful workloads in a dynamic environment. That’s a skill. Kubernetes won’t give you that skill.
So when it comes to that, some people will say, “Well, I don’t have the operational ability to deal with a database in, let’s say, a containerized environment where there’s a deployment orchestrator; I’m more used to managing a database in a static environment, where things don’t move around.” Then just leave your database where it is until you get comfortable enough that you can manage it in an orchestrator. This is less of a technology situation and more about your skill set. Do you have the ability to do this? If you don’t, then maybe it’s better to use something that’s fully managed, right?
Everyone that uses a database isn’t necessarily a DBA, so if that’s you, then maybe RDS or Google Cloud SQL or a managed database offering is the best idea, because they know how to operate a database where maybe you don’t have the time to. So you really have to understand who the person is. If you know what you’re doing, and you know how to manage the database even when the orchestrator is there, the volumes don’t change, right? You can use local storage with Kubernetes. You can decide to pin a specific container to a machine where the data lives. You can mount in network storage. You can use NFS. Nothing changed between VMs and containers, other than in containers you have to be a little bit more explicit about what you want. This is not a huge technology shift here, right? This is just a different way of interacting with the technology. So I think people need to understand: what are your capabilities? How good are your operations skills around managing data services? If they’re not that great, maybe the last thing you want is to move into an orchestrator until you get comfortable.
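As a concrete illustration of the pinning Kelsey mentions, here is a minimal sketch, in Python, building the manifest as a plain dict, of a pod pinned to a specific node with its data on that node’s local disk. All names, images and paths are hypothetical; in practice you would write this as YAML and apply it with `kubectl apply -f`.

```python
# Sketch: pin a database container to the machine where the data lives,
# using a nodeSelector on the node's hostname label and a hostPath volume.

def pinned_db_pod(node_hostname: str, data_path: str) -> dict:
    """Build a pod manifest pinned to `node_hostname`, mounting `data_path`."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "postgres-0"},
        "spec": {
            # Schedule onto exactly one node: the one holding the data.
            "nodeSelector": {"kubernetes.io/hostname": node_hostname},
            "containers": [{
                "name": "postgres",
                "image": "postgres:10",
                "volumeMounts": [
                    {"name": "data", "mountPath": "/var/lib/postgresql/data"}
                ],
            }],
            # Local disk instead of network storage; NFS or a PersistentVolume
            # would slot in here instead, as Kelsey notes.
            "volumes": [{"name": "data", "hostPath": {"path": data_path}}],
        },
    }

pod = pinned_db_pod("node-1", "/mnt/disks/pg")
```

The trade-off is explicitness: the pod can only ever run on that one node, which is exactly the static behavior a traditional database operator is used to.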
[0:12:19] RAHUL: Ok, yeah. So, apart from all these things like database services, I know that a lot is available for the deployment pipeline. Helm is one of the open-source projects, and people have their own deployment tools as well, some of them open source. But still, once you have Kubernetized or Dockerized your application and you want to automate your deployment workflow, most people will try Jenkins, they’ll try Helm; some people will not like Helm charts, some people will not like YAML manifests. And this is leading somewhere; actually, it is a good thing as well, because people are coming up with new tools and writing their own automation tools just for deployment. So what would you suggest? Which automation tool for deployment, Helm or something else? There are other tools as well, which might be coming from companies or cloud providers. How do we make the call that this is the right tool for our deployment with Kubernetes, whether it is Helm or something else?
[0:13:31] KELSEY: Yeah, the tools don’t matter to me. You’ve got to figure out what you want to do: check in code, build the container, run some tests and deploy it. That’s pretty much what people are doing. You can also get metrics from the thing that’s deployed, and you can use those metrics to roll back if it’s a failed deployment. You could also introduce things like a process where you version the deployment manifests in an automated fashion, kind of like I did in my keynote at KubeCon. But either way, you should design the workflow that you want. Forget the tools for a moment. Think about what the end-to-end experience looks like. You can just do it on a whiteboard.
Once you understand the way that you want to work, maybe your culture needs a manual approval before anything can be deployed; whatever you need, write it all down. Once you’re done, then you can start experimenting with a few tools, right? Grab a repository to check your code into that supports webhooks so you can trigger automated builds. Whatever tool you pick, Jenkins, Spinnaker, Cloud Builder, Wercker, Travis CI, it doesn’t matter. At the end of the day, it’s pretty similar: you run some steps in some order, and if it fails, you try to notify people that the build or the deployment failed. So the tool to me is less interesting than “What do you want to do?” and “What workflow do you want?” Not everybody wants the same workflow, not everyone has the same interface and not everyone has the same exact problems. But at a high level: check in code, build, test, deploy, use data to tell you if everything is working ok, and then rinse and repeat. That’s the pipeline.
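The workflow Kelsey describes (build, test, deploy, then use metrics from the deployed thing to decide whether to roll back, notifying people on failure) can be sketched independently of any particular CI tool. Here is a schematic Python version; the stage callables are placeholders for whatever tool you end up picking, not a real pipeline API.

```python
# Schematic end-to-end pipeline: run stages in order, notify on failure,
# and roll back a deployment whose post-deploy metrics check fails.

def run_pipeline(build, test, deploy, healthy, rollback, notify):
    """Run the pipeline stages; return the final status as a string."""
    for stage in (build, test):
        if not stage():
            notify("stage failed before deploy")
            return "failed"
    deploy()
    # Use data from the running app to judge the rollout, as Kelsey suggests.
    if not healthy():
        rollback()
        notify("deployment unhealthy, rolled back")
        return "rolled-back"
    return "deployed"

# Example: a deploy whose metrics check fails gets rolled back.
events = []
status = run_pipeline(
    build=lambda: True,
    test=lambda: True,
    deploy=lambda: events.append("deploy"),
    healthy=lambda: False,
    rollback=lambda: events.append("rollback"),
    notify=events.append,
)
```

Swapping Jenkins, Spinnaker or Travis CI in just changes who calls each stage; the shape of the loop stays the same.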
[0:15:15] RAHUL: Yeah, makes sense. And this reminds me of your keynote at GopherCon, where you demoed deploying a Go application directly to a Kubernetes cluster with just a single bash command. That was also one way you showcased how we can do it. But yeah, the tool doesn’t matter; after deployment, what we mostly come across is scaling the application. Kubernetes is beautiful here: it gives us the HPA, and they have also added custom metrics auto-scaling, so we can scale on our own metrics, using Prometheus or some other metrics collection, something like cAdvisor.
I think we have the HPA for CPU-based auto-scaling, and custom metrics auto-scaling, if I’m not wrong, is not generally available yet. But sometimes we need to scale our application based on other factors, maybe on queries processed or on a request basis, so what we actually do is write some other technique: get the metrics count and use that, outside the HPA, to scale on top of it. I think this is reliable, but what do you suggest? Is there any easier way to auto-scale based on other metrics, or a pair of metrics, instead of just the HPA, and if so, when do we expect it to come?
[0:16:53] KELSEY: Well, the HPA definitely has some features that will support custom metrics at some point. Today, a lot of people will use external tools, right? Kubernetes has a robust API. There’s a tool called Kapacitor, which comes from the InfluxDB team; Kapacitor knows how to read external metric sources, average them, aggregate them, and then, based on your rules, it can scale deployments and replica sets up and down based on some criteria. So you don’t have to use the HPA for everything, right? There are other tools that will let you accomplish the same thing. So yeah, maybe we won’t have the same tool for every use case, but there are many tools that do auto-scaling where, based on some data, you trigger an event. The high-level goal is: I have a data source, and based on some rules, I want to scale up or scale down, right? That’s a very common pattern that isn’t new just for Kubernetes, so if you look at some of the existing tools that have been doing this, you can make those tools work in Kubernetes as well.
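The rule Kelsey describes (“I have a data source; based on some rules, scale up or scale down”) can be made concrete. The sketch below mirrors the proportional rule the HPA itself uses, `desired = ceil(current * metric / target)` clamped to a min/max, applied here to a hypothetical queue-depth metric; the numbers are illustrative only.

```python
import math

def desired_replicas(current, metric_value, target, min_r=1, max_r=10):
    """Proportional scaling rule: move the per-replica metric toward `target`.

    `metric_value` is the current average metric per replica (e.g. queued
    jobs each replica is seeing); `target` is the desired per-replica value.
    """
    desired = math.ceil(current * metric_value / target)
    # Clamp so a noisy metric can't scale to zero or to infinity.
    return max(min_r, min(max_r, desired))

# 4 replicas each seeing 200 queued jobs against a target of 100 per
# replica: double the replica count.
desired_replicas(4, 200, 100)  # → 8
```

An external controller like Kapacitor effectively evaluates a rule of this shape on a timer and then patches the deployment’s replica count through the Kubernetes API.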
[0:18:00] RAHUL: Ok, yeah, so we can use these too; that gives us some clear use cases, and we can adopt these tools and auto-scale as well. That is nice. So, another thing regarding applications deployed on Kubernetes: most of the time we need to monitor them, and based on monitoring we need to trigger things; sometimes we may have to scale manually, so we’ll just scale up. For monitoring, there are a lot of solutions. There’s Prometheus, the most used one if I’m not wrong, there is Sysdig, and there are others as well. In terms of Kubernetes applications, if it is a self-managed Kubernetes cluster, we have the privilege to install any of the monitoring tools, but when we are running applications on a hosted Kubernetes service, how would we customize our monitoring, either with Sysdig or with Prometheus and Grafana? Or should we just rely on the provider’s monitoring solutions? And if we need more data from the time series, can we configure our own queries and graphs? What would you suggest about monitoring?
[0:19:25] KELSEY: You can do whatever you want, it’s just Kubernetes, right? That’s the whole point of it being Kubernetes: whether it’s managed or you manage it, at the end of the day, it’s just Kubernetes. So the cloud provider may integrate their own metrics, but you don’t have to use them. You can install Prometheus on any cluster and let that Prometheus collect any metrics you want; you can install Grafana on any cluster, whether it’s managed or you manage it. So that’s the nice thing about people offering Kubernetes: you don’t necessarily have to use all the integrations. And the way Kubernetes works, with its API, as long as you have access to the API, which you do regardless of whether someone else is managing the cluster, you can install anything you want and collect metrics from any resources you want. So I don’t think there’s a big barrier there. I mean, you may have to manage Prometheus on your own if you decide to do that, but that was going to be the case anyway.
[0:20:27] RAHUL: I would say this was a tough question, or a tough decision for me at least: which network plugin to go with, whether kubenet, Calico, Flannel or something else. One has to weigh priorities with networking; it’s hard to choose which one to use as the container network plugin for your Kubernetes cluster.
[0:20:57] KELSEY: I mean, here’s the thing: the choice is there because there are going to be lots of people stepping up to provide solutions. There’s a pluggable interface, so of course, most people are going to try to use it. But when you think about a managed offering, we just pick one. For example, when you use VMs on a cloud provider, they also have a networking stack, but you don’t care about it, right? You just get your IP and you use the VM. In a hypervisor world there’s also something like Calico, not necessarily Calico, but there is something that allocates IPs and makes sure you can connect to other things inside of your project. Kubernetes is the same way, so if you look at a managed offering, ideally they are just going to pick the networking stack that works best for that provider, whether it’s Red Hat on-premise or Google in the cloud. Normally you just don’t care what the network is; as long as your pod gets an IP and your load balancer can direct traffic to that IP, everything should be fine. It’s when you self-manage that you look at all these options and say, “Oh, I’m going to go with this one because of X.” So self-managed, you get a lot of options, right? That’s the benefit of open source. You get to make all the choices you want. Which choice is right? I have no idea. You have to just experiment and use one that is right for you. Or you can just look at what the providers do and ask, “Hey, why did they choose what they chose?” Maybe there’s the ability to support it, but you know, there’s no one thing that you always do. If your provider provides networking, just use that. For the most part, you’ll probably be fine.
[0:22:38] RAHUL: Yeah. After that, the most debatable part is authentication: whether to ship your kubeconfig with all your details, and then comes RBAC, using RBAC effectively. We are also seeing other tools or services adding more of an authentication layer: if we look at Rancher, or at the EKS preview, they are going to integrate with IAM and some other services. So I think RBAC works, we just have to deal with some rules, permissions, cluster bindings and all. But I’m just curious: why is authentication for Kubernetes resources and applications still, as you already mentioned, a matter of choice? Maybe I’m wrong, but I still feel it is not that straightforward to set up authentication for your resources on Kubernetes. So apart from RBAC, how is it going to be? Which one is preferable: just choose what the service provider provides, or use RBAC with granular privileges?
[0:23:56] KELSEY: Yeah, RBAC is always tough, right? In every system I’ve ever seen in my life, whether it’s VMware or any tool where there are lots of objects and you want granular permissions, it will be painful, right? Any time you have that many choices to make, it’s going to be complex. So one way to make it easier is to have things like predefined roles, you know, based on certain attributes; of course, you need to understand how the API works in order to do that. Inside of core Kubernetes, the building blocks will be there: RBAC, the API. And then the goal is that tools like Rancher, which you mentioned, can make it easier to manage those rules, right? Having a GUI, maybe some roles, and you can map those roles to workloads; that’s kind of the job of a UX that sits on top. So Google Cloud can do this with IAM, Amazon may do this with their own IAM and their user management interface. But core Kubernetes is more of a platform, so that you can do that above it. That makes sense. We don’t want to provide only one way of doing it in Kubernetes; we just want to provide the APIs, the hooks, everything you need to do RBAC correctly. Then, higher up, let’s say with your provider or Rancher or whatever dashboard you’re using, you can start to think about things like roles, or integrate with LDAP or some other authentication tool that you’re using.
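One way to sketch the “predefined roles” idea Kelsey mentions: map a few coarse role names onto RBAC rule sets, then emit Role manifests in the shape of the `rbac.authorization.k8s.io/v1` API. The role names and the exact rule sets below are hypothetical examples, not Kubernetes defaults; this is the kind of mapping a UX layer like Rancher or a cloud IAM integration maintains for you.

```python
# Hypothetical coarse roles mapped onto granular RBAC rules.
PREDEFINED_RULES = {
    # Read-only access to a couple of common resource types.
    "viewer": [{
        "apiGroups": [""],
        "resources": ["pods", "services"],
        "verbs": ["get", "list", "watch"],
    }],
    # Enough to roll deployments forward, but not to delete them.
    "deployer": [{
        "apiGroups": ["apps"],
        "resources": ["deployments"],
        "verbs": ["get", "list", "create", "update", "patch"],
    }],
}

def role_manifest(role: str, namespace: str) -> dict:
    """Build a namespaced Role manifest for one of the predefined roles."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": role, "namespace": namespace},
        "rules": PREDEFINED_RULES[role],
    }

role = role_manifest("viewer", "staging")
```

A RoleBinding (or an LDAP/IAM group mapping in the layer above) would then attach one of these roles to actual users; core Kubernetes only provides the Role and binding objects themselves.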
[0:25:30] RAHUL: So almost all people are now in the process of adopting Kubernetes or some other container orchestration platform, and as of now, I think containers are ruling the ops world; when most people have to deploy, they first think about using a container platform. But recently there has been a trend: serverless is getting more and more attention, and there are projects going on to run serverless applications on Kubernetes or using containers; if I’m not wrong, there are Kubeless, Fission and other projects as well. So how do you see serverless with Kubernetes going? Actually, put another way, we could say deploying applications with Kubernetes is already almost a kind of serverless, but what would you say about serverless with Kubernetes and other orchestration tools?
[0:26:30] KELSEY: Yeah, when it comes to serverless, I think people are just mixing things. Take, let’s say, Amazon Lambda, for example. When they say serverless, it’s fully managed, right? There’s no cluster; they manage authentication, they manage credentials, they manage access to all the services, logs, everything. Fully managed, and in the case of Lambda it’s, you know, functions as a service, right? You give it a function, and it just does everything else based on that function. And the other mode of serverless at Amazon, which they talked about at re:Invent, was Fargate, right? Give me a container, and then we’ll try to do some of the similar things around load balancing, logs, metrics and so forth. So when you think about serverless from that standpoint: I don’t want to deal with any infrastructure, I just want it to be fully managed, whether I’m writing a function or I want to give you a container, right?
So that idea of serverless is really taking fully managed all the way to the compute. I don’t want to see anything, just run my workload. Now, if you have Kubernetes, you could decide that you want serverless, or you want functions, on your cluster, right? You want your developers to write a function and deploy it on Kubernetes. I mean, you could do that, but it’s just a different workflow than creating the container yourself. In terms of the events, you may or may not have access to the same events that you have on a cloud provider. So I think things like Kubeless and all of those serverless frameworks are more of a “How do I get a function and deploy it into my cluster?”, and they will just wrap it, so it’s almost like a platform as a service. And then the events, where they proxy events through, let’s say, Docker or something, are trying to give you a similar experience. But I think there is a difference between a fully-managed serverless platform and something you install on top of your cluster, right? They both have value; I’m just not sure they’re 100% the same thing from a user experience perspective.
[0:28:45] RAHUL: Yeah, so the point is, AWS Lambda we can truly call a serverless platform, and with things like Kubeless, work is happening to make Kubernetes usable for serverless technologies, as you mentioned, to make it usable as functions as a service. But I haven’t seen any stable uses of that yet, so we’ll just have to wait and watch. So, the last question I would like to ask is about your recent tweet with predictions for 2020. You said monolithic applications will be back in style in 2020, once people realize the drawbacks of distributed monolithic applications. I’d just like to talk more about that. As of now, most people are trying to move away from monolithic to microservices architectures and containerized things, so I wanted to know your thoughts about that.
[0:29:46] KELSEY: I think people underestimate the amount of discipline required for microservices. If you move to microservices, it doesn’t get easier, it gets harder, right? The benefit you get may be that you can work on different components more easily at the same time, because they live in different repositories, but you’re going to have to resolve the conflicts somewhere. You’re going to have mismatched APIs, you’re going to introduce more overhead on the network, where, if you slice things wrong, you are going to be going back and forth. So unless you really do a good job of getting the discipline, really practicing what it means to run a microservice, and actually have real services... What most people are doing is taking a monolith and splitting it into different processes, where everything depends on everything else and writes to the same database. That is not microservices; that’s just a big monolith that you’ve broken into little pieces, and you may even have a worse time: performance may go down, you may have increased errors. So just going to microservices alone doesn’t really solve your problem.
It can solve your problem if used wisely. I think there are a lot of people who are not really thinking this through, not really making the right investments in learning how to deal with microservices, and there’s this belief that if you just go to microservices everything will get easier, or everything will get better, and that isn’t true. You have to actually do things a certain way in order to leverage that particular architectural style. And also, the reason behind the tweet is that monoliths are not necessarily bad, right?
I think most people don’t really consider that you can have multiple monoliths and still not have a microservices platform. You can say, “Hey, I have this one application, it’s for this particular product, and I have another application that is also a monolith that only does batch jobs, right?” So that’s two different monoliths; they’re not microservices just because you have two of them, you just have two monoliths. And it’s ok to have two of what we would consider monoliths, right? A monolith, in my opinion, in this case, would be something that has its own authentication, maybe it even presents the UI, maybe it has some of the business logic all coupled into it, for one purpose. The batch job, on the other hand, may only deal with reading things from a queue and handling any authentication or parsing, or even sending email based on the queue. So people look at that and say, “Well, maybe you should just have a dedicated email service, versus having it in the batch job,” but to me, that’s just a matter of how you take multiple modules: either you can deploy them as one unit and call it a monolith, or you can break them up into many pieces and call it a services architecture. But I’m not so sure that most people really understand the challenges of dealing with microservices, especially if they don’t have a good understanding of how to deal with a monolith.
[0:32:44] RAHUL: Makes sense. We can say people are moving from monolithic application deployment to microservices, and, as you mentioned, just breaking an application into different processes is not a preferred microservices architecture. That really makes sense, and I think, as you mentioned, people should invest in learning and adopting microservices practices more effectively. So, thanks a lot Kelsey, for taking out the time and talking to the All Things Devops podcast. It was really great fun having you. We all enjoy your tutorials and the other work and projects you contribute to the Kubernetes community; they help us to a great extent, and the way you simplify things and make them look so simple really helps us a lot. Thanks for doing such great work for Kubernetes and the community. Thanks for being here.
[0:33:44] KELSEY: Alright thank you.
[0:33:45] RAHUL: Have a nice day.