Django Chat


Episode Summary

We discuss how caching dramatically improves website performance, Django’s 4 built-in options, and Redis vs Memcached.

Episode Notes

Episode Transcription

Will Vincent  0:06  

Hello, and welcome to another episode of Django Chat, a weekly podcast on the Django web framework. I'm Will Vincent, joined as always by Carlton Gibson. Hello, Carlton. This week we're going to talk about caching, which is a power tool for all developers, but may not be familiar to folks newer in their careers, and Django has some fantastic built-in support for it. So we're going to get into all things caching. So, Carlton, what is caching? Why is it important?


Carlton Gibson  0:28  

So, caching is good. Say you've got some database query that takes quite a long time, and you're running it all the time. Maybe you want to cache that, and that just means keeping the result hanging around so you don't have to make the query again. You store the result you got from the database in your cache, and ideally it's quicker to get it back from the cache than it is to go to the database. That's the idea.


Will Vincent  0:52  

So performance, right. Yeah, the general idea is that rather than spinning up the physical disk, if you load it into memory, the RAM, it's going to be faster. That's the general idea.


Carlton Gibson  1:02  

You might keep a cache on the disk, though, for instance. Say you're rendering a complicated template: you've gone to the database, you've got some data, you've rendered it into a template, and you've got some HTML out of that, which was computationally expensive. There's no reason why you couldn't cache that on the disk, because then all you've got to do is fetch it from the disk and serve it straight away, rather than do all that heavy computation again. But normally we cache in memory, right?


Will Vincent  1:29  

Yeah. And we'll get into where you can put the cache, but that's the basic idea: you pre-process things so they can be loaded faster. And I think something that's maybe a little confusing is that caching is such a broad idea. I mean, if you index a database, if folks have heard of that, that's basically a cache. But Django also has a built-in caching framework, and there are four options, to give folks a sense of how to think about this. You can do per-site caching, where you just cache everything. So if you have a Django site that's basically a static site, say a blog that's not changing, you can do that with basically one line, we'll link to the docs, and just cache everything per site. The first time a page loads it actually needs to be processed, but after that everything is served from memory. And actually, on that topic, let's talk about refreshing the cache, hot and cold, because I think that's an important thing. There's this idea that the cache has to be warm. So for example, if I have a simple blog site and I add a new blog post, the first person who hits that page is not going to get a cached version, even if I have caching on. So either I just say, well, the first person who comes in is going to take that performance hit, or, what's the term, you can warm the cache in advance.
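
For reference, the "basically one line" per-site cache is two middleware entries plus a few settings; a minimal sketch, with an illustrative timeout:

```python
# settings.py -- per-site caching. UpdateCacheMiddleware must come first in
# the list and FetchFromCacheMiddleware last, so responses are cached on the
# way out and served from the cache on the way in.
MIDDLEWARE = [
    "django.middleware.cache.UpdateCacheMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",
]

CACHE_MIDDLEWARE_ALIAS = "default"   # which CACHES entry to use
CACHE_MIDDLEWARE_SECONDS = 600       # how long to cache each page
CACHE_MIDDLEWARE_KEY_PREFIX = ""     # set if several sites share one cache
```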


Carlton Gibson  2:54  

I never know what the terms for these things are. You can pre-warm the cache, right? So what you can do is, once you've published your blog post, you can go and check that it appeared on the site, and in so doing it will load, and then it will be cached. So normally you cache per page, right? Django has got this nice per-page caching option where each individual page has its own cache entry.


Will Vincent  3:16  

Right, so yeah, the four options: per-site, per-view, template fragment, and then you can get at the low-level cache API. And this is something you want to play with in reality, in production. You can make predictions all day long on local usage, but you really just want to see how your site actually performs. But I would say, and we're just butchering the terminology here, heating up the cache, warming up the cache, I played around with this a ton, and on a big site it's worth it, maybe. But it's also fine to just say, you know, if I have thousands of visitors, the first one on this page after I make a change, it can be a little slower for them, and they'll live.
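
Per-view caching in Django is the `cache_page` decorator; under the hood it's essentially memoizing the response per URL for a fixed number of seconds. A toy sketch of that mechanic in plain Python (the names here are ours, not Django's internals):

```python
import time

def cache_page(timeout):
    """Cache a view's response per URL for `timeout` seconds."""
    def decorator(view):
        store = {}  # url -> (expiry_timestamp, response)

        def wrapper(url):
            now = time.monotonic()
            hit = store.get(url)
            if hit and hit[0] > now:
                return hit[1]            # cache hit: skip the expensive render
            response = view(url)         # cache miss: do the real work
            store[url] = (now + timeout, response)
            return response
        return wrapper
    return decorator

@cache_page(60 * 15)                     # same shape as Django's decorator
def blog_post(url):
    return f"<html>rendered {url}</html>"
```

In Django itself you'd write `@cache_page(60 * 15)` above a view function, or wrap a view in your URLconf with `cache_page(60 * 15)(view)`.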


Carlton Gibson  3:52  

Yeah, they'll live. But I mean, what's good about the caching, right? However long it takes to go and get your blog post out of your database and render it into the template, well, the first time, that's a bit slow. And then the index page needs to change as well, because, you know, the one that lists the five most recent blog posts, you need to update that. So there are questions about invalidation that we can come back to in a minute. But that first load is a bit slow, and if that first person is you, brilliant.


Will Vincent  4:19  

Right. Well, I did that manually. Again, this was early days at a startup: I would manually go through and re-warm, God, these pages. But you know, in practice, when you're dealing with hundreds or thousands of users, it all is a...


Carlton Gibson  4:36  

tidal wave. Would you set this to cache forever?


Will Vincent  4:39  

No, I don't think you want to do that. I mean, you could, if you're not updating all the time. I recall setting it for a very long period of time, though off the top of my head I can't remember what that was. I think maybe it was a month.


Carlton Gibson  4:50  

Right. Well, that is quite a long time. I mean, usually it's more like a week, or a day, or an hour, right? Because let's say you've got one person coming and hitting your Django application once an hour; that's really not going to kill your Django application. But if you've got 20,000 people all at once, that will kill it. So if you can cache that blog post, even for an hour, it means the Django app is only really doing the hard work once every hour, or once every day, or once every week.


Will Vincent  5:19  

Yeah. And again, I'm thinking of this as it was for me, very early stage with a startup. So there's a timeout: there are arguments you can pass to the caching framework built into Django, and the docs give an example of 300 seconds, so five minutes, as a substantial period of time. I think what I set was way too long, but whatever. Yes, play around with it. This is why you want logging and other information on your site, so you can see how fast pages are actually loading. It's a balancing act, but in general, cache everything.
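
The same timeout argument shows up in Django's low-level cache API, `cache.set(key, value, timeout)`, whose default timeout is 300 seconds. The semantics look roughly like this toy in-process stand-in (this is an illustration of the behavior, not Django's implementation):

```python
import time

class SimpleCache:
    """Toy stand-in for Django's low-level cache API: get/set with a timeout."""
    DEFAULT_TIMEOUT = 300  # Django's default, i.e. five minutes

    def __init__(self):
        self._store = {}  # key -> (expiry, value)

    def set(self, key, value, timeout=DEFAULT_TIMEOUT):
        self._store[key] = (time.monotonic() + timeout, value)

    def get(self, key, default=None):
        hit = self._store.get(key)
        if hit is None or hit[0] <= time.monotonic():
            return default  # miss, or the entry has expired
        return hit[1]

cache = SimpleCache()
cache.set("latest-posts", ["post-1", "post-2"])   # default 300s timeout
cache.set("flash-banner", "sale!", timeout=0.01)  # expires almost immediately
```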


Carlton Gibson  5:55  

Yeah. I mean, before we go on and talk about the details of Django caching, there's another layer to think about, which is: could you get nginx, or whatever front-end proxy you've got, to do it instead? Because nginx serves files off the file system, you know, far more efficiently than any application you could ever write. And so if you've got a blog post, which perhaps you update, I don't know, once a week,


Carlton Gibson  6:18  

why not tell nginx to cache it on the file system? Then from nginx's perspective it's just like serving a static site; it's not even talking to your back end. And that's really quite easy to configure: you give it a path, you say cache that, you give it the amount of time you want to cache it for, and it will just do it.
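
For listeners who want to see it, the setup Carlton describes is nginx's proxy cache; roughly something like this (paths, the zone name, and timings here are illustrative):

```nginx
# Cache proxied Django responses on the file system.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=django_cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_pass http://127.0.0.1:8000;    # the Django app (gunicorn, etc.)
        proxy_cache django_cache;
        proxy_cache_valid 200 10m;           # cache successful responses 10 min
        proxy_cache_use_stale error timeout; # serve a stale copy if Django is down
    }
}
```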


Will Vincent  6:37  

So what do you make of, I remember using Varnish, which is a proxy cache layer, when I was doing this all on DigitalOcean. What's your take on nginx versus Varnish? I mean, you could use both, right? They do different things, and some people do. So for me...


Carlton Gibson  6:53  

For me, Varnish was like a huge speedup, yeah, maybe the biggest of all the things. But I'd say Varnish is a dedicated extra layer that you can use, and it's super powerful, but I always say don't go to these things until you need them. Right? So what's your base?


Will Vincent  7:09  

I did not do that. But what's your base setup?


Carlton Gibson  7:11  

The base setup is, you know, just for example, I mean, you might be using Apache or whatever, but let's just go with one example: nginx in front of Django. Okay, you've already got nginx in play, and it's got a first-rate caching module that's really easy to configure. You can use that, and that really will cope with, you know, probably 90% of sites out there. That's perfectly good enough. And then if you are really pushing it to the limit, then you investigate whether or not you need another dedicated caching layer on top.


Will Vincent  7:44  

And I would say this is the type of stuff that is really fun to do, because at the end of the day you can say, oh, I decreased my load time by X amount. It scratches that developer itch. But I'm certainly guilty of spending way too long getting that last 5 to 10% when it was totally unwarranted. So I would say, be aware that this is fun and feels black and white, and so a lot of times you'll neglect things that are a little more gray, like talking to users, you know, marketing,


Carlton Gibson  8:14  

any of that stuff.


Will Vincent  8:15  

Marketing, yeah, all these things, design. Okay, so where do you put the cache? So let's talk about that. Historically, Memcached was the first big popular caching layer, though these days I think almost everyone would say Redis. If you're starting from scratch, you would use...


Carlton Gibson  8:32  

People like Redis; it's got some fancy features.


Will Vincent  8:35  

And Memcached is a little bit simpler. Or no, it's not that it's simpler; is Redis a little bit faster? I believe so. We'll link to a detailed analysis; I believe in most cases Redis is actually faster.


Carlton Gibson  8:45  

Okay. I mean, back in the day, so we're talking, you know, early 2000s, Memcached was the option. You'd run Memcached, even into...


Will Vincent  8:54  

It was an amazing idea, right? Sorry to interrupt, but I remember it came out of LiveJournal around 2003. I don't think it was the first major caching layer of that type, but it just did the job.


Carlton Gibson  9:10  

And it did it very well, and got massive adoption because of that. And it's still brilliant, right? It still works, and there's no reason not to use Memcached, unless you're already thinking about using Redis. And again, it's like, how many components do you want in your stack? So if you've got Redis running, why not use Redis as a cache back


Will Vincent  9:29  

end, right. So I guess the other general thing is, Memcached is a little bit simpler, but if you're going to need Redis for other things anyway, you might as well just use Redis for all of it. Well, so let's talk about those things. Why would you use Redis? So I mean, basically, for any queue-like task, so emails are one example. What are some other examples that come to mind? So, I guess I'm conflating two things here: there's caching, and then you'd also use something like Redis as a queue back end. So why would you...


Carlton Gibson  9:59  

have Redis? Yeah. You want to use a queue. So let's take a good queue package. You know, everyone always talks about Celery, but Celery is overkill for the majority of use cases. So what's a good package? Well, there's one called Django Q, which I love and have fun with. It's nice and simple, and it's got a Redis back end. So you apt install Redis, then you pip install Django Q into your project, a little bit of settings magic, and you're up and running. What do you put in there? Anything that you want to run out of band. So, you know, you're rendering a PDF, you're...
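
That "little bit of settings magic" for Django Q with a Redis back end is roughly a single dict in settings.py; a sketch with illustrative values:

```python
# settings.py -- Django Q pointed at a local Redis instance.
Q_CLUSTER = {
    "name": "myproject",
    "workers": 2,        # background worker processes
    "timeout": 60,       # kill a task that runs longer than 60s
    "retry": 120,        # re-queue a task not acknowledged within 120s
    "redis": {
        "host": "127.0.0.1",
        "port": 6379,
        "db": 0,
    },
}
```

Then you queue work with `async_task("myapp.tasks.render_pdf", post_id)` and run the workers with `./manage.py qcluster` (the task path here is hypothetical).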


Will Vincent  10:30  

anything that's going to be process-intensive. That, or sending an email.


Carlton Gibson  10:33  

You do any of these tasks that we talk about all the time. And at that point you've already got Redis in play, so you might as well use it as your Django cache back end. For which you'll need a package; there are a couple of options, right? There's django-redis and django-redis-cache, and I can never remember the differences between these two. Every single time I start a new project, I have to go and search: what did I use last time, and is it still as good?
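
To make that concrete, here is roughly what the settings look like with the django-redis package (django-redis-cache, the other package mentioned, uses a different backend path); the location and database number are illustrative:

```python
# settings.py -- Redis as the Django cache back end, via the django-redis package.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # db 1, leaving db 0 to the queue
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
```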


Will Vincent  10:59  

Yeah. Well, I was just updating my Awesome Django repo, which has a bunch of curated third-party apps, and I was going through the exact same thing, because, you know, I read a section and I was like, what is the difference? There is a difference, but I can't remember it either.


Carlton Gibson  11:14  

So I was looking this up before we started the talk. Last time, I used django-redis-cache, and I've been very happy with that. It turns out I've used that loads of times in the past, but I've also used django-redis in the past, and I have no idea why. I don't know which one's better. Just don't peek under the hood yet. There was some talk about bringing a Redis cache back end into core. I think the general state of play on that is: yeah, we're keen on that, but it needs a Django Enhancement Proposal; it needs someone to step up and write the thing. But in principle, in, you know, two, three, four versions' time, once someone's actually sat down and written it, there might be a Redis cache back end in Django itself.


Will Vincent  11:56  

Yeah, because on a decent-sized site it's pretty much a guarantee you're going to have Redis or Memcached, but probably Redis these days.


Carlton Gibson  12:05  

Yeah. And you do want to cache. I mean, the one thing we haven't talked about: it's not just pages, but template fragments. Sometimes templates are computationally expensive to render. If you've got, I don't know, let's say you're converting user-submitted Markdown to HTML. Okay, first of all, you've got to render that as HTML using a Markdown library, and then you've got to run it through a sanitizer like Bleach, which uses html5lib, which is not necessarily the fastest library in the whole world. If you can cache the output of that rendering, then the next time you don't have to do it. I mean, you could cache it in the database: you've got that Markdown stored in a model field, so you could have an extra model field for the rendered HTML and populate it at save time. But equally, you might do it by caching the template fragment.
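
Template fragment caching is the `{% cache %}` tag; sketching Carlton's Markdown example (the `markdown` and `sanitize` filters here are hypothetical custom template filters standing in for whatever rendering pipeline you use):

```django
{% load cache %}
{% cache 3600 post_body post.pk %}
  {# The expensive part: Markdown -> HTML -> sanitize, cached for an hour per post #}
  {{ post.body|markdown|sanitize }}
{% endcache %}
```

The extra arguments after the timeout and fragment name (`post.pk` here) become part of the cache key, so each post gets its own cached copy.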


Will Vincent  12:49  

Yeah, and I'm thinking this would be a great tutorial to do. For local development, just so folks can see that this actually works: if you have Django Debug Toolbar, which in addition to showing queries will show local page load time, which isn't a proxy for production but gives you some sense, you can just flip the switches for per-site and per-view caching and see how much faster it is. I mean, it is orders of magnitude faster, obviously, to serve from a cache. So I would say the way to play around with it is simply Django Debug Toolbar, and then you can use more complex tools to see how fast your pages are in production, in reality.


Carlton Gibson  13:25  

Realistically speaking, if you've used one of these APM tools, these live production profilers that monitor your execution time, you will see that the number one place you're losing time is trips to the database, and number two is rendering templates.


Will Vincent  13:42  

So yeah, those are where you can win, well, after, you know, fixing anything silly with front-end assets, like huge images or something.


Carlton Gibson  13:52  

Okay, but here's the interesting thing with caching, right: the actual time your Django application took to serve the response, versus the perceived time for the user on the other end. So, you know, let's say your Django application took 300 milliseconds to go to the database, render the template, and serve the response. Is that fast? Is that slow? Who knows. But let's say you're loading two megabytes of JavaScript, which took two and a half seconds to load and become responsive on the client. The client isn't going to notice if you halve the response time from your Django application; they're just not going to notice, because it pales into insignificance. Right, so quite often you'll see the front-end people talk about this a lot: the dominant factor in perceived responsiveness is how fast your page loads for the user. How much JavaScript are you loading? How long does that take to pull down? The images aren't even the thing; it's the JavaScript. Especially if you're doing one of these single-page applications, these client-side rendered things, where it's going to load all the JavaScript, then it's going to pull the data from the API, and that's the bit where your Django app does its thing, and then it's got to render the landing page before the user says...


Will Vincent  15:05  

Oh yeah, the page loaded, right. And that whole perceived-time thing, I mean, it reminds me of Instagram back when it came out, because I was actually working at Quizlet, just next door to them. One of the things, besides filters, that I remember being a wow moment: this was still when cell reception in San Francisco was terrible in a lot of places. What they did is, as soon as you picked an image, while you typed in all the information, they started uploading it in the background. So it felt really fast; you didn't press the button and then wait five or ten seconds, it felt instantaneous. And I'm sure others had done it, but that was one of the first apps I saw that basically said, we're going to use up your bandwidth, or your cell phone plan, in the background. And now that's a standard practice. I know Instagram still does it: when you're posting an image, the first thing you do is pick the image, then you type in a whole bunch of stuff, and in the background it's already uploading, so you can just click the button and go. Yeah.


Carlton Gibson  16:04  

And I think the reality, of course, is that Django gives you these caching tools, and you can use them to speed up your Django response. But in a lot of cases, the real work is on the front end. You know, do the basics in Django, get it performant, use that nginx caching layer we talked about, but don't sit there worrying about micro-optimizations when you've got a front-end app that's likely to offer a better return on investment for that optimization time.


Will Vincent  16:32  

Right. And I guess the last major point I would make is: it really depends. It depends on the type of app that you have. How often is the data updated? Is it personalized for every user? So if you think of Facebook, you know, not that I have Facebook, but if I did, when you and I log in, there's different content being loaded for each of us. I'm sure that in the background, Facebook is periodically loading those things into a cache, so when you log in, it's there. But how often does that change? If you have a timeline feed like Twitter, that's updating quite a lot, and that would be more challenging than a blog or something that doesn't update as much, where you can be a lot more aggressive with the time limits.


Carlton Gibson  17:14  

Yeah, exactly. It's like, how aggressively can you cache it? For static blog posts, very. And do you have a mechanism for invalidating it? Right, so let's say you've cached a blog post in your Redis, whatever, using the Django back end. Are you able to identify it by key, such that when you update it, in your save handler, wherever you put that save handler, you can say: oh, and invalidate the cache? So there are two problems in computer science, right: naming things, cache invalidation, and off-by-one errors.
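
The key-based invalidation Carlton describes boils down to caching each post under a key derived from its primary key, and deleting that key in the save handler. A plain-Python sketch of the pattern (in Django you'd make the delete call against `django.core.cache.cache` from `Model.save()` or a `post_save` signal):

```python
cache = {}  # stand-in for your Redis-backed Django cache

def render_post(post_id, body):
    key = f"post:{post_id}"            # one cache key per object
    if key not in cache:
        cache[key] = f"<article>{body}</article>"  # the "expensive" render
    return cache[key]

def save_post(post_id, new_body):
    # The save handler's job: write the new data, then invalidate the key,
    # so the next read re-renders instead of serving the stale copy.
    cache.pop(f"post:{post_id}", None)
    return render_post(post_id, new_body)
```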


Will Vincent  17:47  

Yeah. Well, and I guess the last point I would make is: the cache is not an infinite supply. It's not the database. So often you find yourself saying, oh, just cache everything all the time, but cache memory is really more expensive than database space.


Carlton Gibson  18:02  

Yes, but this is where file system caching comes back into its own, right? So everyone's like, right, let's serve straight from RAM. Well, RAM can get expensive, but file system space can be cheap. And, you know, there's this idea about the different latencies of different things: L1 cache, blah blah blah, all the way down to reading something off the disk, or getting something over the network. It's like, how far up that scale can you move your relevant data? It's just a question of thinking about your requirements and your performance targets and all the rest. It's like algorithm design all over again.


Will Vincent  18:39  

Yeah. And again, for an engineering mind it's sort of fun, because it feels black and white and you can see your progress. The last point I would mention: there's a book that's a couple of years old at this point, but still very relevant, called High Performance Django, by the folks at Lincoln Loop, that talks about caching, and about a lot of these performance topics, because this all comes back to performance. So that's definitely worth a look; we'll put the link in the show notes. So yeah, I was just going to say, I remember when I was learning back in the day, there were some really good...


Carlton Gibson  19:11  

books on this sort of stuff, you know, from O'Reilly and all the rest of them. I don't know what the latest scaling books are, you know, the latest high-performance web application books, web scalability books.


Will Vincent  19:27  

Well, if I may, I'll end with a rant. So I was updating my Awesome Django repo, where I have a book section, and there are still almost no up-to-date books, up to date being, you know, actually written in the last couple of years, on Django, because it changes all the time. So it's not that the advice, especially around this stuff, is wrong, but I would love to know about more up-to-date things. I mean, as far as I know, one classic Django book that's been around just released an update, but I still think I'm almost the only one with 2.2 versions of my books. So yeah, if you find those, we'll put them in the show notes. In a way it's nice that this is the stuff that doesn't change as much. I mean, that's the challenge and opportunity for me as a content creator: I have to update things all the time, which can become tiring, but also makes me do it better. But it means I sort of look longingly at topics like algorithms that are more...


Carlton Gibson  20:24  

But I would argue, I would argue that the principles of web application scaling haven't really altered in the 15 years that I've been looking at it. That's true.


Unknown Speaker  20:38  

Maybe it has by now, and, you know,


Carlton Gibson  20:41  

maybe version numbers have changed, but not the actual way you go about it.


Will Vincent  20:46  

The point is, though, since you already know how to do it quite well, the 5 or 10 percent that's changed doesn't throw you off. That's the difficulty for newcomers. For example, people ask me all the time, what's the difference in the book between 2.0 and 2.2? It's about 10 to 15 percent different content. And if you already know Django, it won't throw you off. But if you don't know Django, which is why you bought the book, it will definitely throw you.


Carlton Gibson  21:08  

Those differences are fatal: you know, it says 2.2 and I've got 1.8, what's going on here? Yeah, I remember being in that exact position; I'd bang my head against the wall for ages. Right, that's a cheerful note to finish on.


Will Vincent  21:24  

Yes. All right. So, caching: it's important. Hopefully this episode helped you all out. We are, as ever, at djangochat.com. We are @chatdjango on Twitter. The episodes are also on YouTube, audio only, if you prefer that; I keep putting them up there, and there are some subscribers. So if you prefer YouTube, go check it out. We'll see you all next time. Buh-bye.