Django Chat

AI in the Real World - Marlene Mhangami & Tim Allen

Episode Summary

Marlene and Tim both gave talks on AI at the recent DjangoCon US conference, but with very different angles. In this episode, we discuss the real-world strengths and weaknesses of AI, how it is impacting developers' daily workflows, and also examples of AI failures. Marlene is a Senior Developer Advocate at Microsoft and Tim is the Principal Engineer at Wharton Research Data Services.

Episode Notes

Sponsor

This episode was brought to you by HackSoft, your development partner beyond code. From custom software development to consulting, team augmentation, or opening an office in Bulgaria, they’re ready to take your Django project to the next level!

Episode Transcription

Carlton Gibson (00:03.245)
Hi, welcome to another episode of Django Chat, a videocast on the Django web framework. I'm Carlton Gibson, joined as ever by Will Vincent. Hello, Will.

William Vincent (00:11.382)
Hey Carlton.

Carlton Gibson (00:13.313)
And today we've got with us Marlene and Tim, who are coming on to discuss, I don't know, AI, our new overlords, these kinds of things.

William Vincent (00:22.605)
Well, let's give them an introduction, Carlton. Can you give them a quick intro?

Carlton Gibson (00:25.613)
Okay, so, well, Marlene, you're a Senior Developer Advocate at Microsoft, is that right?

Marlene (00:30.477)
Yes.

Yeah, I'm happy to introduce myself. Hi everyone. I don't know if that would be helpful. But yeah, my name is Marlene. I'm a Senior Developer Advocate, as Carlton said, and I currently work at Microsoft. I focus on Python and AI. So I'm on the Python on Azure team. So I'm doing a lot with AI right now. I think most of the Python code I'm writing, including the Django stuff, is AI-related. So yeah.

William Vincent (00:33.078)
Directed.

Marlene (01:02.574)
I'm also very involved in the Python community. I was on the board, the PSF board, for a couple of years. I was one of the co-founders of PyCon Africa. So I love that. I love the Python community. Love Django as well. So yeah.

Carlton Gibson (01:19.381)
And Tim, you're at Wharton, right? Wharton Business School.

Tim Allen (01:22.855)
Yeah, so I'm the Principal Engineer of the Wharton Research Data Services platform. In my actual job we actually work with Harvard, Stanford, and MIT faculty more than Wharton faculty. The platform we run stores several petabytes of finance data, and business schools across the world come to our platform to do academic research on it.

Tim Allen (01:45.319)
So it's a pretty cool job. I mean, I'm lucky that I get to build cool things and work on solving interesting problems. And I still get to write a fair amount of code too, which is pretty nice.

Carlton Gibson (01:55.949)
Well, that's the magic trick, right? I mean, how do you manage to stay involved?

Tim Allen (02:00.869)
Well, we've managed to actually develop sort of a path for our technical folks that doesn't involve becoming a mediocre manager. That's one thing I've never fully understood about the tech industry: so often the career path for your top engineering talent, which has been the rarest, hardest-to-find talent over the past couple of decades, has been, you want to progress in your career? Become a mediocre manager and do what you really don't want to do. It never made much sense.

Marlene (02:10.316)
Yeah.

Carlton Gibson (02:27.297)
and that you've had no training for, no experience of, you know, off you go.

Marlene (02:28.268)
Yeah.

Tim Allen (02:30.085)
Yeah, so I'm glad to see that.

William Vincent (02:30.956)
Well, it's the Peter principle, right? You rise to your level of incompetence and there you stay. That's how I've heard it phrased.

Tim Allen (02:36.933)
Yes.

Marlene (02:39.534)
Thanks.

William Vincent (02:41.611)
Well, I do want to mention, this came about in part because you both gave excellent talks at DjangoCon US, which, Carlton, sorry you weren't at. I think this started at DjangoCon US, yeah, sure. And then it continued online on LinkedIn, and finally we were like, you should just come on the show, because this is about continuing the hallway track and making it accessible for everyone. But broadly speaking, there was discussion about AI, and I know, Carlton, we're not making this an AI podcast, but both of you had

Carlton Gibson (02:49.901)
Yeah, yeah, yeah.

Marlene (02:50.516)
very nice.

William Vincent (03:08.531)
slightly different takes on AI. Marlene, you gave a keynote. Tim, you gave a talk that touched upon it. I guess I'll just start with a question, which is in your day-to-day jobs, how are each of you using AI? Not the marketing type, but actually day-to-day, either one of you.

Marlene (03:22.018)
Yeah.

Marlene (03:26.858)
Well, I'm happy to go first. Yeah, like you all mentioned, I gave a talk at DjangoCon about using AI. A keynote, yeah. And I was a little nervous because, you know, it's AI, not everyone loves it. But definitely, I would say in my own work, I tend to use AI quite a lot. Right now, I would say I'm using AI in a few

William Vincent (03:35.731)
A keynote, to be fair. Yep.

Marlene (03:56.258)
different ways. I think with developer advocacy, I'm doing a lot of writing, for example. And so typically, if there's a topic that I'm supposed to write about, I'll have AI go and get all of the documents, or I'll paste in some documents and then have it summarize them for me. And then I'll use that as a starting point for a draft blog post or something like that.

Another thing is that I'm writing a lot of code. So right now, for example, I'm maintaining the Azure integrations in LangChain and I'm doing a bunch of Python code for that. I don't yet feel comfortable having AI write a full PR for me to merge into those libraries. But what I do have AI do is, for example, run linting for me. So

I don't know why, whenever I do linting, there's always something that tends to be missed. So I have something where it just automatically runs the linting for me, which I think is quite nice. And sometimes, if I'm having trouble with a bug, I also have it figure out where in the code the bug is, and then I use that as a starting point to debug.

Yeah, those are the primary ways I would say I'm using it. I'm using VS Code and Copilot, by the way. That's a plug for Microsoft. But yeah, that's what I'm doing.

William Vincent (05:23.018)
That's fair.

William Vincent (05:27.944)
And Tim, what about you?

Tim Allen (05:29.593)
I too am using VS Code with Copilot pretty much every day. Like I said, I'm really lucky I get to work on solving some really interesting problems. So I still write a fair amount of code, but I've found, as I become a more and more senior engineer... I'm the first principal engineer in the history of the university, so this sort of new career path was sort of developed with me in mind, which was a really big honor. But I find the actual time I'm pressing buttons, writing code,

Marlene (05:32.7)
they're cool.

Tim Allen (05:58.128)
is getting to be less and less. And these days I also find that my best PRs often delete more code than they create. And I like to say, you know, while the LLM is very useful for the 10 to 15% of the time I'm actually pushing buttons in the IDE, the most value my employer actually gets from me isn't Monday through Friday, nine to five. It's when I'm doing two things: one, sleeping, and two, showering.

Because that's where I come up with the solutions to the really hard problems I work on. You know, sometimes I've been staring at the screen for three days trying to solve a problem, and I wake up in the morning and it's just there. Or I get out of the shower and the solution to that bug is just there. Another part of my job that I really enjoy in my day-to-day activities

Marlene (06:29.538)
Ugh.

Marlene (06:33.848)
Yeah.

Tim Allen (06:49.753)
is working on projects with junior engineers, and sometimes an LLM can be involved in moving that along. So, you know, when you're pairing up with somebody, one person isn't sitting there for 15 minutes while the other person is looking through the documentation, trying to find that thing you know is there somewhere but can't quite find, because every documentation setup is slightly different. I mean, that's another thing. Teaching is something that I find is still always fulfilling, that I never get bored of. So,

you know, having an LLM integrated in that does make it a bit more efficient, but I don't think it's any kind of magic wand for my day-to-day activities. And yeah, that's sort of a summary of how it's, you know, it's another tool in my toolbox.

William Vincent (07:35.529)
Carlton, are you willing to say publicly? I mean, I know you play with everything.

Carlton Gibson (07:37.473)
What you want?

You know, I give all these things a go. My usage is very much what I call the Stack Overflow use case. I found over the last year or so that where I would previously have gone to Stack Overflow, I'm happy to type it into Claude or type it into Copilot, and the answers are, for me, about as good. They're probably better than Stack Overflow, because otherwise I wouldn't have replaced it, in that I don't have to spend quite so long reading

that number of threads; I get a concise, good answer straight away. And so for me, that's super. What I'm not doing with them is writing code. I might ask for an explanation. How does this work in JavaScript? How does this work in Playwright? And it'll tell me, and then, okay, I'm still gonna write the code myself though.

William Vincent (08:29.65)
So not even autocomplete, tabbing autocomplete, no.

Carlton Gibson (08:31.661)
No, God no. What do I use for autocompletion? I still use what they call LSP-based autocompletion, which is deterministic and correct and perfect for me, in fact. The next-line suggestion stuff that comes up, I don't want that. I literally don't want that. As well, here's something you've got to remember:

Marlene (08:36.449)
Yeah.

Carlton Gibson (09:00.933)
I have snippets for every code construction. So if I'm writing a loop, I have a snippet for the loop with placeholders that I tab between, and things like that. I'm never typing "for variable name in loop name"; I'm tabbing between the placeholders very rapidly. But that's pre-LLM. It's totally static. So the use case for autocomplete just doesn't come up for me,

Marlene (09:16.226)
Yeah.

Carlton Gibson (09:30.477)
in my usage.

William Vincent (09:33.138)
Yeah. Well, no, no, you go, Tim.

Tim Allen (09:34.152)
Yeah, I turned off the... sorry, go ahead. I was just gonna say, yeah, I found that the LLM autocomplete really gets in the way. You know, for years people have talked about getting into flow as a programmer and hitting that flow state. And I found that when the LLM sort of interjects itself, it's jarring, gets in the way.

It gets in the way of my thought process. So I like having the chat agent on the side, but for the autocomplete or the think ahead type stuff it tries to do, I just find that really impedes my progress and breaks chain of thought.

William Vincent (10:11.132)
Yeah. I've been thinking a lot about, you know, the hype and where does this actually sit. I mean, I think we're all sort of coalescing around, and this is one thing that came out of DjangoCon US, that it's a tool. It's a tool that has benefits. I don't think anyone feels like we're immediately going to be replaced. And it's imperfect for sure. But also I was thinking, you know, the old days of the web: Stack Overflow and hunting around on Google for a blog post and just being stuck, stuck.

It's not that that was perfect either. So, you know, I guess like the three of you, I use it a lot for discovery, for research. I also find the autocomplete aggressive and also just annoying. But I'm using, yeah, the chat interface all the time for research or to like hunt down a bug. And increasingly I'm using agents on the command line, but more for boilerplate greenfield stuff. And still I haven't quite.

I still don't fully sit back, and even when I sit back, it's almost like I sort of miss the days of fully doing compiled code. But when you sit there and it's just worrying away, what is that feeling? I just get existential waiting three seconds, 20 seconds, and then I have to evaluate. I'd like to be a little more leaning in. So for myself, it's much more of a research tool. It's a great research tool. But the fully agentic thing, spinning up six parallel agents and

Marlene (11:23.628)
Yeah.

William Vincent (11:38.555)
you know, having a coffee and coming back and it's done. I've never quite had that.

Carlton Gibson (11:43.853)
Sorry, I just have to ask, is that not the management thing that Tim was just saying you wanted to avoid?

Marlene (11:50.946)
when you're coming up to management incompetence levels.

William Vincent (11:54.885)
Yeah. Yeah. No, I'm far past my incompetence levels, but yeah. Well, I think, I mean, the thing I've come back to is, again, we're all in the Python space. So if you're doing Python stuff and Django stuff, you get almost as good a result as you're going to get, because Python and Django are so mature, so well documented, right? If you're using a smaller programming language or framework, it doesn't work anywhere near as well. So this is the tip of the spear, what we're all doing. And even then,

Marlene (12:20.44)
Yeah, 100%.

William Vincent (12:24.742)
You know, on the one hand, I'm like, ah, I still have to code, but it's also this amazing research and discovery thing, and finding bugs. I mean, I remember spending hours, if not days, on some little thing where I couldn't find the right post. And now the LLM, even if it doesn't get it right right away, I can be like, try harder, and it'll try harder and find stuff, right? It just feels like a speeding up of that, but not a total replacement of code. And yeah, and I still

I still like elegant code. These things don't write elegant code. Even if you have rules or guidelines, it's incredibly verbose, and it just vomits up code rather than concise code. I've even tried, you know, write it in the style of Carlton Gibson, go crawl his GitHub repos. And you know, that helps a little bit, but it's still not a total replacement for you, Carlton.

Marlene (13:12.686)
Yeah.

Carlton Gibson (13:15.245)
What, no tests, and sort of only working on the happy path?

William Vincent (13:19.439)
Cowboy coding, cowboy coding, right? You put your cowboy hat on when you code Carlton.

Marlene (13:20.192)
One, two, three.

Carlton Gibson (13:24.151)
Go on, Marlene, I can see you champing at the bit there, go on.

Marlene (13:24.216)
You know, I wanted to say that it's so interesting that all three of you don't like the autocompletes, because one of the things, you know, the Copilot and VS Code teams have been working on is just growing and meeting the demand for AI programming with Copilot and VS Code. And so,

I've been kind of looking at the comments on VS Code's social media and things like that. And one of the number one complaints people have on social media is that they just feel like the autocomplete is not fast enough. A lot of people compare it to Cursor and say that they think that, you know, in VS Code it's too slow or it's not

doing enough, and they just want to tap, tap, tap, sort of, is the vibe. And so it's so interesting to me that actually all three of you think that the autocomplete is usually too aggressive, or there's too much going on there. And I think it also shows there's a bit of a discrepancy in terms of people's comfort levels with this, in that I think certain groups are pushing the boundaries and want to be

really at the edge there. And even, Will, you mentioned not wanting to leave agents running. And I was talking to Armin, who's the creator of Flask, and his primary thing, he does like 90% this way: he lets Claude Code just write all over the code. He just wants to give it instructions and let it do its own thing and come back. And he's a great programmer. And then on the other hand, you know, I also, for example, even when I was

William Vincent (14:57.861)
90%, right?

Marlene (15:16.378)
first getting started with autocomplete, I felt like it was too aggressive. But over time, I think I just started to get used to it and that kind of wore off. I'm now using it quite a lot. But yeah, I think that discrepancy there sometimes is so interesting to me, and kind of different. So yeah.

William Vincent (15:38.255)
Well, on Armin, I think just specifically on Armin, he's written some posts on how he uses it. He has a custom something or other for YOLO mode where he just lets it go, you know. So he's spent a lot of time to fine-tune this. And I think that's part of it: not just playing around, but fine-tuning it. Because if you just try to one-shot something, of course it's going to be... I mean, this is the main thing I have when people say it didn't work, whatever tool they're using, Junie, Claude, you know, Copilot. It's like, well, if you just ask it as a simple one-sentence prompt,

Marlene (15:45.57)
Yeah, that's it.

Marlene (15:50.818)
Yes, yes.

Marlene (15:55.235)
Yeah.

Marlene (16:05.027)
Yeah.

William Vincent (16:08.654)
how could you get something good, right? The joke is, you kind of have to use it to write a spec. You'd never write a spec for a fellow human, right, Tim, and your team. But if the machine will do it for you, you write the spec. So I do feel like you need to kind of play with it more to get to a point where you can fully evaluate it. But that said, I still am shocked that people, and there are some like Armin who know how to write code, are fully comfortable and happy and sped up,

Marlene (16:10.487)
Yeah.

Marlene (16:15.234)
Yeah.

Marlene (16:23.394)
Yeah.

William Vincent (16:38.399)
you know, doing that. So yeah, it makes me feel like, what's going on? Yeah. Wait, wait, wait, where's...

Marlene (16:41.262)
Yeah.

What's happening?

Tim Allen (16:46.151)
So much time reviewing code from running a successful project that he's fine doing the LLM code. Because I think this verbosity issue is a pretty big issue. You know, I still prefer chat mode to agent mode. Agent mode feels far too much like autopilot to me, and when I've gone down that rabbit hole, I've ended up with sort of a maelstrom of nonsense a couple of times.

William Vincent (16:53.762)
Maybe he's used to that.

Tim Allen (17:14.971)
And studies have started to show that if you use LLMs too extensively, it does add a lot of technical debt. I recently... so, I haven't owned a car in a dozen years, and I just recently bought one. And the state of software in cars is absolutely miserable.

So the average vehicle... yeah, well, CarPlay is actually an improvement, but the actual code in the individual components of a vehicle: the average vehicle has more than four times as much code as Facebook in it, which is terrifying, because for me, every line of code I write, I consider a liability.

William Vincent (17:33.988)
CarPlay, CarPlay.

Marlene (17:35.374)
CarPlay.

Tim Allen (17:51.752)
Every line of code that I put out is a potential security flaw. So when I purchased my new car, I actually did research on what vehicle I could buy that had the least lines of code. And I ended up with a 2022 Toyota Corolla with only one screen. It doesn't have blinking lights all over the place. It doesn't have sensors all over the place. Because there was an interesting study that came out of Ford. Ford...

has 150 different software vendors that write the software together for their vehicles that they try to put together into one working package. And of course, car repairs for software are just as common as car repairs for anything that's actually wrong with the engine now.

So, you know, I use this as an example of verbosity of code and runaway code bases that just become unmaintainable. And I think we're going to see this problem continue to grow over the years. You know, we're starting to see people whose actual job is fixing your LLM-coded mess. Like engineers who are marketing themselves as: I will come in and fix your vibe-coded nonsense.

William Vincent (18:50.786)
Yeah.

Marlene (18:51.47)
Yeah

William Vincent (18:57.572)
And they'll probably use an LLM to do it though. I mean, and again, a friend of the show, Jeff Triplett at RevSys, he's in the Armin Ronacher camp. He's figured out how to make it work. And I think it's a combination of, to your point, Tim, being used to reviewing PRs. So you're kind of in that manager, whatever, higher-level mindset anyways. But it's just a...

Marlene (19:00.238)
They will! They likely will!

Tim Allen (19:01.179)
They very well may.

William Vincent (19:27.543)
I was going to say paradigm shift, but it's a shift to go from just evaluating someone else's thoughts to thinking the thoughts yourself and then writing them. And yeah, I'm trying not to be all calcified about it, right? I've used the chat and the research every day for two years, but the agents... I still play with them and I'll use them for fun projects and it mostly works, but then the debt builds up, or I've lost control of a mental model of what's happening. And, you know, but it's weird, because you can

You know, we anthropomorphize these models, right? You can sort of treat it like a person, right? It'll do something and then you say, are you sure? Or, okay, imagine that I don't trust you. You can do all these sorts of tricks, and it doesn't seem like they should work, but they do kind of work, right? To be like, you know, how sure are you? And they're getting better about being a little less sycophantic, but they're still...

Marlene (20:07.576)
Yeah, tricks too.

Marlene (20:18.648)
Yeah.

William Vincent (20:25.078)
You know, to your point, they're not going to take out code as much, right? They more want to generate stuff for you.

Tim Allen (20:34.063)
Generative AI... we need deletion AI.

Marlene (20:37.216)
We need to, yeah. I mean, I think this is the thing. This is one of the things that I mentioned as well in my talk. I agree completely that the technical debt thing is a thing, and I think more people are struggling with code they have vibe coded and not knowing what to do with it. And

I really do think that's a real issue. My thoughts there, or what has worked for me when I'm using agent mode to help me with coding tasks, is I really use it in a modular way. So I don't have it generate, unless it's something completely greenfield where it's like a prototype or something like that, or, you know, it's just

creating the structure of it or something. Usually what I will do is I'll give it one file and I'll say, this is the issue with this file, I want XYZ done in this file, or I see this bug in this file, here's all of this context. And if you treat it in a modular way, where you know exactly what the goal is for this problem,

and you give it all the context that it needs. I actually think it can be super helpful in those cases when it's a modular case. When you're just generating these larger, bigger projects and just having it go off, I would say those are a little bit harder to control. And that's where I think the technical debt comes in. But I think there's other ways to use it. I primarily use agent mode and I found that for me agent mode is better.

because the model usually has access to the logs and things like that and the whole structure a bit better when it's in agent mode. But I will restrict it and say, even though you have all of this, only write code in this file, or only change this file, things like that. So, giving it restrictions.

William Vincent (22:42.123)
Mmm.

William Vincent (22:46.017)
Do you have that in your rules or do you have to manually put that in each time?

Marlene (22:50.542)
I do sometimes add it to my rules. But usually I will put it in the chat. So usually I will give it the context that it needs. And I found that that works for me because I'm not usually trying to refactor the entire code base, at least not in the problems that I'm working on.

For me, it's usually adding a feature or debugging an issue one at a time. And so that's where I would use it. Yeah.

Tim Allen (23:27.559)
I wonder how much of a productivity boost this really is, or if it's just a different way of attacking problems. You know, I started to think about what are the biggest productivity boosts. So, I started writing code when I was six years old, so I've been writing code for a long time. I was trying to think of what are the biggest productivity boosts I've seen as a developer over, you know, over four decades. And, you know, Stack Overflow sort of coming about

Marlene (23:31.084)
Hmm.

Tim Allen (23:53.454)
in the 2000s and 2010s was definitely a big boost to my productivity. Having sort of the amalgamation, the coming together, of all the development knowledge in a single source that I could rely on. But I think the biggest productivity boost I ever got as a developer was Windows 98. Yes, Windows 98 was the first time I had an operating system that had support for a second monitor.

Marlene (24:12.415)
Mm.

Marlene (24:22.444)
You okay?

William Vincent (24:22.685)
Uh-huh.

Tim Allen (24:23.079)
That was truly a game changer for my development productivity. Let me tell you, having a browser and Emacs up at the same time, I'm dating myself, but that was a game changer.

Marlene (24:34.454)
Yeah

Carlton Gibson (24:36.333)
Yeah. The thought that comes up with Will's example of getting stuck on something for hours: I think the thing that these tools are amazing for is, you know, I remember when I was learning, I got stuck on a missing semicolon or something, and I'm at home by myself, and I had nobody to help me, and I couldn't work it out. Eventually I worked it out, but it was hours of tearing my hair out about this stupid semicolon. Whereas one of...

one of these machines would have spotted that and told me, and gone, look, this isn't working. Why isn't this working? It's not working because... and that would have been an amazing boost. I think, you know, again, Stack Overflow: a lot of people learned by copying and pasting from Stack Overflow. And there was all this kind of programmer superiority about, you mustn't copy and paste from Stack Overflow, but a whole generation of people learned by copying and pasting from Stack Overflow. Well, nothing wrong with that. That's brilliant. More power to them. And I think the same

holds for LLMs. I think if you're a junior by yourself and you haven't got support there, a mentor there who can brainstorm it with you, to have the LLM go, here's the problem, that's going to let you go forward faster. I think for me, that use case is very interesting. We can open up an awful lot of doors with these, I don't know, what are they, magic lanterns that speak, you know, speak kind of the truth.

William Vincent (25:58.24)
Well, it may be, Carlton, that you and I are the self-taught, non-formally trained ones in this discussion. Yeah, like days, days on a semicolon, and maybe newer IDEs would fix it, but that feeling of just helplessness, that's gone. I think at the...

Marlene (25:58.446)
Kind of. Partly the truth, yeah.

Carlton Gibson (26:17.153)
You know, I still had this with JavaScript today. I'm like, why aren't my dict... why aren't the keys for my object coming out right? Because they're literal keys and you need an evaluated key. But, brilliant. Thank you, LLM, for solving my problem. It's not just about being a beginner.

Marlene (26:17.902)
Yeah, absolutely.

Marlene (26:30.36)
Yeah.

William Vincent (26:34.537)
Yeah, I did.

Marlene (26:35.714)
Absolutely. Yeah.

Tim Allen (26:37.861)
So much of what we learn as programmers, as software engineers, comes during those three hours you are hunting for that semicolon.

Marlene (26:45.614)
Yeah, yeah.

Carlton Gibson (26:46.152)
yes, yes, yes.

Tim Allen (26:47.173)
And how much... I mean, that feeling I used to get when I was writing, you know, Turbo Pascal. There was no internet, and I was there with my own devices and a book on Borland Turbo Pascal. When I found that semicolon after three hours, I'd learned a lot more about my code base. I'd learned a lot more general knowledge from going through the book and learning about, you know, how you should write code and how to avoid these problems in the future. I'd learned a lot about debugging, and the feeling of accomplishment

Marlene (27:04.334)
Yeah.

Tim Allen (27:17.107)
you get after struggling like that. Sheena O'Connell said something brilliant at last year's DjangoCon when she was teaching people during the tutorial, during sprints. And that was: never steal someone's struggle from them. Because the best way that people learn, and how they come to love programming, is that feeling of accomplishment after the struggle. That journey is an important part of becoming a senior software engineer.

Marlene (27:31.576)
Yeah.

Tim Allen (27:44.579)
And I think that part of that is being taken away from a generation of people, and it's a part that I truly love. Love and hate.

Carlton Gibson (27:51.566)
Yeah. I just want to jump in there. Because the flip side of that as well is that the real reason why I don't use the agents more is... you know, I can say it's quality of output and all these things, but the real reason why is I'm worried about losing my strength. I'm worried about going to the gym and getting the machine to do the weights for me. And, you know, what happens in six months' time when I haven't really been coding the same way? Have I still got the same strength? So that's the flip side of...

Marlene (27:52.462)
Yeah, go ahead.

Carlton Gibson (28:20.673)
The hard learning is by doing the reps every day coding. You keep your skills sharp.

Tim Allen (28:27.143)
And what happens in six years time when our entire industry has lost those muscles? Are we in a situation where the entire developer industry is something like trying to find COBOL programmers now?

Marlene (28:27.395)
Yeah.

Marlene (28:33.037)
Yeah.

Marlene (28:39.118)
Yeah, I actually would agree with that. And if I'm being honest, I think that's my primary concern about LLMs and AI. My primary concern is for junior developers and building that muscle. And it's so interesting, because I was reading something recently where, yeah, there have been some studies done about how

LLMs tend to select for, or are biased towards, senior engineers. Even, we talked about Armin earlier, the intuition needed to review what the agent has done, etc., to know what to change and what not to change, is something that is biased towards senior engineers that already have the experience. And so if junior engineers don't have

that struggle that Tim is talking about to be able to understand their code. How do they actually get to the point where they know how to interact even there with an LLM in a way that's productive? I think that's actually an industry level concern that I think we need to be worried about as an industry. How do we actually solve that problem? I do think that

there needs to be some spaces where there's a joint collaboration. So I think LLMs can be good, like Carlton said, for potentially personalizing education and helping to make it easier to ask questions. So maybe if a junior developer doesn't know how to solve a specific task, maybe you give them room to struggle. So I think we need to be

thinking, rethinking, how do we use these tools to create environments where the junior engineers can still grow, can still learn, but we are preparing them for the future that is inevitably coming with LLMs in general. So I would say here, on this point, I do agree. It's something I'm concerned about, and I think we need to be thinking about this. Yeah.

Carlton Gibson (31:01.913)
I just had an idea as you were talking there. I think if the chat UI had a kind of timer where you had to spend 10 minutes writing the prompt, right? So if you've spent 10 actual minutes writing the prompt, then you can press enter and send it to the LLM. But if you haven't done that, then you have to spend the 10 minutes, because that makes you think it through. It's like a rubber duck.

Marlene (31:09.464)
Yeah.

Marlene (31:17.422)
Bye!

Marlene (31:22.432)
Yeah, 100%. That's an idea.

Tim Allen (31:23.344)
Alright.

William Vincent (31:25.287)
Well, one thing I think about.

Carlton Gibson (31:26.035)
I might write that as a demo.

Marlene (31:28.642)
Yeah, we need to do that.

William Vincent (31:31.581)
I mean, on the education point though, we're replacing experts with ghosts, right? These LLMs are kind of ghosts: where did the stuff come from? And not everyone has access to ping Carlton when they're stuck on a bug. But it is, I don't know if it's magic or what it is. Like, I was just watching, Andrej Karpathy just had a long interview on, was it Dwarkesh Patel's podcast, and

some really interesting takeaways. And he was saying something I've thought about. He's like, the internet is such garbage. It's not, you know, New York Times articles. It's just complete garbage. It doesn't make any sense how something can come out of this garbage. And so on the one hand, what's left there that has any good meaning? So Stack Overflow, let's take that as an example, right? So,

You ask a question and then humans come in and give responses and vote up or down. There's no, and it's not perfect, but it's pretty good. Like there's nothing like that with these LLMs and yet somehow they sorta kind of do it. But like you can't trust it. And also why would any of us go on Stack Overflow now and try to get credibility or write an issue, do a good response, write a book, you know, like the new Django survey will be out by the time this comes out, you know.

People aren't reading books. They're not even reading blog posts. You know, like Adam Johnson: if you just read Adam Johnson's blog posts, you would be like a senior dev, but, you know, he's just doing it because he wants to, right? Like there's no...

Tim Allen (32:59.249)
What does it mean for us?

Marlene (33:09.954)
Yeah.

Tim Allen (33:10.279)
Your point on Stack Overflow is incredible, because we might be at peak LLM coding time right now, because the source the top models trained on is now a ghost town. Like, Stack Overflow is dead. The number... yeah, so I mean, if you look at the number of questions tagged Django or tagged Python

William Vincent (33:19.301)
Well, right.

William Vincent (33:25.113)
Yeah, it's probably three years ago, the training set.

Marlene (33:28.152)
Yeah, it's so sad.

Tim Allen (33:32.73)
over the past five years, it's dropped off a cliff and there is nobody on Stack Overflow. I used to get 30 points a day for my various responses out there. I now get 30 points a month or something like that for upvotes to my old responses. What happens if things change and there's no central model for the LLMs to steal their training from?

You know, looking at it right now, all of these LLMs are being operated at a big loss. OpenAI made $4 billion last year and spent $9 billion to make that $4 billion. This is not sustainable, and it's not going to last forever.

William Vincent (34:06.299)
Here we go.

Tim Allen (34:17.519)
And you know, I referenced AI a bit in my talk. A lot of my senior thesis, written in the mid-1990s, was about the dream of AI. I am not a hater, but I've been around enough hype cycles, and I've seen what these big tech companies have done to the internet and done to search engines, enshittifying their own products, that I truly think, you know, the same

patterns that social media and big tech have brought us over the past few decades are sort of being repeated on steroids, and they have not yet enshittified the LLM. What happens when they want to keep you on the LLM longer, like Google has, and start intentionally giving you worse results? Some of the worst companies and the worst people in the world are the ones pulling these levers while genuflecting to the current US administration.

This is worrisome to me. I can see the same exact pattern of what has happened to Google, to search engines, to social media, to them lobbying and not having any concerns for our children's mental health or futures, being amped up on steroids. And I see the same sort of people. I see Sam Altman and I see Mark Zuckerberg. I don't see much of a difference in the morals

Carlton Gibson (35:09.805)
Yeah. Yeah.

Tim Allen (35:36.122)
or the discussions coming out of them. I see move fast and break things. I see little concern for the mental health of people, for humanity or the future, and that worries me. So I think we're currently at the peak of performance we're gonna see, because the companies are gonna start intentionally enshittifying them to put profit over pulling people in. Because that's a pattern we've seen over and over and over over the past couple of decades.

William Vincent (36:03.94)
Well, sorry, Marlene, you go.

Marlene (36:07.374)
Yeah, I mean, I definitely agree to the extent that I do think that there is... I mean, I think currently the way the world is structured is to maximize profit in a lot of ways, and I don't think, you know, LLMs are an exception to that. I do think LLMs are not explored enough in terms of the good potential that could come out of them.

Do I think there are loads of bad things that are going to come out of them? Yes, I think we are seeing that already now. I'm not sure if you saw, Sam Altman shared some updates recently that are concerning, where I think, as well, the intention is to continue to keep people using ChatGPT, for example. And you can look for that online, I won't talk about it. But I think that

I also think that technology is always going to be this sort of double-edged sword where you have very good things and very bad things coming out of it. You know, when I was at DjangoCon, I talked about how we haven't really explored, to the extent that we could, these open source models, these small language models that have huge potential to transform education.

I talked about the Django Girls curriculum and how we could do something like Django Girls offline, because there are so many parts of Africa, for example, that are being left behind, and there's this digital divide that is consistently growing, and no one is doing anything about it and it's not changing. People just don't have the resources there to bridge that gap. And of the things that have helped to close this gap over time, technology is a huge part of that.

You know, I grew up in Zimbabwe, and a big part of why I learned how to code in Python is because I had access to an internet cafe where I could go and connect with people online who were writing Django code and who could teach me stuff. And that's because of technology and those advances. Yes, they absolutely have huge ramifications, negative ramifications, but

Marlene (38:30.286)
Also, I think about the parts of Zimbabwe, for example, that don't have access to a stable internet connection, or where internet is super expensive, and thinking to myself, can I imagine a world where we bring in small language models that can teach people when they don't have an internet connection? What that means, if someone now has access to all this information, is huge. My...

personal perspective on this is there are two parts that are going to continue to grow: the very negative stuff and then also the potentially very positive. So that's my perspective. I would agree to an extent, but I also don't want to forget the other side as well.

Carlton Gibson (39:15.543)
Yeah, that's a really good way of looking at it, Marlene, I think, really positive. The thought that came up... so I saw a similar article to you, Tim, about the economics of all this, and the latest, or recent, figures: they're spending three dollars to get one. You know, three dollars of spending to make one dollar in revenue. Well, that's not sustainable, and it seems like nobody's going to pay three times as much

for the same service, because you can run a local model. You know, I've got a five-year-old laptop that runs a model with which I can do 90% of what I do with the Claudes and the ChatGPTs or the Copilots of the world. It seems those local models might be, you know, the way forward. If you can... you know, there's an Internet-in-a-Box project, which I think is...

you download Wikipedia, you download your PyPI mirror, you've got everything you need to run workshops without an internet connection. If they had a local LLM on them as well, then you've got the complete set. I just wonder what you think about local models and, you know, how that affects the economics of these frontier labs, the Anthropics and the OpenAIs of the world.

Tim Allen (40:36.743)
I think, you know, especially when we look at things like agent skills and MCPs right now, people are spending a lot of time making these attempts to plug these sort of answer-anything god machines into our local systems. And I remember in The Hitchhiker's Guide to the Galaxy, they build the computer Deep Thought to find the answer to life, the universe and everything, the ultimate question of the universe. And it comes back with the answer being 42.

But what is really there in that? And I think you're really onto something, because it's been shown, you know, studies have started to come out that show that the smaller the scope, the higher the accuracy and utility of a language model. And, you know, I could be completely wrong, but I think the far more useful future that you're

speaking about here could be small, focused, tightly scoped language models. You know, not just RAG, but the entire language model being contained to a very specific scope. I mean, imagine if our coding LLMs didn't have billions of unused parameter pathways for when someone wanted to make a cat picture.

but was just focused really on the Stack Overflow training set. It would be a lot more efficient. It would be a lot more environmentally friendly to run. It could run locally. And maybe dividing these things up, having 1,000 different models instead of one big God machine is the way to go, because then it also can be run locally. Maybe the race shouldn't be to a trillion parameters. Maybe it should be to a million parameters.

William Vincent (41:47.988)
Mm.

Tim Allen (42:15.525)
Yeah, it's going to be interesting to see.

William Vincent (42:18.153)
I think those are called focal models. That's, you know, small, focused. That's the term I've seen bandied around. But, Carlton, I do want to, can we mention the comment you shared with me yesterday about the art heist at the Louvre in the context of this? Well, so, yeah, Carlton shared, I guess, a meme or something going around saying, you know, if the thieves had said they were training an LLM, it would have been fine to

Carlton Gibson (42:33.781)
Yeah, you can, yeah, I don't think it was controversial.

William Vincent (42:47.395)
take off with the jewels. Because I keep coming back to the underlying there-there. Like, if the technology keeps improving, how could it have good responses if the underlying content is garbage? And all the economic underpinnings for creating good content, on Stack Overflow, in a book, in a blog post, even something as prosaic as, I'm going hiking, I want to buy a backpack. In the old web, you know, you could

try to rank for top backpacks and do a ton of research and have affiliate links, and that would justify hours and hours of time to do well. Well, why would you do that now? Right? Especially as, like Tim, you were mentioning, all these LLMs are now going to be adding in, you know, e-commerce ads, right? I do sort of wonder if this is the glory days, if this is like Google in 2000 or something, right? Cause they're actually trying to give you the right response.

But because of the underlying economics, we know that they're going to be doing the things you alluded to, Marlene, like they're going to be turning it, they're going to be enshittifying it. And it's going to be about engagement. 1000%, right? Like there's that other meme, the connector cord, the power adapter plugged into itself. That does seem to be, yeah, accurate. But Marlene, you seem like you had something to add.

Marlene (43:54.904)
Yeah.

Marlene (44:04.856)
Yeah.

Marlene (44:13.846)
It's so tricky, because I'm gonna play devil's advocate, I guess, for a little bit. I'm not sure. I think it's hard, because at the same time, there are definitely opposing views in the space. On one hand, as well, we need to kind of be thinking about the future and how do we get to

how we get to the best possible technology we can create for the future. And I think the issue is that the frontier labs are at least framing themselves as having the solution to get to that future. So a lot of people have talked about AGI. That's a big thing that people are hoping we get to. And so,

you know, when we look at OpenAI, for example, they were the pioneers with ChatGPT. They've been the ones that have been pushing the frontier. You know, Anthropic has now come on the scene and is also pushing the frontier there with LLMs. And we've seen that scaling these LLMs has helped to an extent. And if the people who have the most knowledge at the moment, and we're assuming the people that

have the most knowledge about this space are working for these frontier labs. Do they not kind of owe it to us to kind of explore to the full extent how good these LLMs can get so that potentially maybe the LLMs could generate really good results on their own? And I know there's a lot of talk right now on.

reinforcement learning and having these LLMs learn by themselves and improve themselves over time. I don't know if that's a practical future. Do we think that this is a practical future, that that's something that is actually going to happen? Or do we think this is all going to be driven by economic incentives and there's no incentive almost at all to reach this kind of AGI future is my question.

Tim Allen (46:36.249)
I referenced in my DjangoCon talk this year the one I gave two years ago, which blew up beyond my expectations on YouTube and got, I guess it struck a chord, it got a lot of views, where I was imploring people not to buy into the AI hype. Some people took that to think that I was a technology hater or something like that. It's quite the opposite. The thesis of that talk was meant to be that

Marlene (46:40.461)
Hmm.

Tim Allen (47:02.907)
This insane hype cycle, which is the biggest technology hype cycle I've seen in my over four decades in tech, is actively preventing us from finding how to use these algorithms to improve the human condition. Because I do think there is a there there. There's an undeniable something there in these algorithmic advances we've made over the past few decades. We see it in better prediction models and tracking hurricanes and stuff like that.

But trying to make these into some kind of magic wand, I think, is actively dangerous. You know, I've watched OpenAI go from being supposedly a nonprofit that was supposed to improve the future of humanity to now, you know, creating interactive sex bots, which I think is what Marlene was referring to earlier, when that came out last week. It's like, how do you go through that?

Marlene (47:51.15)
That's true.

Marlene (47:55.552)
I keep it PG on the Django chat.

William Vincent (47:57.457)
Yeah, well, Tim, to be fair, if you read, like, Empire of AI and other stuff, they never really meant it. They just couldn't, they didn't have the salaries to compete with Google. And so what better way than to say we're an academic lab? So I don't think they ever meant it. I mean, there are internal chats between Musk and all the rest saying, well, as soon as we hit scale, we can just discard this.

Tim Allen (47:58.705)
Bye.

Marlene (48:09.42)
the different myth!

Tim Allen (48:21.169)
I knew they never meant it. I knew Elon Musk. I knew Elon Musk during college. He's never been a straight shooter. Yes.

Marlene (48:22.476)
Wow, my gosh.

William Vincent (48:26.77)
Yeah, yeah, yeah, that's, we've had the discussion, yeah.

Carlton Gibson (48:30.527)
Since you mentioned the interactive sex bots, or whatever it is, I saw a good comment about that the other day, which was: if I really thought I was 18 months away from AGI, I wouldn't be pivoting to interactive sex bots. Right? I think that's the answer to Marlene's question: they don't think that they're anywhere close.

Marlene (48:36.323)
Yeah

Marlene (48:46.52)
Yeah.

Tim Allen (48:48.775)
Yeah.

The hidden truth is there.

Marlene (48:51.576)
Close.

William Vincent (48:54.344)
If I could switch gears slightly, because we're coming up a little bit on time. I did want to ask a question around how Django can be, if not an AI-first framework, at least not miss out on this wave that FastAPI is riding. Carlton and I have discussed this. I'm curious, Tim and Marlene, if either of you have thoughts on this, right? I mean, FastAPI is completely ascendant for a number of reasons. And how does Django latch onto that and not be left behind?

Marlene (49:24.546)
Well, I think, in my opinion... I know, that's a great question. I mean, in my opinion, I do think that as a community, the Django community, we just need to be, I think, more open to AI. Do I think AI has some very toxic things potentially associated with it? Yes, absolutely.

William Vincent (49:27.956)
It's a big question.

Marlene (49:50.906)
But at the same time, I think there's lots of good that AI adds. And so, you know, I mentioned that a lot of Django, just generally as it's structured, is fantastic for doing modular programming, for example, and for helping people who are vibe coding applications. Django's a potentially really good framework for that.

And I think doing things to actively interact with that AI community is something that I think can grow the Django ecosystem in terms of AI. So I mentioned creating a potential agents.md file that people can go ahead and put in their code and have.

Copilot or whatever, Junie, whichever assistant they want to use, be able to create a Django app for them, but following the guidelines that this agents.md file creates. So really creating these centralized resources that we're also using to kind of steward where we want the industry to be going. And even as Django developers,

I think these conversations we're having right now are really helpful. You know, helping Django developers have some guidance in terms of how they should use AI, or how we think we should be approaching it as a community. And, I forget, why am I forgetting his name? Corey is fantastic and he has a really good video he made on vibe coding with Django that I think has some fantastic principles

on approaching it in terms of modularity and things like that. So more resources about AI and Django is what I would personally love to see. Yeah.

Tim Allen (51:51.154)
So Wagtail Space was about a week and a half ago, on October 9th and 10th, and the videos are now available. And on the second day, Sage Abdullah and Tom Usher, two of the core team, gave a really good talk about how Wagtail is going to handle AI in the future. And I really like the path that Wagtail is going down. So many of the corporate entities out there

are forcing AI into all of their products, forcing it down our throats, raising prices after initially giving it for free. We've seen it over and over again with Microsoft putting Copilot everywhere. There are like 18,000 different versions of Copilot coming from every angle. No offense, Marlene, I know it's just part of the business strategy.

Marlene (52:33.934)
That's fair. That's true, that's true.

Tim Allen (52:39.047)
Salesforce is doing the same. You know, all of the big tech companies are making it sort of mandatory. They're not giving people an opt in option. Wagtail is taking the opposite approach. Wagtail has made a commitment that there will never be any AI forced into the Wagtail core.

Tim Allen (52:55.323)
but the Wagtail team is making a secondary package, for anybody who is interested and wants to opt into it, called Wagtail AI. So you can pip install Wagtail if you just want the core Wagtail, AI-free, but if you want some of the AI features, you can pip install Wagtail AI.

So the talk given during Wagtail Space, I think it's available on YouTube now, is called AI and Wagtail: Responsible Innovation for Content Editors. And a couple of the commitments Wagtail has made are, again, that it won't ever be forced into core, but that this is a package, blessed under the Wagtail organization's umbrella, that is being developed secondarily to Wagtail for people who want those AI features.

It also said, we'll provide a clear picture of our AI vision, have it publicly stated, and we will always avoid any kind of vendor lock-in. The talk also gives sort of a practical picture of what's available today in Wagtail AI. So it's something that people can look at right now. I think that's a pretty good model. So, you know, I consider myself incredibly lucky. We've had

Django code in production for a decade now. I've also been sober for a decade now. So it's caused me to sort of reflect on how lucky I am to be part of these amazing communities, my recovery community, my work colleagues, and then the Russian doll of Wagtail within Django, within Python. You know, I've met some of the most amazing people of my life through these communities. And I think that...

having the right people in place here to have this sort of moral and ethical conversation about the right ways to do it, to allow people to opt in without forcing it down our throats.

Tim Allen (54:41.637)
I think Wagtail is really sort of leading the way here. And being sort of the smaller Russian doll within Django and Python, on a slightly smaller scale than Django, or a much smaller scale than Python, maybe Wagtail is leading the way in something we can look to, as Django, for a way to sort of address it head on,

but without being the first ones to dip our toe in the pool, it might be somewhat easier to do when you're in the content management space than a bigger web framework space. But I think there's some really good ideas there.

William Vincent (55:18.284)
I'll just add, my talk was related to this idea. I think there's a sense that Django and FastAPI, just on the underlying technologies, is an either-or, when practically speaking it's both. You can and probably should use both. And I see a whole new generation of Python developers starting with machine learning and pure Python, and then FastAPI comes along and it's an endpoint, and they think that's the end of the web.

And so I see a gap there, an education gap around, you know, when does Django slide in? Cause people think, well, do I even need Django? And part of that is just if you read Reddit or Hacker News, you only see posts from OpenAI or Anthropic. You see massive, massive scale as opposed to, nope, me, a couple of people trying to incorporate LLMs into a workflow. And then Django's, Django's there, but I think we in the Django community need to tell that story a little bit better.

One of the things in my talk was showing how you can hook Django into a local LLM and have a chatbot, because it works, and it works pretty fast actually too, so we're not running our own frontier models here. So yeah, FastAPI has its uses for sure, but just to clarify things, I'd love to give a talk next year on kind of choose your own web framework, like choose your own adventure, and just break down: Flask is great here, FastAPI is great here,

Marlene (56:38.915)
Yeah.

William Vincent (56:43.503)
Django is great here, and just sort of, I guess, redefine what those boundaries are. Because I think it's still a little fuzzy, especially to a newcomer. You know, it used to be Flask versus Django, and now it's really FastAPI and, like, do I even need Django? You know, so that's something.
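
(For a sense of the local-LLM chatbot hookup Will describes above, here is a minimal sketch, not his actual demo code. It assumes an Ollama server running locally on its default port and a model named llama3; swap the URL and model for whatever local runtime you actually use.)

    # views.py -- a tiny Django chatbot endpoint backed by a local LLM.
    import json
    from urllib.request import Request, urlopen

    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt
    from django.views.decorators.http import require_POST

    OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint

    @csrf_exempt  # fine for a local demo; handle CSRF properly in real apps
    @require_POST
    def chat(request):
        prompt = json.loads(request.body).get("prompt", "")
        payload = json.dumps(
            {"model": "llama3", "prompt": prompt, "stream": False}
        ).encode()
        req = Request(
            OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urlopen(req) as resp:
            data = json.load(resp)
        # With stream=False, Ollama returns the generated text under "response"
        return JsonResponse({"reply": data.get("response", "")})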

Tim Allen (56:58.459)
Something I've really enjoyed over the past couple of months on a project I'm working on is Django Ninja. It gives you sort of the same feeling as FastAPI, but I've got those nice, comfortable batteries that I'm familiar with from Django and the Django ORM. So I've got all my webby stuff, but then it gives me sort of that FastAPI, you know, quick,

William Vincent (57:04.825)
Yeah.

Tim Allen (57:20.013)
easy API access that's ultimately flexible. So Django Ninja has been one of the tools I've really enjoyed working with in the LLM space, tying it into Django.
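
(A quick sketch of what Tim is describing, with a made-up Article model purely for illustration: the endpoint declaration feels like FastAPI, while the query underneath is the ordinary Django ORM.)

    # api.py -- a Django Ninja endpoint leaning on the Django ORM.
    from typing import List

    from ninja import NinjaAPI, Schema

    from myapp.models import Article  # hypothetical app and model

    api = NinjaAPI()

    class ArticleOut(Schema):
        id: int
        title: str
        published: bool

    @api.get("/articles", response=List[ArticleOut])
    def list_articles(request):
        # The queryset is serialized through the schema, FastAPI-style,
        # but the data access is plain Django ORM.
        return Article.objects.filter(published=True)

    # In urls.py: path("api/", api.urls)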

William Vincent (57:31.599)
And Carlton, you have some skunkworks projects around APIs that maybe shall remain hidden for now.

Marlene (57:31.832)
Yeah, that's what I'm saying.

Carlton Gibson (57:38.488)
Yeah, no, no, no. So I'm targeting end of year to have the proof of concept out for a take on serialization, a new, modern serialization that is sort of ORM-friendly. Because the bottom line for me is that REST framework serializers are kind of the last generation, and they're a lot slower than cattrs or Pydantic or msgspec. And, you know, there's a whole raft of these newer ones which are just much, much quicker. But

they don't know anything about the Django ORM. And I think there's a nice way where we can handle restricted field queries, handle automatic prefetch_related, and have modern serialization and the speed benefits that come from that. I'm working on that, and I'm hoping for end of year, around that kind of period, to have something to show.
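
(Carlton's proof of concept isn't public, so this is emphatically not his code, just a sketch of the general idea he describes: declare the output shape once with a fast serialization library such as msgspec, and let that same declaration drive a restricted-field ORM query. The model and field names are hypothetical.)

    # Not the actual project -- an illustration of schema-driven serialization.
    import msgspec

    from myapp.models import Article  # hypothetical model for illustration

    class ArticleSchema(msgspec.Struct):
        id: int
        title: str
        published: bool

    def serialize_articles(queryset):
        fields = ArticleSchema.__struct_fields__      # ("id", "title", "published")
        rows = queryset.values(*fields)               # restricted field query
        objs = [ArticleSchema(**row) for row in rows]
        return msgspec.json.encode(objs)              # fast JSON bytes

    payload = serialize_articles(Article.objects.filter(published=True))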

William Vincent (58:31.885)
Okay, well as we end the show, we've started a habit of referencing a project and, if you have one, a book recommendation. Maybe I'll go first; we'll put a link to it. Marlene, you have a Django Girls offline repo that is excellent and fun to explore. Maybe it's the start of something more, of having offline resources for places that don't have internet access. So that's the one I will call out, and we'll link to it.

Marlene (58:57.944)
Thank you. That's awesome.

William Vincent (59:00.665)
Who's next?

Marlene (59:09.006)
Okay, who's gonna go next? I was gonna say the Django Girls Offline one. Okay, because I can't say what I did. Okay, okay, okay.

William Vincent (59:16.155)
Yeah, you can't say one that you did. Well, put up your DjangoCon talk repo; you have the great tokenizer and the MCP. Your demos were so good.

Marlene (59:27.95)
Yeah, exactly, exactly. Okay, listen, I'm not going to recommend one of mine then. Maybe Tim can go first so that I can think through which one to recommend.

Tim Allen (59:41.778)
So, I honestly don't read many technology books, but here are a couple that I've read. One of my favorite recent reads was by Carl Sagan's daughter, Sasha Sagan. It's called For Small Creatures Such As We.

And it's a love letter to the universe on behalf of an atheist who has recently had a child, and how she seeks to create meaning in the universe through rituals without falling back on religion. It's just an absolutely wonderful book. I needed something optimistic and uplifting given the state of the world, and it definitely hit the spot for that. So I will definitely recommend Sasha Sagan's For Small Creatures Such As We.

And I also wanted to mention two projects, Wagtail Nest and Django Commons, which I think people are probably pretty familiar with. I've moved some of my projects over to Django Commons and Wagtail Nest and found them to be a huge help with avoiding developer burnout. Just having other people to pick up the slack during those months when you're doing something like, I don't know, planning a wedding and now having a 13-year-old in your life. It's been very handy. So if you're looking for

a group to get together with to avoid that single-developer-on-a-project burnout, both Wagtail Nest for the Wagtail community and Django Commons for Django projects have been absolutely wonderful.

Carlton Gibson (01:01:11.789)
I'm definitely going to check out that book that you talked about. That sounds amazing to me. I wanted to mention a project, Django Keel, by Sanyam Khurana, who goes under the tagline on GitHub of curious learner. It's another kind of production-ready template for your Django projects. Sanyam is a top-notch,

really quality developer. I'm really keen to see what his opinions are and what options he's chosen. It looks like it goes into a lot more depth than most of your starter projects. He's got things about observability. He's got things about async versus background tasks. He's got a front-end story, a back-end story. He's got proper docs on Read the Docs. It looks really good. So I'm excited by that. That's Django Keel.

Tim Allen (01:01:57.16)
Sanyam is great. I also know that he is looking for work. So if anybody's looking to hire a top-notch developer, seek out Sanyam on LinkedIn. He's absolutely wonderful.

Carlton Gibson (01:02:06.687)
Yeah, no, absolutely, I'll double that. I'll put my little tick on that as well. I didn't know he was looking for work, but yeah, just hire him if you're in the market. It's a great opportunity. For a book, I wanted to mention the Web Accessibility Cookbook, which is by Manuel Matuzović, and it's just a kind of how-to.

It goes through everything you need to know about your website so you can create accessible HTML from the get-go rather than trying to bolt it on afterwards. Really, really solid learning in that book. I really recommend it.

William Vincent (01:02:51.233)
I'll just quickly do a book and then, Marlene, if you have your thoughts, you can go. So I've been reading Apple in China, which came out earlier this year. Tim, you'd love this one if you haven't read it. It's basically how Apple moved tens of billions of dollars every year into China, really developed the infrastructure in China, and thought it was getting the better end of the deal. And then it turns out China was.

It's interesting that, just day to day, the executives at Apple never really stepped back and thought, why is China being so accommodating? Why are they allowing us to do all these things? You know, Apple was thinking short-term and China was thinking long-term, in a way. And the amount of investment Apple alone put into China just dwarfs any federal plans and everything else we have here. So it's really, really eye-opening, really kind of scary,

but really well researched. So I'd recommend that book. So Marlene, do you have something you want to add?

Marlene (01:03:54.382)
Okay. Yes. So I was trying to look for the repository. I can't find the name of the repository. But I mentioned Cory Zue, and when I was researching for my Django talk, I had been looking at the discussions page on the Django project website. And I'd actually seen a

conversation between Will and Cory where they were discussing an agents.md file for the Django project. I think Will had been asking if there was one around, or what people's thoughts on it were. And that's where Cory shared his YouTube video. And I went off on a tangent just to watch the YouTube video. And anyway, in that YouTube video, Cory

William Vincent (01:04:23.29)
yeah.

William Vincent (01:04:27.702)
Yeah.

Marlene (01:04:47.118)
walks through principles for vibe coding. And then he also shares a project that he created, a Django project he created through this vibe coding process. So I was gonna recommend that. I cannot find the repo at the moment, but hopefully I will find it afterwards and I will link it so it can be shared somewhere for people. A great person to follow, I would say, is Cory; he has a YouTube channel with great thoughts there. And then for

a book recommendation, I don't know, I'll kind of do two half recommendations. The first is not a book, but Simon Willison's blog, which I think is really good if you want someone who's a realist, someone who's connected to Django, of course. I think he has

fantastic thoughts about AI and generally where the industry is heading, and very realist perspectives there. But that's not a book. So a book that's kind of fun to recommend for reading: I've been reading a book called Children of God, and it's a sci-fi book.

William Vincent (01:06:07.115)
Is that fun though? That's a little bit apocalyptic.

Marlene (01:06:09.76)
It's a little bit speculative. It's a little bit, yeah, it is. It is not a fun book in the chill sense of the term, but it's so interesting because, in the book, I've been surprised by, well, this is a book that was written a long time ago and the author imagines AI. So there's several, yeah, yeah, exactly.

William Vincent (01:06:12.777)
I mean, a little bit in a British sense, right?

Tim Allen (01:06:15.28)
Ha ha ha ha.

William Vincent (01:06:35.114)
this is Mary Doria Russell, right? So I'm sorry to interrupt. I love her. I've read this. I've read a ton of her stuff. She's so good. Sorry, go ahead. All her stuff, yeah.

Marlene (01:06:43.786)
Okay. She's so good. And I remember reading this part where she was talking about AI in the book. They are on an alien planet somewhere, and she's imagining what AI would be like, talking about what AI is like. And I was just like, I do not know how someone from back in the day could have this kind of foresight,

to imagine that it would be like this. So that book is called Children of God, it's a fiction book, I can recommend it. Yeah, yeah.

William Vincent (01:07:21.512)
Yeah, I mean, I still remember, because it's Father Sandoz. He's this linguist who speaks like 13 languages. And there's a line that still haunts me where, I think it's in the community of the aliens, he has a line about, you know, being fluent in 13 languages and he couldn't find the word to express a phrase or a feeling, and that utter frustration of language failing oneself. So thank you for mentioning her. She's so good. I think that's her best one. She's written a ton.

Marlene (01:07:26.254)
Yeah, exactly! Yes! Yeah. Yeah.

Marlene (01:07:40.482)
Yeah. A hundred percent. Yeah. Yes. A hundred percent. She's great. Yeah. Really good book. Can recommend. Yeah. Some cool pieces for sure about language and about AI as well. So I think it's great. Yeah.

William Vincent (01:07:51.06)
She's done poetry too. Deep Cut. Yeah, she's awesome.

Tim Allen (01:08:03.599)
I love the fact, and I think it's a testament again to the Python and Django community, that, you know, it's like: recommend a book, and only one out of the three was on tech. It's kind of a...

William Vincent (01:08:11.688)
Yeah.

Marlene (01:08:11.949)
Yeah!

Exactly. Carlton's good.

William Vincent (01:08:14.676)
Carlton's pretty consistent, Carlton's good.

Tim Allen (01:08:17.947)
Here's some things to make you think and some patterns on how to live life.

Carlton Gibson (01:08:18.017)
I've realised it's my job to recommend a tech book because everyone else always goes outside their field. I'm like, okay, I'll be the tech book person.

Marlene (01:08:22.06)
Yeah.

Marlene (01:08:29.972)
Yeah, 100%.

William Vincent (01:08:33.47)
Well, as we wrap up, is there anything, Tim or Marlene, you wanted to mention that we didn't get a chance to cover?

Tim Allen (01:08:39.943)
I just wanted to tell anybody who's out there looking for work that I don't think AI is going to take all the jobs. I want to assuage that fear. But when you look at the history, whether it's farm work or, say, the computer being introduced in the 1980s: the computer eliminated 3.5 million jobs, but it created over 19 million jobs.

So while some people were displaced, it ultimately led to more work. I remember, 20 years ago, headlines saying that half of the employment in the healthcare industry was going to be replaced by technology. And instead, over the next decade, it doubled. So to anybody who's struggling to find work right now: I think that has a lot more to do with political forces and uncertainty right now than it does with AI.

You know, Marc Benioff, CEO of Salesforce, has been gleefully celebrating laying off 10,000 of his own employees, but they hired 20,000 people in one year during the pandemic.

Is it more likely that he is actually using AI to replace these people right now? Or is it more likely he's covering up for a stupid hiring binge that his company went on? So if you're looking for work, I know it's tough. This is the worst job economy I've seen for tech in many years. I want to send out my sympathies to anybody who's looking for work right now. It's incredibly hard, but I also want to give a ray of hope that I don't think AI is going to take all of the technology jobs in the future. And

I think there will be hope and it will rebound. So I just wanted to send that out to anybody out there, because I know it's tough.

Marlene (01:10:21.326)
Okay.

William Vincent (01:10:22.076)
Marlene, you don't have to, but if you had something you wanted to add.

Marlene (01:10:25.422)
I think I would just, yeah, I agree with him that the market is really tough right now. And I would say that I do still think that the core skills matter. You know, right now, for example, even at Microsoft, we had loads of layoffs as well. Not great. But at the same time, there's like, I have no idea why I laughed when I was saying that, because it's such a serious issue. I'm so sorry.

But I think the issue is that we had lots of layoffs, but at the same time, the company is hiring a lot as well. And there are so many open positions right now, and a lot of those positions are kind of aligned to AI. So I would encourage people as well not to be afraid of AI. I think there are a lot of negative things that it's led to, but I think if we can combine the skills that we have right now with that knowledge, I do think that

There are opportunities there for us. And I really hope that as an industry as well, we can just work together to hopefully shape the direction of where the industry is headed for the better. So yeah, that's what I will say.

William Vincent (01:11:42.358)
I like that. Well, Tim and Marlene, thank you so much for coming on, for continuing this conversation. That's really kind of the point of this podcast is to do that, to have the conversations and to share them. So thank you for making the time.

Tim Allen (01:11:55.847)
Thanks so much for having us.

Carlton Gibson (01:11:55.873)
Yeah, thank you.

Marlene (01:11:57.528)
Thank you.

Carlton Gibson (01:11:58.817)
really enjoyed it.

William Vincent (01:11:58.92)
All right. And we are DjangoChat.com, and we're also on YouTube, and we'll see you next time. Bye bye.