Django Chat

Building a Django API Framework Faster than FastAPI - Farhan Ali Raza

Episode Summary

Farhan is a software engineer from Pakistan who added template partials to Django 6.0 as part of a Google Summer of Code project. These days he is pushing the boundaries of what Django can do in a host of exciting new projects, most notably `django-bolt`, a fully typed API framework for Django that is faster than FastAPI on common performance benchmarks.

Episode Notes

πŸ”— Links

πŸŽ₯ YouTube

Sponsor

This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it’s scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.

See what’s possible at sixfeetup.com.

Episode Transcription

Carlton (00:00)
Hi, welcome to another episode of Django Chat, a podcast on the Django web framework. I'm Carlton Gibson, joined as ever by Will Vincent. Hello Will.

Will (00:06)
Hey Carlton.

Carlton (00:06)
Hello Will. And today we've got with us Farhan Ali Raza, who I had the pleasure of spending this summer with, working on the Google Summer of Code project to bring template partials into Django core. Hi Farhan. Thank you for coming on the show.

Farhan (00:21)
It's an honor to be here. I've been watching the show for months, and I've been following Carlton for years, so it's an honor to be here.

Carlton (00:37)
I'm all blushing now. So come on, Farhan, let's talk about template partials and the Google Summer of Code project, because that was the big thing. I've mentioned it a few times on the show, but you were at the coal face doing the hard work there. So how was that for you? Describe the summer.

Farhan (00:52)
Yeah.

It was amazing. I did not think it would be that hard. When I was applying, I was just thinking it is just a package, I'm just going to copy-paste things. But once I started, I found out it was not just copy-pasting.

It was an amazing experience. I learned a lot of things. It was great.

Will (01:33)
Well, and I'll just mention you have written about this on your blog, so we're going to put links to everything, and you went into more depth there. But it's pretty impressive that your first Google Summer of Code project is a top-three feature in a major release, I would say.

Farhan (01:39)
Yeah.

Yeah.

Will (01:51)
Carlton, what about for you? You've mentored other people.

Carlton (01:53)
I was

just gonna ask Farhan: was there a moment when you panicked? Any moment where you were like, oh no, this is going wrong?

Farhan (02:01)
Panic? I would not call it panic, but the first moment when I realized this is not what I thought it would be was, I think, the first PR in django-template-partials, when we were handling render-time and compile-time things.

Carlton (02:27)
Okay.

Farhan (02:28)
So

I did not even think about that when applying. I was thinking this is something else. There is a kind of care you have to take, thinking before making changes and stuff like that. The whole process was cool. I enjoyed the whole process. There was no panic per se.

But it was just a surprise there.

Carlton (03:03)
Okay, I mean, the way I've described it when we talked about it on the show is that without your help, I would never have been able to get it in. Not because I couldn't physically do it, but I just didn't have the time, I didn't have the capacity. A 12-week project really does need someone to dedicate a good chunk of time to actually make it happen. Because there were a lot of things where the third-party package was okay, but not up to Django standards.

Farhan (03:13)
Yep.

Yup, and there are so many, what can we say, side quests that happen when you are merging something into Django. The thing that I remember is the regex solution for finding partials that was merged into django-template-partials.

We thought the regex was the solution, and I think there were five or six rounds of review for that regex. We had to fix the regex part again and again, and then in the next PR we removed that regex entirely. So there are many side quests that happened.
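The "regex side quest" is easy to reproduce. Below is a hypothetical, much-simplified pattern in the spirit of that early approach, not the actual django-template-partials code: a regex happily matches a `partialdef` inside a `{% comment %}` block, which a real template lexer would skip; this is one reason regex parsing of template syntax keeps needing fixes.

```python
import re

# Hypothetical, simplified pattern; NOT the real package's code.
PARTIAL_RE = re.compile(
    r"{%\s*partialdef\s+(?P<name>\w+)\s*%}(?P<body>.*?){%\s*endpartialdef\s*%}",
    re.DOTALL,
)

template = """
{% comment %}
{% partialdef old_name %}dead code{% endpartialdef %}
{% endcomment %}
{% partialdef header %}<h1>Hello</h1>{% endpartialdef %}
"""

# The regex also matches inside the {% comment %} block, which the
# template lexer would ignore: a false positive a regex cannot avoid
# without effectively re-implementing the lexer.
matches = PARTIAL_RE.findall(template)
print(matches)
```

Each extra template feature (comments, verbatim blocks, nesting) means another regex patch, which matches the five or six review rounds described above.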

Will (04:15)
Ha

Carlton (04:15)
If something better comes along.

Well, anytime you bring in a regex, you're in trouble, right?

Farhan (04:23)
Yeah

Carlton (04:25)
So okay, moving along, tell us a bit more about yourself. Are you a student? Were you a student? What are you up to now?

Farhan (04:35)
I graduated in 2023 with a bachelor's in computer science. Now I'm just freelancing, working part-time, things like that. Before that I was a pre-med student, so I was not a computer science student.

I don't want to go too deep into it, but there is a process in Pakistan called the MCAT, a competitive exam for medical admissions, for MBBS admissions. That was the goal when I was doing pre-med, but I discovered programming in 11th grade. I don't know what it's like in Spain or

America, but here it's 11th grade, then 12th grade, and after that you start your bachelor's degree. So I discovered programming in 11th grade. I downloaded a book, Head First HTML and CSS. I discovered HTML and CSS and thought it was very interesting. So it was lucky, because

Carlton (05:51)
Yeah.

Farhan (06:04)
I could not clear the competitive exam; I could have become a doctor of pharmacy or something like that. But I thought, I don't want to do medicine at all, I want to do computer science. So I switched the whole thing around.

Carlton (06:23)
Okay, that all seems to have worked out quite nicely for you now.

Farhan (06:25)
Yeah.

Will (06:27)
Can I ask how it is taught in Pakistan? Do you use Python? Is it Java? What are the languages that are used?

Farhan (06:35)
It depends on the university; every university has a different way of teaching. Most commonly, I think, they start with C++: in the first semester there is an introduction to programming course in C++, and Python comes in around the fourth or fifth semester.

Will (07:00)
And do they have anything on web development per se? Because in the US, they don't. It's like an elective course usually.

Farhan (07:08)
I think in Pakistan it is also technically an elective course, but they have made it so it is not really elective: you have to take it. Smaller universities don't have a lot of teachers, so they just enforce even the elective subjects. So there is one web development course, but...

Will (07:32)
And is that,

I'm sorry, go ahead.

Farhan (07:36)
But they don't teach anything productive there. I remember we did HTML and CSS in 2021, I think. Just HTML and CSS, in six months or so, that whole semester.

Will (07:41)
Okay.

Right, that's the case also in some US universities, so it really depends. Sometimes the web development course is designed for people who are not computer science majors; actually, often it's not. So that's why it'll be HTML, CSS, a little bit of JavaScript. There are not that many places that have, you know, a Django course or a Spring Java course. For some reason that seems important, right? Because everyone gets trained and then goes

Farhan (08:20)
Yep.

Will (08:21)
to work on the web. I guess there's a databases course. Even MIT, just down the street from me, has a "missing semester," a January term where they teach how to use the terminal, how to use GitHub, how to do web development, because in the core MIT curriculum you're not going to take web development unless you do it on your own, right? So it's everywhere.

Farhan (08:49)
There is a very good course from Harvard called CS50. I learned my Python from that course.

Will (08:54)
Hmm.

Wow. This is the David Malan course; did you work through the whole thing? Because famously people start, but then it's a lot of work to do on your own and it just sort of keeps ramping up the demands.

Farhan (09:03)
Yeah, yeah.

I knew some programming before that, so I just watched the parts I wanted to learn. There was, I think, a Django video for one hour, and a Python video for one hour. So I just watched what I wanted to learn from it; I did not follow the whole course.

I just learned the small parts I wanted, but it was a very good course.

Carlton (09:52)
So I want to ask you about all the things you've been doing since the Summer of Code project, because you've been quite prolific. I've got a whole bullet-point list of projects to mention, but can you talk us through your open source work around Django for the last few...

Will (10:08)
Well, can

I just hype you up a little bit? I would say, besides the fact that Carlton knows you, when I look and see who's doing really interesting stuff, it's you again and again and again. And it's not just Carlton; I'm having discussions with people and we're sitting back going, who's this mad scientist doing all these crazy things? So, just so you know, it's really exciting to see someone smart with new ideas just doing stuff. It's exciting for the community that you're doing this.

Farhan (10:37)
I started Django Bolt because I did not want to use FastAPI.

There was a side project I was building that required WebSockets, or SSE, server-sent events. When I went to the forum, it said you will have to use Daphne, because some long-running views take time,

or use Channels, Django Channels. So the choice was FastAPI or Django Channels. And I thought, I should build a new framework to solve that. So I built it.

Carlton (11:33)
So go

on, carry on, carry on.

Farhan (11:35)
For some reason I just wanted something fast, because I really enjoy the ORM part of Django; it is amazing. So I wanted something fast that is built for Django. And then I saw Robyn; there is a framework called Robyn, which is also a Rust and Python mixture. So I thought

it should be possible to build something that handles requests in Rust and does the business logic in the Django part of things. That gave me the inspiration to go all in. I started it just as an experiment; I wanted to see if it was possible or not. But

when I tried to implement it, its performance was very good. So I thought I could spend months after that making it something better.

Carlton (12:40)
Okay, and you mentioned the phrase "performance was very good." It's one of my pet peeves on the show that people talk about fast but never give any numbers, and you've been laying down benchmarks from day one, which is what makes it so exciting. So can you give us some idea of the performance, the throughput, you're seeing with Django Bolt?

Farhan (12:59)
So I agree with you: performance is subjective; it all depends on what you are doing in the views. There was a whole Reddit discussion when I launched Django Bolt about this. Everybody was saying these benchmarks are just wrong, this does not prove anything, stuff like that.

My whole point was: when I am building something, I have to use something to measure it. What am I going to do, build a whole application and deploy it on a VPS to measure? I cannot do that. I have to measure a small JSON response, a small database query, small stuff, so that I know whether a change makes a positive impact on my baseline results.
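Farhan's "measure something small against a baseline" approach can be sketched in a few lines. This is a hypothetical micro-benchmark harness, not Django Bolt's actual tooling: it times only a tiny hello-world handler, so a change to, say, the serialization step shows up against a stable number instead of being lost in network and database noise.

```python
import json
import time

def hello_view() -> bytes:
    # Stand-in for a minimal "hello world" JSON endpoint body.
    return json.dumps({"message": "hello, world"}).encode()

def measure_rps(handler, duration: float = 0.2) -> float:
    """Call `handler` in a tight loop and report calls per second.

    Deliberately measures only the handler body, the way a
    micro-benchmark isolates one layer from network and I/O cost.
    """
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        handler()
        count += 1
    return count / duration

rps = measure_rps(hello_view)
print(f"~{rps:,.0f} calls/sec for the bare handler")
```

Real request-per-second numbers like the ones quoted below come from load generators such as bombardier against a running server; this in-process loop only shows the baseline idea.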

What I am getting: one worker in Django Bolt is one Actix thread. Actix is the Rust framework that I use, and that one Actix thread is bound to one Python event loop. Actix handles the Rust part and the Python event loop executes the whole Python side. That

makes one worker. With that one worker I get about 27,000 RPS if we return a simple hello-world JSON. If we do a database query, then the ORM becomes kind of the bottleneck, but the performance is still faster than FastAPI with SQLAlchemy and SQLite.

I have not measured with PostgreSQL or faster ORMs under FastAPI, but I have measured with SQLAlchemy, and there I was getting about 5,000, nearly 5,000 RPS, with, I think, one query reading 10 records.
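The worker layout described here, one OS thread with one dedicated event loop pinned to it, can be modelled in pure Python. This is an illustration, not django-bolt's actual implementation: a plain Python thread stands in for the Actix thread, and request coroutines are handed to that worker's loop from outside.

```python
import asyncio
import threading

class Worker:
    """Toy model: one worker = one thread + one pinned event loop."""

    def __init__(self) -> None:
        self.loop = asyncio.new_event_loop()
        # In django-bolt this thread would be Actix's; here it is plain Python.
        self.thread = threading.Thread(target=self.loop.run_forever, daemon=True)
        self.thread.start()

    def handle(self, coro):
        # Submit a request coroutine to this worker's loop from any thread.
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

    def stop(self) -> None:
        self.loop.call_soon_threadsafe(self.loop.stop)
        self.thread.join()
        self.loop.close()

async def view(request_id: int) -> dict:
    # Stand-in for an async view returning a JSON-able payload.
    return {"id": request_id, "message": "hello"}

worker = Worker()
futures = [worker.handle(view(i)) for i in range(3)]
results = [f.result(timeout=5) for f in futures]
worker.stop()
print(results)
```

Scaling out then just means running several such workers, each with its own loop, which is roughly the model the quoted per-worker RPS numbers assume.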

Carlton (15:17)
Okay, and what's the difference between that and vanilla Django, I guess, would be the question.

Farhan (15:26)
The performance difference?

If we compare database queries, then I think it is about 1,000 RPS on raw Django, even using some faster serialization. I have forgotten the actual numbers, but I think it was about 1,000 RPS on raw Django, and I got

above 4,000 or 5,000 RPS in Django Bolt.

Carlton (16:07)
So both of those sound quite fast, right? I mean, the first thing I saw here was you did a project called Django Rapid, which was just Python-based, vanilla Django, but changing out the serializer layer. And you were comparing REST framework serializers to, I don't know, Pydantic serializers to msgspec serializers. And what was interesting, I thought, was that you were getting FastAPI-like

Farhan (16:11)
Yeah.

Yep.

Carlton (16:36)
throughputs basically just with Django.

Farhan (16:41)
If we compare database queries, the performance is kind of equal. Because FastAPI's ORMs are async, if we have a lot of queries and a lot of business logic, then

the performance difference will increase, I think. But if we compare simple read queries, the performance is nearly equal; the difference is not very strong.

Carlton (17:19)
And what I think is lovely about all these benchmarks is you've got them all up on GitHub, and everyone can go and look at them, pull them locally, and run them themselves. And they can see that the idea that Django is slow, well, no, it's the same speed as everything else; it depends what you're doing. You mentioned the async ORM. Then today, I think, or yesterday, you put out a kind of turbo ORM package, which is an async query manager for
the Django ORM, right? Talk us through that.
the Django RM, right? Talk us about that.

Farhan (17:50)
Yep.

So the idea was, I think Adam Gill created a benchmark comparing the whole request-response time cycle. He wanted to measure that with the async backend that Frey created.

And someone else, I forgot his name, said you should be able to use that async backend with Django Bolt and measure what performance difference you get. So I tried to use it, because it is just an async backend, so we can just use raw SQL.

I thought it would not be that hard, because in my mind I was thinking I am going to override some execution of Django queries, use that async backend, and just wire it up. But when I tried to create it, it was a whole other process. It took me four or five hours of work

to just get something working. It worked fine for one request; it loaded the thing. But when I wanted to benchmark it with bombardier, I was getting connection exhaustion errors. So I had to fix pooling issues and stuff like that. It's the naivety of

Carlton (19:33)
Yeah, right. OK.

Farhan (19:46)
a person who did not know how the full ORM and database layer works. I did not even think about how connection management works. In my naivety I started, and after that I had to solve the whole connection handling, because the sync version of Django handles connections in a very different way; I think

it uses context vars and stuff like that. I think when you do not use pooling it closes the connection after each request; I don't know fully if that is the case. But I could not find anything for async. We were discussing this before the podcast, so I will try the async signal thing for that.
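The connection-exhaustion failure under bombardier is the classic motivation for a bounded pool. Here is a toy sketch, with all names hypothetical; real pools (psycopg_pool, SQLAlchemy's pool) also handle timeouts, health checks, and reconnects. The point is that concurrent requests wait for a free connection instead of each opening a new one.

```python
import asyncio

class FakeConnection:
    """Stand-in for a database connection."""

    async def query(self, sql: str) -> str:
        await asyncio.sleep(0)  # pretend to hit the database
        return f"rows for {sql!r}"

class Pool:
    """Bounded pool: at most `size` connections exist, ever."""

    def __init__(self, size: int) -> None:
        self._conns: asyncio.Queue[FakeConnection] = asyncio.Queue()
        for _ in range(size):
            self._conns.put_nowait(FakeConnection())

    async def run(self, sql: str) -> str:
        conn = await self._conns.get()  # waits instead of over-opening
        try:
            return await conn.query(sql)
        finally:
            self._conns.put_nowait(conn)  # always return the slot

async def main() -> list[str]:
    pool = Pool(size=4)
    # 100 concurrent "requests" share only 4 connections.
    return await asyncio.gather(*(pool.run(f"SELECT {i}") for i in range(100)))

results = asyncio.run(main())
print(len(results), results[0])
```

Without the queue, the same 100 concurrent requests would each try to open a connection, which is roughly the exhaustion error described above.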

Carlton (20:34)
Yeah.

What I love is the way you just dive in. You're like, yeah, I'm going to take this on, I'm going to do it. And then you discover the massive blocker and you're like, right, I'm going to fix that as well. And you just keep going. I mean, for me, the Rust front end for the request-response layer, I think that's a really exciting development. Looking at the different serializers, we talked about that on the show before, but...

REST framework serializers are kind of last-generation, and there's cattrs, there's Pydantic, there's msgspec, there are all these much quicker options out there. So to demonstrate those with Django is a real service to the community. It's like, yes, look, this is how you do it, and look how much faster they are. Wow. And then for me, properly, fully asyncifying the ORM is the last piece of the async story. If we go all the way down to the database connector, then, you know,

whatever you want to do with async, whether you want to use it or not, the story is at that point finished. So you're really ticking all the boxes of the things I'd love to see.
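The "swap the serializer layer" idea can be shown with stdlib-only stand-ins. The record model stays fixed and only the encode step changes; in the actual Django Rapid benchmarks that step would be DRF, Pydantic, or msgspec, but stdlib json keeps this sketch self-contained. The model and function names here are made up.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Book:
    """Fixed record shape; the serializer is what gets swapped."""
    id: int
    title: str

def serialize_stdlib(books: list[Book]) -> bytes:
    # Baseline encoder. A drop-in alternative would replace only this
    # function, e.g. with msgspec.json.encode or a Pydantic TypeAdapter,
    # leaving views and models untouched.
    return json.dumps([asdict(b) for b in books]).encode()

books = [Book(i, f"Book {i}") for i in range(10)]
payload = serialize_stdlib(books)
print(payload[:40])
```

Because the serializer is isolated behind one function boundary, a benchmark harness can time each implementation on identical inputs, which is the comparison Django Rapid makes.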

Farhan (21:44)
Yeah, and I understand that part. As a beginner you kind of obsess over benchmarks for some reason: that thing is fast, that thing is fast. So as a beginner I liked that "my thing is fast" kind of stuff. That was the whole

process. If we have an ORM that is fully async, that will make it very fast, because there is the sync-to-async part and all of that. When I was trying to handle sync views in Django Bolt, the performance drop from the sync-to-async part became very stark.

Even if you don't use the ORM: if you are getting 22,000 RPS and you want to execute synchronous views, you create a wrapper using sync_to_async on top, and the performance drops from 22,000 directly to 10,000 or 12,000. So there is about a 50% drop there because of the thread part of the whole thing.

So if we get the whole ORM async, I think the performance difference will become something like that 50%.
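The sync-to-async cost Farhan measures comes mostly from hopping every call onto a worker thread. This stdlib-only sketch mimics what an asgiref-style sync_to_async wrapper does, using run_in_executor; it illustrates where the overhead lives and is not a reproduction of the Django Bolt benchmark.

```python
import asyncio
import time

def sync_view() -> dict:
    # A synchronous view body; nothing slow in itself.
    return {"ok": True}

async def run_wrapped(n: int) -> float:
    # Mimic a sync_to_async-style wrapper: push each call onto a
    # thread-pool executor and await the result, n times.
    loop = asyncio.get_running_loop()
    start = time.perf_counter()
    for _ in range(n):
        await loop.run_in_executor(None, sync_view)
    return time.perf_counter() - start

async def run_native(n: int) -> float:
    # Same work called directly on the event-loop thread.
    start = time.perf_counter()
    for _ in range(n):
        sync_view()
    return time.perf_counter() - start

n = 2000
wrapped = asyncio.run(run_wrapped(n))
native = asyncio.run(run_native(n))
print(f"direct: {native:.4f}s  thread-hop: {wrapped:.4f}s")
```

The gap between the two timings is the per-request tax a sync view pays in an async server, which is why a fully async path down to the database connector matters.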

Carlton (23:31)
Are you hitting the GIL there, just out of interest?

Because obviously there's a cost to spin up the extra thread, but is it the fact that the Python threads aren't able to execute in parallel that's causing a lot of the blockage there? Do you know? I mean, you may not know; you may not have been able to profile it.

Farhan (23:55)
Because of how Django Bolt works, there is one GIL per worker: Actix is the one worker, and then there's the GIL. If we have something that executes on a thread, it kind of blocks the GIL. So it is partly a GIL thing and partly the latency of spinning up a new thread to handle that request, I think.

Carlton (24:21)
But do you think, have you experimented with the, I can never say this because of the F and then the TH, the free threading mode on Python 3.14?

Farhan (24:31)
I

know its performance decreases very much. For some reason I think it is not quite ready or something; the performance decreases a lot, not a normal decrease, a lot. I tried it with, I think, FastAPI and Django Bolt, and in both the performance decreased a lot. I think there are some issues, or I was not able to...

Carlton (24:35)
Okay.

Okay.

Farhan (24:59)
And also, PyO3 has a different method of handling free-threaded Python, and I was not using everything it provides for that. So I think that also causes an issue. So I was not able to properly benchmark it.

Carlton (25:27)
Okay, that's not quite ready yet. Okay, I think all that's really interesting. We'll jump in before we disappear down the rabbit hole.

Will (25:35)
Yeah, no, no, I'm

happy to listen. I mean, you have so many projects. I wanted to ask about the bulk email verifier, which seems like it was one of your early projects, but it gets a lot of usage. What can you say about that project?

Farhan (25:52)
So that was a project from when I had to work part-time while studying. It's a partnership with another person: I build the stuff, and he manages the whole selling side. It's an email validation

site for email marketing. People come in and validate their emails: this is a valid email, this is an invalid email, this is a catch-all email. And people take that result and do email marketing with it.

Will (26:39)
So did he come to you with that idea? It seems like that would be the case, right? As a student, I mean, I wouldn't think of that unless I was in the email marketing world.

Farhan (26:46)
Yeah

No, he's an email marketing kind of guy, so he knew the space; it was his idea. I was just the technical side of it. I was going to build it, and we have a partnership kind of thing there.

Will (27:09)
That's not a bad setup.

That's Carlton's setup right now at his startup.

Carlton (27:12)
Yeah,

I do exactly that. I'm the code monkey and he's the, you know, the face.

Farhan (27:16)
Yeah.

Will (27:20)
Okay, one other thing; we're not quite at 30 minutes, but I want to ask, because Carlton has mentioned it: do you use LLMs and AI in your coding at all?

Farhan (27:33)
Yep, I use a lot of it.

Will (27:35)
Can you tell us about it? Because I feel like a dinosaur.

We use them, but I use them to tack on to my existing learning profile. It seems like you're just sort of mainlining LLMs and doing it in a better way than I do. So what's your process? How do you think about them?

Farhan (27:56)
I am kind of lazy. I enjoy coding because it's a puzzle; I like solving puzzles. But typing was never my thing. When I started there were no LLMs, and I did not enjoy typing the code, so the LLM solves that part for me. I use Claude Code, and it

depends on what I am trying to do. It is so hard to predict what they are going to do, so I usually start with plan mode in Claude Code. For example, if we take Django Turbo, what I will do is

start with plan mode. It does not produce any code in plan mode; it just produces a markdown file. I will discuss the whole plan with it. If I wanted to build the async ORM, I will discuss it, and because I did not know the area, I tell it: this is the repository, the Django backend, and this is what I want to build; research the whole thing. And

Will (28:55)
Mmm.

Farhan (29:17)
I usually have a cloned Django repository in the project folder. So I say: this is the Django repository; tell me how I can achieve this thing. It goes all in, and at the end it produces a plan for what we want to achieve. I read that and see if it makes sense or if I want to change something, because once you

Will (29:24)
Hmm.

Farhan (29:46)
accept the plan and then want to change something, they forget that they have added a function or class and just create a new thing. So you end up with a lot of code that has no purpose. That is kind of annoying, so I usually spend a lot of time planning.

Will (30:04)
Yes.

Farhan (30:13)
I research when I am not sure about something the LLM is telling me. I will go and read the code myself and tell it: your hypothesis is wrong. Stuff like that. So that is the whole process. And after that, testing is a whole other thing with LLMs.

Will (30:20)
Mm-hmm.

Farhan (30:42)
I think it is impossible to get correct tests out of an LLM, because they don't think like we humans do. Their tests are made-up things. They are going to say it is working, but I can see it is not working. So the testing part is a whole other thing. I usually tell it:

when you write the test, if you revert that git commit, the test should fail. I took that from Django; I think, I forgot her name, Natalia. Natalia said if you create a test, the test should fail without the fix. So I took that from there, and I

Carlton (31:29)
Natalia.

Farhan (31:40)
tell the LLM the test should fail without the fix, but it still produces a lot of bogus tests. So that is a very...
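The "test must fail without the fix" discipline, in miniature. Here buggy_strip and fixed_strip are made-up stand-ins for pre- and post-fix code; the point is that a regression test is only trustworthy if it fails on the old code and passes on the new.

```python
def buggy_strip(s: str) -> str:
    return s.strip(" ")   # only strips spaces; misses tabs (the "bug")

def fixed_strip(s: str) -> str:
    return s.strip()      # the "fix": strips all whitespace

def regression_test(strip_fn) -> bool:
    # Returns True when the behaviour under test is correct.
    return strip_fn("\thello \t") == "hello"

# A useful regression test fails on the pre-fix code...
assert not regression_test(buggy_strip)
# ...and passes once the fix is applied.
assert regression_test(fixed_strip)
print("test fails without the fix, passes with it")
```

A "bogus" LLM test in this framing is one that returns True for both functions, which proves nothing about the fix.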

Carlton (31:47)
It removes

the commit and then puts an assert false in the test and goes, look, the test failed.

Farhan (31:54)
I think that is a problem with reinforcement learning: they are optimized for passing tests. Whenever I say, create a failing test for this case, it will create a test that passes; inside the test it will have an if-condition saying it is failing, but the test will still pass for some reason. So they are very afraid of failing tests.

Will (32:02)
Hmm.

Yes.

Farhan (32:21)
So I think they have an OCD kind of personality. They don't want to have a failing test at all.

Will (32:30)
Yeah, that's really interesting. I think that's the way to do it. I know you're a better user of them than I am, but you have to plan with them in advance. One of my colleagues, Paul Everitt, has been doing a bunch on spec-driven development, which is basically what you described: talking things through, planning them out. And I suppose you're too young to know, but the joke is you could never get a human, like a senior

software architect, to write specs for something. But now that the LLM will do it for you, everyone is suddenly excited about specs, whereas they never were before. So maybe it's forcing good behavior on humans. There's an irony in there for those of us who are a little bit older. But I completely agree. And I wonder if we're going to need something different, because they're so eager to write code, so eager to pass things.

Farhan (33:17)
Yeah.

Will (33:26)
I don't know if it's a filter or different retraining, but it feels like they do need to be harsher, at least in a coding context, right? I wonder, can the companies get there? Can we get there with better prompts? Can some third-party package come in with a whole approach? Because I agree, it's a big problem, especially around tests.

Farhan (33:50)
Because in plan mode they don't start writing code. I tweeted a few weeks ago about Claude: for some reason it is very eager to write code, very eager to produce a hypothesis, even a wrong one. It is very eager to say, oh, I found it. But

in a whole conversation it will say two or three times, "I found it," and it never found it. It will create a wrong hypothesis and start working on it, and afterwards it says, I did this thing wrong, but it forgets to revert the changes it made because of that wrong hypothesis. So there is a lot of code that is

just unowned, junk that kind of lives there.

Will (34:56)
Yeah,

I'll just say one last thing, Carlton, and then you take it. I know from JetBrains and our AI teams' work that a lot of the time the benchmark is user-accepted completion rate. They want you to say, that looks good, and move on. So I suspect that writing more code, or writing things that pass even though they're not good, scores better for them. They're not incentivized to say, don't do that.

Farhan (35:23)
Yes.

Will (35:24)
Though that sounds simple, I really do think it's almost like, we say, what's our completion rate, user acceptance rate, whatever, for Junie versus Claude Code versus Copilot. That's the benchmark that internal AI teams are using, which would lead to that behavior. Yeah.

Farhan (35:42)
I think that

is why it is going to try to pass the test. Even if the feature is not complete or not working, it is just going to pass the test: it is working, congratulations.

Will (35:54)
But you should be able to put

a 10x multiplier on removing code. If you get a point for adding code, you should get two points for removing code. I want that agent.

Farhan (36:03)
There

is a new skill that the Claude Code creators introduced, code-simplify. I have not tried it very extensively, but it kind of does that: it looks at the commit that you made for the solution, goes over all the changes, and simplifies the code.

Will (36:12)
Hmm.

Farhan (36:28)
and tries to remove anything it thinks does not belong. I think that is a step in the right direction.

Will (36:38)
interesting. Carlton.

Farhan (36:40)
beautiful.

Carlton (36:40)
Yeah,

go on, Farhan.

Farhan (36:44)
The difference that I found between human code and LLM code, there is also a viral tweet about it. Human code is beautiful: everything it produces looks like it was designed to do that thing. LLM code is like a kid coding. It is just going to solve the thing.

It is not going to consider, if I'm putting that function there, why am I putting it there, whether it belongs in that file or not; and then there are all the comments that it adds. If you think of coding as an art, beautiful code is the human side, and LLM coding is the kid coding side: it is going to solve the

task, it is going to solve the issue, but the code will be everywhere, and stuff like that. We can see the difference if we compare a professional codebase like Django with a solution provided by an LLM. The difference is very clear there.

Will (38:04)
Sorry, I have one more question. That code-simplify thing from Claude, I think it was only four or five days ago that it came out, but that's really interesting. So my question is, how do you stay informed about all this? Because Carlton and I pass tips through RSS feed readers, because we love to see blog posts, and I feel like we're multiple generations back. What are your news sources to stay up to date on all this?

Farhan (38:30)
I think you also have the Simon Willison newsletter; that is a very good source. And I think the Twitter side of social media is very eager for vibe coding. Mastodon is 180 degrees from that; they don't want AI in their code.

Will (38:50)
That's where Carlton and I

hang out with our Django, you know, gray beards.

Carlton (38:55)
Django greybeards, you AI haters?

Farhan (38:59)
The creator of Flask, I forget his name, he also did a poll of the Mastodon people and the Twitter people. The poll is like 180 degrees apart. On Twitter, it is like less than 10% of code that people write themselves, the LLM writes the rest, and on Mastodon I think it is larger than 50% "code I write myself." So there is a difference. So...

So there are those two extremes. I kind of believe that any extreme, I don't know the word for it, any extreme opinion a person has... extreme opinions are never right.

So the truth will always be something in between. So if coding changes, it will be kind of 50% you, 50% the LLM, stuff like that.

Carlton (40:14)
I guess my question here is, and it probably loops back to what you've already said, but I think these machines are very good at creating stock code, boilerplate that you've seen written 100 times. It seems to me they're less good when you're trying to push the boundaries into something that's not directly in the training data, that's new, right?

And you seem to be doing a lot of things that are new. So how do you find it? Is it the planning mode? Does spending enough time in planning mode help to pin it down?

Farhan (40:48)
It's trying different things. If something is not working, I ask for options: give me options to solve this thing. There is going to be an extreme option there, and there is going to be something that I think can work, so I will say, try that. That is also, I think, a very good point for LLMs, because we can try a lot of options, because the

Carlton (40:58)
Yeah.

Farhan (41:18)
price of code is very cheap that way. If that thing does not work, I can just discard the whole change. So I try different options. There are two extremes: one time you are going to say, this LLM solution was very intelligent, and sometimes it is going to be stupider than a dog.

The solution is very stupid; it thought something else and produced something else. So there are extreme cases there. I think we can never predict LLMs, and that makes them dangerous, in a way. So there will always be humans for coding, I think. There will be fewer humans, but there will always be humans somewhere.

Will (42:14)
Well, it's so interesting to think about how your brain is being wired, because for me, I had to type everything, and you're making me almost rethink the advice I would give people, which is: before you start coding, make sure you're a really good typist, because you don't want any friction between typing and thinking. But for me, if I don't type something, this is my biggest problem with LLMs. If I don't type it, it's like I'm missing a pathway in my brain. So I'll have LLM code,

and part of it is I didn't fully think it through, but I think a lot of it is I just didn't type it. And so you're making me wonder, maybe that's just a leap, and the newer generation of people won't have that problem. They'll just be like, oh, this is normal, this is how I do things. Because, I don't know, Carlton, is that the case for you? For me, if I don't physically type it, it doesn't quite get in there. But that's because that's how I learned how to code originally.

Carlton (43:09)
There's something in that, yeah. Go on, Farhan, you were going to say something.

Farhan (43:14)
I think that is kind of right, because if an LLM produces code, the solution works, but I don't fully understand it 100 percent. So the difference kind of becomes: you have to be a good reader of code,

and you will have to, because I was new to Rust, so sometimes it produces keywords or something that I don't understand. So there is a skill to have: you should want to learn. If it uses RwLock and you don't know RwLock, you should ask it, what is RwLock? If you just become someone who

just hits enter, enter, enter and accepts everything because it just works, then you will never understand what it is doing. There is a thing: if you code it yourself, you understand it better. When I was working on GSoC, I was very aware of that, so if I used an LLM, I used it for small commits.

There is a mode in Claude Code where you don't accept changes; it just shows you that commit. So if I see that commit, I will say, this is not Django standard, we write it this way, or, if I wanted to do it myself, I would rewrite it that way. So if you want to understand the code, it is

a requirement that you read it, and if you type it, then you will understand it better than someone who just uses an LLM to push everything.
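(A quick aside for readers: `RwLock`, the Rust keyword Farhan mentions above, is the standard library's reader-writer lock. The sketch below is ours, not code from django-bolt, and the function name is invented for illustration; it shows the kind of construct worth asking the LLM to explain rather than accepting blindly.)

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Spawn `n` writer threads that each bump a shared counter behind an RwLock.
// An RwLock allows any number of simultaneous readers, but a writer needs
// exclusive access; unlike a Mutex, reads never block other reads.
fn increment_concurrently(n: u32) -> u32 {
    let counter = Arc::new(RwLock::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let shared = Arc::clone(&counter);
            thread::spawn(move || {
                // write() takes the exclusive lock; the guard unlocks on drop.
                *shared.write().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // read() takes a shared lock; many readers could hold it at once.
    let total = *counter.read().unwrap();
    total
}
```

The read/write distinction is the sort of detail that matters when reviewing generated Rust: a `Mutex` here would also be correct but would serialize readers unnecessarily.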

Carlton (45:19)
I'm really interested here in where the equilibrium point ends up falling. I'm 100% convinced that the person who just taps yes and accepts everything, they're going to be in trouble. They might be able to produce something very quickly, but code is more maintenance than it is production. And then at the other end,

as much as it might pain me to think it, I don't think we can handcraft every character anymore, because the economics of it aren't going to make any sense. And so between those two extremes, it's got to settle somewhere. And maybe there is a range of sustainable positions that an individual developer could take up. I mean, how are you finding, in your freelance work, that you have to balance these things?

Farhan (45:47)
Yep.

Freelance work is very repetitive. Most people come with kind of an MVP of a project, "I want this" or "I want that," stuff that is already solved. LLMs are very good at those. If you tell it to do auth in Django, it does it like 100% correctly, kind of.

In freelance work, if the problem is something interesting or something complicated, like a recent project that I am doing, software for ships, where tasks are assigned to people on ships, ocean ships, kind of that,

it does not get it. It does not understand it. The whole thing is that it does not understand the world like the AI companies say it does; it does not. So if it creates a task, it makes wrong hypotheses about the real world, stuff like that.

So you have to guide it. If a delivery comes, a person has to accept that delivery. It is going to put all the buttons at the top to accept the delivery: accept, accept, accept. I saw that because I built that. If a delivery comes, I have to see the product and then receive that delivery, because that's how the real world works, but it created every button at the top to accept the delivery, and

it asked nothing: what is the delivery ID, what is the process? There is a whole process. There is a difference between how the real world works and how the LLM thinks it works. I think this is also the whole context problem, because if it is working on a small solution, it kind of forgets a lot of how the other things work. It has a narrow focus, in a way. So

if a problem is kind of real, I will have to explicitly say, you don't want to do that, you have to do this, in this way. That's kind of the difference.

Will (48:57)
I mean, I wonder if it's a training data problem too, because there are a lot of authentication examples, but how many real-world Django shipping examples are there? Is there a point where it... and actually, I'm going to pose you my philosophical question to get your take. Are you familiar with the term model collapse? Have you seen this idea out there? I can describe it. It's basically that as

Farhan (49:07)
Yep.

What?

Will (49:24)
the data goes from being human-generated to being synthetic, computer-generated, there was an academic paper out of Oxford in 2024 showing that over time, if you train on synthetic data, the LLMs get worse, right? So in my mind, I think of it as: we know the underlying data is kind of getting worse, like it's AI stuff, even as the models are somehow getting better. So where does that all end up? It feels

Farhan (49:46)
Yeah.

Will (49:54)
to me like maybe the whole foundation is crumbling beneath us. But I'm curious what your take is on that. I feel like you're much more in tune with these tools.

Farhan (50:03)
I watched the video that you made about it, I just don't remember the keyword, and I agree 100 percent. It is also becoming a race for the benchmarks that they have set up; they want to optimize for those benchmarks. So if

Will (50:07)
okay.

Yeah, yeah, yeah.

Farhan (50:31)
you compare a GPT 5-point-something to Claude Code, even if Claude Code is better, GPT is going to be higher on the benchmark, or something like that. And the data problem is real. I think it is the data problem and the architecture problem, because you can say it is AGI, but it is just a probabilistic

prediction of the next word. You can say it is more intelligent, but it will never be; it is just a prediction of the next word. People who are more intelligent than me are going to say, your brain also predicts the next word, but it does not. We cannot explain what our brain does, but it does not just predict the next word,

because if I walk on an icy road or something, I know I'm going to fall, because I fell when I was four or something. The context of those 20 years is there with me. So I agree with your statement 100 percent, the artificial data part

and the optimizing-for-the-benchmark part. My opinion is that it is never going to be more intelligent than a human at general tasks. You can train it for a specific task, in a specific environment, in a box, and it is going to be more intelligent than a human there, but if you

break the box, everything is going to fall apart, stuff like that.

Will (52:32)
Carlton, you look like you wanna say something.

Carlton (52:33)
Yeah, no, no, no, I'm

just, I'm just nodding away. You talked about a lack of any idea of causation, a lack of decent, proper memory, a lack of world models. Those are the criticisms. And the LLM architecture has come a long way, and they are truly amazing, what they can do. But you know, if nothing substantive changes, then it looks like: okay, that's a very clever trick, and it's a very useful trick, but it's not the AGI that, you know, was promised to us by now.

Farhan (53:04)
Yeah.

Carlton (53:04)
Ludo.

Will (53:04)
Yeah,

well, let's dodge the AGI bullet. I think you're right in what you were saying, though, about the benchmarks. I mean, this is something that JetBrains, again, I'm just going to read it out: they have a Developer Productivity AI Arena, DPAI. So they're trying to do an open standard, because right now there are these kind of closed standards for the major models, and it just leads to optimizing for the standard.

When it's like, well, who set the standard? What is the standard? So I think there is some work being done there to have something better that hopefully makes the models better. Because as complex and intelligent as these things are, it's sort of like we all have narrow incentives, as humans and as AIs.

Carlton (53:47)
So I guess to bring it back to coding: I'm a veteran, I use these tools, I think, OK, they're interesting, but I don't feel that my career is under threat. You're sort of at the other end of your career. How do you feel about that?

Farhan (54:05)
There is a fear, I'm not going to lie, because in the world that we live in, corporates are going to optimize for cost, stuff like that. So if I am trying to get a job in a small city like where I live,

Carlton (54:22)
Yeah.

Farhan (54:35)
in short, there is a fear that the whole thing is going to change and there will be fewer jobs. But I am hopeful, because I like to learn and I like to know stuff. So

only those people who have that will stay. There are some people who are going to do computer science just for the job. There is a difference between programmers who do it because they love programming and people who do it because someone told them computer science is a good job, good pay, stuff like that.

Carlton (55:31)
Yeah.

Farhan (55:33)
So I think there will be fewer programmers, but we don't know, because the costs now are very optimized, stuff like that. We don't know how much the LLMs actually cost, because not everybody is going to be able to use them if, I'm not a doomer, but, the bubble crashes or something like that.

We don't know if the cost is going to stay the same. So there is a fear among normal computer science students that there are going to be no jobs, or fewer jobs.

Will (56:20)
Well, speaking of jobs, believe, are you looking for work? Is that fair to say? Or what is your current employment status?

Farhan (56:30)
I am looking for work, full-time work. Right now I am just doing freelance stuff. I am open to freelance, contract, or a job; anything works for me.

Will (56:43)
Okay, well.

Carlton (56:43)
And I will just put a shout out here: having worked with Farhan over Google Summer of Code, and seen him in action since, he is a good find. You should snap him up.

Farhan (56:53)
Thank you.

Will (56:55)
Well, we're going to have links to everything. I guess if somebody does want to get in touch, is it email on your personal site, LinkedIn? What is the best way for someone to contact you?

Farhan (57:05)
My LinkedIn is fine. There is also my email on the LinkedIn, so both options are fine, LinkedIn or email.

Will (57:16)
OK. Well, I mean, we're basically out of time. I haven't asked you about static typing and your thoughts on whether in an LLM world everything is pushed that way. But I always like to ask... oh, Carlton, are we doing book reviews? We're not.

Carlton (57:35)
I don't know about a book; it's been the Christmas holidays, I've hardly read a thing. But I've got a project I can mention. One of Farhan's that we haven't mentioned is Django REPL, which is a full Django environment in the browser. It's amazing. Is it built on PyScript?

Will (57:44)
Okay, shout out. Go ahead, Carl.

yeah, yeah yeah.

Farhan (57:56)
It is TypeScript on the front end, and it uses WebAssembly, Pyodide, for the Python part.

Carlton (58:03)
But that's

Will (58:03)
Pyodide.

Carlton (58:06)
a super project, so check that out.

Will (58:10)
Well, yeah, I'll just shout out not a new book, but Gravity's Rainbow. I'm on a postmodern literature kick, so it's a little David Foster Wallace, a little Gravity's Rainbow. I can't say everyone's going to like it, but I like it. So shout out on that. Is there anything you want to mention? I mean, you have so many projects. Are there new ones, upcoming things in the works? Anything you want to

Carlton (58:23)
That's a bit highbrow.

Will (58:40)
Draw attention to.

Carlton (58:41)
or any project out there that's caught your eye recently.

Will (58:45)
Yours or someone else's.

Farhan (58:48)
My brain is blank. So I don't... I...

Carlton (58:51)
Okay, that's fine. We didn't prep you. We should have prepped you.

Will (58:51)
That's fair. I know.

We didn't prep you at all, I know, that's on me.

Farhan (59:01)
I am not a

very, book-wise, I am not a very book kind of person. I recently watched a movie called Frankenstein, the modern version of Frankenstein; that is a very good-looking movie. And I started to read, I have not fully read it, the book that I am trying to read is the Ghost of the Hungry Realm by Gabor Maté.

Will (59:14)
yeah, that's supposed to be great, right?

Farhan (59:30)
That is what I am reading. I have read about 100 pages or so. It's a non-fiction book.

Will (59:38)
Check that out.

Huh, you said Ghost of the Hungry Realm? What is that again?

Farhan (59:46)
It's... hungry ghost... In the Realm of Hungry Ghosts, that's the full name, In the Realm of Hungry Ghosts. I'm just confusing it. Gabor Maté.

Will (59:51)
Okay. Okay, Gabor Maté. Okay, yeah.

Okay, cool. I'll check that out. Yeah.

Carlton (1:00:00)
Okay, we'll put those in the show notes.

Will (1:00:05)
Carlton, take us away.

Carlton (1:00:06)
Okay, so Farhan, I've really, really enjoyed chatting with you and working with you, you know, over the last year. And I'm so excited for all the things we're doing in the Django ecosystem. So thanks for coming on the show. It's been super.

Farhan (1:00:12)
Thank you.

Thank you. It's been an honor. Thank you for inviting me.

Will (1:00:22)
Great.

So DjangoChat.com, we're on YouTube and we'll see everyone next time. Bye bye.

Carlton (1:00:29)
Bye bye.