Honestly, it’s still ridiculous to me how slow Python, Java, JS, Ruby etc. continue to feel, even after decades of hardware improvements. You’d think their slowness would stop being relevant at some point, because processors and whatnot have become orders of magnitude faster, but you can still feel it quite clearly when something was implemented in one of them.
Many of these have C bindings for their libraries, which means that the slowness is caused by bad code (such as a for loop that makes a C call on every iteration instead of one call for the whole loop).
I am no coder, but it is my experience that bad code can be slow regardless of language used.
Bad code can certainly be part of it. The average skill level of those coding C/C++/Rust tends to be higher. And modern programs typically use hundreds of libraries, so even if your own code is immaculate, not all of your dependencies will be.
But there are other reasons, too:
Python, Java etc. execute their compiler/interpreter while the program is running.
CLIs are orders of magnitude slower, because these languages require a runtime to be launched before the actual CLI logic can execute.
GUIs and simulations stutter, because these languages use garbage collection for memory management, and GC pauses can kick in at any time.
And then just death by a thousand paper cuts. For example, when iterating over text, you can’t tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory (see the first sketch at the end of this comment).
And when working with multiple threads in Java, it is considered best practice to always clone basically anything you touch (see the second sketch below). Like, that’s good code, and its performance will be mediocre. Also, you’d better not even think about using multiple threads in Python or JS; for those two, parallelism was an afterthought.
Well, and then all of the above feeds back into all the libraries not being performant. There’s no chance to use the languages for performance-critical stuff, so no one bothers optimizing the libraries.
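To make the text-iteration point concrete, here’s a minimal Java sketch (class name and strings are made up for illustration): split() hands you a freshly allocated String per token, and the only way around the copies is manual index bookkeeping, because there is no borrowed string type to pass around.

```java
public class TokenCopies {
    public static void main(String[] args) {
        String text = "the quick brown fox";

        // Idiomatic iteration: split() allocates a brand-new String per token.
        for (String word : text.split(" ")) {
            System.out.println(word);
        }

        // Copy-free iteration is possible, but only with manual index
        // bookkeeping; substring()/subSequence() would copy again (since
        // Java 7u6, substring no longer shares the backing array).
        int start = 0;
        for (int i = 0; i <= text.length(); i++) {
            if (i == text.length() || text.charAt(i) == ' ') {
                for (int j = start; j < i; j++) {
                    System.out.print(text.charAt(j));
                }
                System.out.println();
                start = i + 1;
            }
        }
    }
}
```

And a minimal sketch of the defensive-copying style from the threading point, with a made-up DefensiveCopies class: every list crossing the thread boundary gets cloned, which is safe but costs an allocation and a full copy each time.

```java
import java.util.ArrayList;
import java.util.List;

// "Clone anything you share across threads": the object keeps its own copy
// of the input, and callers get their own copy of the result, so no mutable
// structure is ever shared between threads.
public class DefensiveCopies {
    private final List<String> items;

    public DefensiveCopies(List<String> input) {
        this.items = new ArrayList<>(input);   // copy in
    }

    public List<String> snapshot() {
        return new ArrayList<>(items);         // copy out
    }
}
```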
> For example, when iterating over text, you can’t tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory.
As someone used to embedded programming, this sounds horrific.
Yep. I used to code a lot in JVM languages, then started learning Rust. My initial reaction was “Why the hell does Rust have two string types?”.
Then I learned that it’s for distinguishing owned memory from a view into it, and what that means in practice. Since then I’m thinking “Why the hell do JVM languages not have two string types?”.
I’m not a Java programmer, but I think the equivalent to str would be char[]. However, the ergonomics Rust has for str aren’t there for char[], so Java devs probably use String everywhere.
Nope, the crucial difference between Java’s char[] and Rust’s &str is that the latter is always a pointer into an existing section of memory. When you create a char[], it allocates a new section of memory (and then you get a pointer to that).
One thing they might be able to do is optimize it in the JVM, akin to Rust’s Cow.
Basically, you could share the same section of memory between multiple String instances, and only when someone writes to their instance of that String do you copy it into new memory and make the modification there.
Java doesn’t have the mutability semantics Rust uses for this, but I guess with object encapsulation they could implement it manually whenever a potentially modifying method is called…?
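To illustrate the idea, here’s a rough sketch of what such a copy-on-write type could look like in Java. CowText and all of its method names are hypothetical, and a real implementation would additionally need reference counting and thread safety:

```java
// Hypothetical copy-on-write text type, loosely modeled on Rust's Cow.
// Not thread-safe; a production version would need reference counting
// (to know when a buffer becomes unshared again) and synchronization.
final class CowText {
    private char[] buffer;   // may be aliased by other CowText instances
    private boolean shared;  // conservatively true once the buffer is handed out

    CowText(String s) {
        this.buffer = s.toCharArray();
        this.shared = false;
    }

    private CowText(char[] aliased) {
        this.buffer = aliased;
        this.shared = true;
    }

    // Cheap "copy": both instances point at the same memory afterwards.
    CowText share() {
        this.shared = true;
        return new CowText(this.buffer);
    }

    // Reads never copy.
    char charAt(int i) {
        return buffer[i];
    }

    // Writes copy first, but only if the buffer might still be aliased.
    void setCharAt(int i, char c) {
        if (shared) {
            buffer = buffer.clone();
            shared = false;
        }
        buffer[i] = c;
    }

    @Override
    public String toString() {
        return new String(buffer);
    }
}
```

Usage would be something like CowText b = a.share(); nothing gets copied until one side actually calls setCharAt(...).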
At least with Java, it’s the over(ab)use of reflection and stuff like dependency injection that slows things down to a crawl.
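A rough illustration of the reflection overhead (not a rigorous benchmark: JIT warmup, argument boxing and the lack of inlining all blur together here, and the class and method names are made up):

```java
import java.lang.reflect.Method;

public class ReflectionCost {
    public static int twice(int x) {
        return x * 2;
    }

    public static void main(String[] args) throws Exception {
        Method m = ReflectionCost.class.getMethod("twice", int.class);
        int acc = 0;

        // Reflective calls: boxed arguments, access checks, no inlining.
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            acc += (int) m.invoke(null, i);
        }
        long t1 = System.nanoTime();

        // Direct calls: the JIT can inline these almost entirely away.
        for (int i = 0; i < 1_000_000; i++) {
            acc += twice(i);
        }
        long t2 = System.nanoTime();

        System.out.printf("reflective: %d ms, direct: %d ms (acc=%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, acc);
    }
}
```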
There are a few reasons for this, one of the most important being that speed is not just about processors becoming faster. This is a large part of why DSA (data structures and algorithms) is important to learn as a programmer.
They do have optimizations; however, they are interpreted at runtime, so they can only be so fast.
Frankly, you won’t notice much unless the program is doing something computation-heavy, which shouldn’t be done in languages such as JavaScript and Python anyway.
True, plus the bloated websites I see are using hundreds of thousands of lines of JavaScript. Why would you possibly need that much code? My full-fledged web games use under 10,000.
But but but virtual DOM /s
> “Slow”
They aren’t as fast as a native language, but they aren’t all that slow if you aren’t trying to use them for performance-sensitive applications. Modern machines run all of those very quickly, as CPUs are crazy fast.
Also, it seems weird to put Java/OpenJDK in the list, as in my experience it is in a category of its own.
Java is certainly the fastest of the bunch, but I still find it rather noticeable how long the startup of applications takes and how it always feels a bit laggy when used for graphical stuff.
Certainly possible to ignore that on a rational level, but that’s why I’m talking about how it feels.
I’m guessing this has to do with the basic UX principle of giving the user feedback. If I click a button, I want feedback that my click was accepted and that the triggered action completed. The sooner both happen, the more confident I feel about my input and the better everything feels.
I’ve never experienced that. Also, Android is OpenJDK-based, and Android applications work well; the system is well optimized.
Yep, I also don’t fully agree on that one. I’m typing this on a degoogled Android phone with quite a bit stronger hardware than the iPhone SE that my workplace provides: octa-core rather than hexa-core, 8 GB vs. 3 GB of RAM.
And yet, you guessed it, my Android phone feels quite a bit laggier. Scrolling on the screen has a noticeable delay. Typing on the touchscreen doesn’t feel great on the iPhone either, because the screen is tiny, but at least it doesn’t feel like I’m typing via SSH.
I’ve never experienced that, and I’m running a several-year-old phone.
I have experienced the delayed scrolling, mostly on cheaper phones.
But that’s mostly because I’m used to phones having 120+ Hz screens now; going back to a 60 Hz screen does feel a bit sluggish, which is especially noticeable on a phone, where you’re physically touching the thing. I think it might also have something to do with cheaper touch matrices, which may have a lower polling rate as well.
That has to be because the code is better optimized for the hardware in the iPhone’s case, and less about which language it was written in.
Why? I certainly expect that to be a factor, but I’ve gone through several generations of Android devices and I’ve never seen one without the GC-typical micro-stutters.
It is always a question of choosing the right tool for the right task. My core code is in C (but probably better structured than most C++ programs), and it needs to be this way. But I also do a lot of stuff in Perl. When I have to generate source code or smart-edit a file, it is faster and easier to do in Perl, especially if the execution time is so short that one would not notice a difference anyway.
Or the code that generates files for production: yes, a single run may take a minute (in the background), but it produces the files necessary for producing over 100k worth of goods. And the run is still faster than the surrounding process: getting the request from production, calculating the necessary parameters, then wrapping all the necessary files together with the results of the run into a reply to the production department.
This is because they are meant to make complex logic simpler. They don’t try to be faster by ditching that simplicity.
Java isn’t slow