Let's start from the very beginning, defining the different approaches by which code finally gets executed. In general, when a programming language is used, two approaches are possible: compilation or interpretation. Oh, and there is also a third one, a hybrid approach that sits in between the two. Let's take a look:
- Compilation: In the compilation process, an executable is produced from the whole source code at once, so from all the source code what you obtain is an executable that can be run as-is and that matches what the target machine needs. (Classic examples are C and C++. Java is often mentioned here, but since it compiles to bytecode that runs on the JVM, it's really an example of the hybrid approach described below.) The guy in charge of this job is the compiler (duh! of course he is!). In the image below, we can see the whole process in more depth...
As you can see, the compiler takes the whole program as input and produces the executable that is then run.
- Interpretation: On the other side, with interpretation we get, let's say, a "translation" of each instruction as we go, so the job is done statement by statement. That is, each instruction is read and immediately run by the interpreter (duh again, of course that's its name!). Examples of interpreted languages are shell scripts, like Bash, as well as JavaScript, PHP, and oh! also the most important dude in this article: Python.
This process is better illustrated here:
As you can see, each statement (stm for short) is processed and immediately executed.
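To make the statement-by-statement idea concrete, here is a tiny sketch written in Python itself: each string is compiled and executed one at a time, sharing one environment, exactly the way an interpreter walks through a program. (The statements are just made-up examples.)

```python
# A toy "interpreter loop": each statement is translated and
# executed immediately, one at a time, in a shared environment.
statements = [
    "x = 2",
    "y = x * 3",
    "print('y is', y)",
]

env = {}  # the interpreter's state: variables defined so far
for stm in statements:
    code = compile(stm, "<input>", "exec")  # translate this one statement
    exec(code, env)                         # ...and run it right away
```

By the time the third statement runs, the first two have already executed and left `x` and `y` behind in the environment, which is why it can print `y`.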
Ok, but there should be a third one, right? There is always something in between... and yes, there also exists a third approach that combines both of the previous ones and is used for languages with an intermediate representation. We can see it in the following image:
In this last one, compilation is done down to an intermediate language, and then an executor, which is nothing more than an interpreter like the one used in the interpretation process, runs that intermediate code. That's why we can say it is an approach in the middle: it has some features of compilation and some of interpretation.
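CPython itself follows this hybrid scheme: your source is first compiled to bytecode (the intermediate language), and the interpreter then executes that bytecode. The standard library's `dis` module lets you peek at the intermediate code; a small sketch (the `greet` function is just a made-up example):

```python
import dis

def greet(name):
    return "Hello, " + name

# The compile-to-bytecode step already happened when the function was
# defined; dis shows the instructions the interpreter will execute.
for ins in dis.get_instructions(greet):
    print(ins.offset, ins.opname, ins.argrepr)
```

The exact opcodes vary between Python versions, but you'll always see the shape of the intermediate program: load the constant, load the argument, add, return.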
Okay, but going back to the question: why is Python so cool? I mean, it is widely used in areas like Bioinformatics, Data Science, and so on, and that's for many reasons, for instance:
Readability: Python code is considerably well ordered, which helps a lot in making it easier for people to read (indentation isn't optional: your code won't even run unless everything is indented correctly!)
Code reuse: there are many modules available in Python for all kinds of purposes; that's the feature some call "batteries included".
High performance: plain Python is not as fast as C, but libraries that enable efficient computation, like NumPy, push the heavy work down into optimized native code and bring performance much closer.
Simplicity: this one is related to the first item in this list; the readability and the minimalistic character Python has make it easier to concentrate on the problem at hand instead of thinking about low-level concerns. (Like, for example, running out of memory or garbage collection... who thinks of that in Python?)
Easy to learn: another point for Python is that the learning curve is gentle, and you'll pick it up very fast! As a matter of fact, I think this is the reason it gets chosen in Bioinformatics: I had colleagues in Bioengineering who don't code all the time, and this language made it much easier for them to get started!
Free and open source: yet another one is that it's free! So you don't have to worry about licenses and things like that, and the same goes for many of the tools around it, like the Spyder IDE and package managers like Anaconda. (One cool feature where Spyder beats the commercial PyCharm: with Spyder you can search the documentation of your code right inside the IDE. How cool is that?)
Interpreted: this one ties back to what we discussed earlier. Since Python is interpreted, there's no separate compilation step, so the edit-and-run cycle is very fast, though of course actual execution time will also depend on the amount of data you're processing!
Great, so now that you have all these ideas, I hope this has been an eye-opener on what to expect from the language, and if you're considering learning it, I totally recommend it!
If you're having girl problems I feel bad for you son, I got 99 problems but Python ain't one!
Top comments (18)
People generally pay too much attention to "performance" anyway. In most cases nowadays things are getting more and more distributed, the machines they're running on are getting more powerful, and the performance-critical parts are handled by optimized libraries written closer to bare metal.
The core message here is "Python is fast enough for most things", most people who disagree just don't know what they're talking about and start spouting nonsense about the GIL being a performance killer or some such.
Also the fact that there are multiple Python implementations in addition to the reference implementation (CPython) means there's a wide array of performance characteristics you can get from Python.
Just switching to PyPy can often make your code run very close to C speeds with exactly no changes required. There are also other tools, like Cython, that allow you to keep working in Python while transforming the performance-critical parts of your application code (usually very small bits) into something faster.
Also, small tricks like converting your single-threaded sequential program into a multiprocessing/multithreading application are often super easy, and with the load distributed over multiple cores etc. you get easy access to speedups that would usually require much more work in e.g. the C/C++ world.
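As a sketch of that "small trick", here's a CPU-bound task (the prime-counting function is just a made-up example) farmed out to a pool of worker processes with the standard library's `multiprocessing` module:

```python
import multiprocessing

def count_primes(n):
    """Count primes below n -- deliberately CPU-bound."""
    count = 0
    for candidate in range(2, n):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [20_000, 20_000, 20_000, 20_000]
    # Same function as before, but each chunk now runs in its own
    # process, so the chunks can be crunched on separate cores.
    with multiprocessing.Pool() as pool:
        results = pool.map(count_primes, chunks)
    print(results)
```

The sequential version would be `[count_primes(c) for c in chunks]`; swapping that for `pool.map` is essentially the whole change, which is the point being made above.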
Personally I find the most important time resource for a technical team tends to be programmer time, and not CPU time - and this is where Python shines. You achieve greater things in less time than in pretty much any other language.
Saying that "Python is fast enough for most things" is one thing, claiming that "Python performance is close to C" is another. The former is an opinion, the latter is just not true: benchmarksgame-team.pages.debian.n... .
Sure, you can use PyPy or Cython to make your code run faster (geez, when have I ever wanted my code to run slower), but then you risk running your code on a less popular and less supported runtime, or sacrificing portability. Most Python developers run their code using the mainstream interpreter, and its performance is nowhere near that of C/C++ or even VM-based languages like C#/Java.
Now you can argue this won't be a problem for most programs and you can deal with it by scaling up your hardware, but you can't deny that faster languages (and there are many of them) require less resources which is beneficial especially if you run your programs on the cloud where every CPU cycle counts as money.
Yes, but that'd probably mean you're choosing the wrong tool for the job. For most of the stuff Python does, you're not going to need to resort to C/C++ as an alternative. However, there are certain applications that need a C/C++ level of performance that Python just won't be able to achieve.
My experience has been that people generally pay too little attention to performance. Sure, they love to use self-proclaimed high-performance software, but most programmers treat performance as an afterthought when writing code. Caring about performance is a good thing: it will not only result in faster code, but also make you a better programmer.
Some of what you say has a point. Yes, claiming Python is close to C in performance is untrue in most cases, the point is that it doesn't make much of a difference.
True to some extent, but PyPy is just a drop-in replacement with no special considerations to keep in mind in a lot of cases, and nobody is suggesting writing your whole codebase in Cython or the like, just the bits that are performance-critical.
I actually can deny that. Generally on the cloud, unless you're using something like Lambda, you pay for uptime, not for CPU cycles used. It doesn't matter much if your CPU is 90% idle, or 99% idle, you pay the same. When building applications you host in the cloud your performance constraints tend to be I/O related - how fast does some other service respond, how fast is the internet connection, how fast is the disk, etc. - instead of how fast is your CPU.
Add to that the fact that most of the time, the performance problems you can directly affect with your code are "algorithm" problems (as people like to call them), i.e. just doing things in a less-than-optimal way, rather than your actual programming language being slow. These problems are much more easily solved in a programmer-friendly language such as Python.
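A concrete example of such an "algorithm" problem (the data here is made up): checking membership against a list scans the whole list every time, while a one-line conversion to a set makes each lookup constant time on average. No language change needed, just a better data structure:

```python
items = list(range(50_000))
targets = [-1, 10, 49_999, 123_456]

def lookup_in_list(items, targets):
    # Each `in` scans the list from the start: O(n) per lookup.
    return [t for t in targets if t in items]

def lookup_in_set(items, targets):
    # One-time set build, then O(1) average per lookup.
    item_set = set(items)
    return [t for t in targets if t in item_set]

assert lookup_in_list(items, targets) == lookup_in_set(items, targets)
```

With millions of items and lookups, the list version is what makes the program feel "slow because it's Python", when really it's slow because of the chosen algorithm.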
Also, when people complain about CPU cycles costing in the cloud, they generally have not fully grasped the concepts of "budgeting", "scheduling", "good enough" and "the real world". In the real world, you tend to build applications that are good enough for your needs, you have a schedule you need to keep, and a limited amount of money to do that with.
If you spend your time and money trying to build your micro-optimized application in C, instead of building it in Python, you will find out that you have the same performance problems as you would've with Python, but you ran out of time and money to release and you go bankrupt.
Sure, there are areas where Python isn't the right tool, and there are areas where C isn't the right tool. Btw, MicroPython is a cool tool for embedded programming too.
That doesn't mean that if you're writing a tool that needs to process a few hundred million entries in the database you should write it in C because it's faster. What you SHOULD do is write it quickly in Python, realize you don't want to wait 24 hours to see the results, and slap on multiprocessing to parallelize the task to 24 processes and wait an hour instead.
You end up spending much less of your time solving the problem, and probably get a better, easier-to-understand, easier-to-refactor end result.
Now you're just trying some random strawman fallacy, nobody said you must not care about performance. Performance profiling and optimization is vital, if you have issues. If you have no issues, it's generally pointless.
Someone once said "premature optimization is the root of all evil", and that starts with your choice of language. Python is good enough for most purposes. There are purposes where it's not, and you need to think a bit to make sure you choose the right tool for the job in every case.
To me it's not just a matter of what personal preferences I have in a language, or their supposed performance characteristics in some artificial test suite published online, but a matter of practicalities.
Also as a personal note I do almost anything I can to avoid working with the archaic and damn hostile languages like C/C++. When I need more performance overall, I find Go, Rust, and the like are much friendlier to work with and deliver excellent performance with much less effort.
Great post overall. I think some of the other commenters are misguided to fixate on some of the performance elements. I think it's true that this part of the post may be a bit off, but overall the benefits of Python are well described.
Python continues to be one of the most powerful languages on a number of fronts. Every language has its haters, but Python is great.
I also agree. Their criticisms are valid (and I wholeheartedly agree on their points) but it could definitely have been worded better. Threads are getting heated up these past few days :/
Only if your program is not performance-critical. Having a GC means that you will pay the price for it. Careless allocation will put pressure on the GC, which will come back and hit you sooner or later.
One thing a lot of people forget is also that Python is not just one thing. There's CPython, the reference implementation, and e.g. PyPy when you just need your stuff to run faster, MicroPython for embedded programming, Stackless Python for other special needs, and so on.
There are several implementations that all give you the power of Python in slightly different packages, and you don't have to relearn everything when switching to a different use case.
Your argument is quickly devolving into random ramblings.
8GB sounds like you had plenty of RAM to give a gig to your IDE and browser to make your life more convenient and faster. If you feel like your browser is taking too much RAM, switch to another one or uninstall the plugins that are wasting your RAM. Not really relevant to this discussion.
How did you come to the "that makes life for most people less bearable" conclusion? If I write my service faster, and all you see is that it works and that you got a new feature sooner than if I had tried to write it doing premature, useless optimization, how did that make your life less bearable? And it sure didn't make my life less bearable; I can focus on writing the next feature instead of worrying about how to optimize the code.
This of course does not mean "never optimize anything", that's just a silly assumption. You optimize where it matters: when the CPU/RAM usage gets too high, you optimize. It simply seems that your definition of "too high" is a cloud castle that you yourself fail to accept is impossible to reach - again, for quite a large number of problems, CPU and RAM usage cannot both be optimized at the same time.
This thread is pointless for me to continue on, you're failing to bring relevant arguments to the table and ignoring every argument against your case.
Great post. I love Python, it's easily my favorite language. I started programming using PHP exclusively, and I still use PHP at work for database stuff. However, Python is my go-to language for most of my personal projects. I'm starting to learn C and learning how compilers work, and I can safely say that Python is the easiest to grasp.
Once again, great post!
Thanks for the post, but it's biased and incomplete, don't mislead young devs!
Python sucks in big projects, as do most dynamically typed languages. Python is more for prototypes, simple sites, and scripts, no more.
As proof, for example, Dropbox and Duolingo rewrote their sources from Python to Go and Scala.
Honestly I don't much care if my browser/IDE takes half a gig of RAM, even my laptop has 32GB. I'd rather buy RAM than waste programmer time, RAM is cheap, programmers are not.
(edit) Completely forgot to mention in my haste that you might not know that quite a lot of performance (read: time) optimization techniques depend on increased RAM usage - lookup maps, pre-caching, and various other things that reduce the CPU time necessary to calculate things.
Most of the time you can't have both, so pick the battle you want to argue about and stick to it. I'm sticking to programmer time.
Sooo if you have significant CPU & RAM constraints, like when working in the embedded world, pick a tool that works for it (e.g. MicroPython, ha).
Fear of an unknown future is not a great argument against solving today's problems; we shouldn't stop writing great programs quickly just because in the future we might not be able to simply buy more RAM.
If you honestly want everyone to write everything in assembler and optimize both CPU & RAM use (which, as I explained earlier, is impossible) for everything written, you can expect the software world to not move anywhere.
There aren't enough working hours in the world to do that.
Sensible people pick the battles that are important and focus on those: Some specific bit is taking too much CPU/RAM on Python? Write that bit as a C extension for Python, write the bits that aren't that critical in plain Python.
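A lightweight way to do exactly that, without writing a full C extension module, is `ctypes` from the standard library. This sketch (assuming a POSIX system where the C math library can be located) calls the C library's `sqrt` directly from Python; any hot function you've compiled into a shared library can be wired up the same way:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (POSIX; the name/path is
# platform-dependent, which is why we use find_library).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly:
#     double sqrt(double);
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

For bigger performance-critical bits, the same pattern applies: compile your own `.so`/`.dll` from C, load it with `ctypes.CDLL`, declare the signatures, and keep the rest of the program in plain Python.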